In 2008, FDA inspected 153 foreign food facilities out of an estimated 189,000 such facilities registered with FDA and estimated that it would conduct 200 inspections in 2009 and 600 in 2010. In 2007, FDA inspected 95 facilities. Table 1 shows the number of FDA inspections of foreign food facilities, by country, from fiscal years 2001 through 2008. As the table shows, FDA conducted 1,186 inspections in 56 countries from fiscal years 2001 through 2008; the majority of FDA inspections were in Mexico, followed by Ecuador, Thailand, and Chile. FDA conducted a total of 46 inspections in China during this period. For fiscal year 2009, FDA allocated 272 full-time employees to examine imported food shipments at U.S. ports of entry and estimated a budget of approximately $93.1 million for field import activities. The total estimated 2009 FDA budget for all FDA products and programs, including food, drugs, medical devices, and other products, was $2.7 billion. In 2008, we testified that if FDA were to inspect each of the 189,000 registered foreign facilities—at the FDA Commissioner’s estimated cost of $16,700 per inspection—it would cost FDA approximately $3.2 billion to inspect all of these facilities once. Since November 2008, FDA has opened overseas offices to help prevent food that violates U.S. standards from reaching the United States. These offices are expected to provide FDA with direct access to information about foreign facilities’ food manufacturing practices so that its staff at U.S. ports of entry can make more informed decisions about which food imports to examine. For example, FDA’s overseas staff are working with staff at counterpart regulatory agencies overseas, as well as with other stakeholders who may be knowledgeable about certain industries. Overseas staff are also educating local exporters to make sure they understand U.S. food safety laws and regulations and FDA expectations. 
FDA opened offices in China (Beijing, Guangzhou, and Shanghai); in Europe (Brussels, London, and soon in Parma, Italy); in Latin America (San Jose, Costa Rica; Santiago, Chile; and Mexico City, Mexico); and in India (New Delhi and Mumbai). The FDA Middle East Office is operating out of FDA headquarters because the Department of State denied its request to locate in Amman, Jordan, due to security concerns. In addition to having overseas offices assist FDA’s oversight of imported food, the agency is developing PREDICT. PREDICT is intended to assist FDA’s oversight of imported food and uses FDA-developed criteria to estimate the risk of imported food shipments. These criteria are to incorporate, among other things, the violative histories of the product, importer, manufacturer, consignee, and country of origin; the results of laboratory analyses and foreign facility inspections; and general intelligence on recent world events—such as natural disasters, foreign recalls, and disease outbreaks—that may affect the safety of a particular imported food product. In addition, agency officials stated that PREDICT will assign higher risk scores to firms for which the system does not have historical data. PREDICT generates a numerical risk score for all FDA-regulated products. According to FDA, PREDICT is to present the shipment’s risk score to FDA reviewers if the score is above an FDA-specified threshold. Shipments that are below the threshold are to receive a system “may proceed” (cleared) message unless other conditions are present, such as an FDA import alert. FDA intends that reviewers using PREDICT will also be able to view the specific risk factors that contributed to the shipment’s risk score, such as whether the product or importer has a history of FDA violations. FDA expects reviewers to use PREDICT to supplement, rather than replace, their professional judgment when deciding what food products to inspect. 
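The threshold-based screening logic described above can be sketched in a few lines of code. This is an illustrative simplification, not FDA's actual system: the risk factors, weights, and threshold below are hypothetical, chosen only to show how a score above a specified threshold routes a shipment to a reviewer while a lower score yields a "may proceed" message unless other conditions, such as an import alert, are present.

```python
# Hypothetical sketch of threshold-based import screening in the style
# described for PREDICT. All criteria, weights, and the threshold value
# are illustrative assumptions, not FDA's actual parameters.

def screen_shipment(risk_factors, threshold=50, import_alert=False):
    """Return a routing decision for a shipment given per-factor risk scores.

    risk_factors maps a factor name (e.g., violative history of the product,
    importer, or manufacturer) to its contribution to the total risk score.
    """
    score = sum(risk_factors.values())
    if import_alert or score >= threshold:
        # Present the shipment, its score, and the contributing factors
        # (highest first) to a reviewer for possible examination.
        factors = sorted(risk_factors, key=risk_factors.get, reverse=True)
        return ("review", score, factors)
    # Below the threshold and no import alert: system "may proceed" message.
    return ("may proceed", score, [])

decision, score, top_factors = screen_shipment(
    {"violative_history": 30, "no_historical_data": 25, "country_risk": 10})
low_decision, low_score, _ = screen_shipment({"country_risk": 10})
```

In this sketch a firm with no historical data contributes a higher score, mirroring FDA's statement that PREDICT assigns higher risk scores when the system lacks history; reviewers would use the listed factors to supplement, not replace, their judgment.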
A 2007 pilot test of PREDICT in Los Angeles for seafood products indicated that the system could enhance FDA’s risk-based import screening efforts. When compared with baseline data from FDA’s existing import screening system, the Operational and Administrative System for Import Support (OASIS), PREDICT improved FDA’s ability to target imports that the agency considers to be high risk for further examinations and allowed a greater percentage of products the agency considers to be low risk to enter U.S. commerce without requiring a reviewer’s intervention. Specifically, PREDICT nearly doubled the percentage of field examinations—and increased by approximately one-third the percentage of laboratory examinations—that resulted in violations, relative to baseline OASIS data. In addition, according to FDA, the violations in shipments that reviewers targeted using PREDICT, on average, posed a greater risk to human health than the violations that OASIS detected. FDA told us on April 12, 2010, that PREDICT is fully operational in the Los Angeles and New York districts, but due to technical problems, FDA has not determined when the system will be deployed in the Seattle district. In addition, FDA officials stated that a scheduled nationwide rollout of PREDICT this summer has been delayed, primarily because of technical problems, such as server crashes and overloads, which are affecting FDA’s field data systems nationwide. Although the PREDICT pilot produced positive results and demonstrated the system’s potential to improve import screening efforts, we reported that further agency actions were needed to help ensure that the system is effective. For example, FDA had not yet developed a performance measurement plan to evaluate, among other things, PREDICT’s ability to identify high-risk shipments for manual review while simultaneously returning “may proceed” messages for low-risk shipments and enabling them to enter U.S. commerce. We recommended FDA develop such a plan. 
According to agency officials, since our report was issued in September 2009, FDA had completed a draft performance measurement plan. However, we have not reviewed this draft plan. We identified specific gaps in enforcement that could allow violative food products to enter U.S. commerce: (1) FDA’s limited authority to assess civil penalties on certain violators; (2) lack of unique identifiers for firms exporting FDA-regulated products; (3) lack of information sharing between agencies’ computer systems; and (4) FDA’s not sharing product distribution information during a recall. Importers can retain possession of their food shipments until FDA approves their release into U.S. commerce. However, FDA and CBP officials do not believe that CBP’s current bonding procedures for FDA-regulated food effectively deter importers from introducing violative food products into U.S. commerce. Specifically, importers post a monetary bond for formal entries (i.e., all shipments exceeding $2,000 and certain shipments valued below that amount) to provide assurance that these shipments meet U.S. requirements. According to these officials, many importers still consider the occasional payment of forfeited bonds as part of the cost of doing business. Indeed, as we reported in 1998, forfeiture of the shipment’s maximum bond value is often not sufficient to deter the sale of imported goods that FDA has not yet released. In its response to our September 2009 report, FDA agreed with this finding. According to FDA’s regulatory procedures manual, the bond penalty is intended to make the unauthorized distribution of articles unprofitable, but the liquidated damages incurred by importers are often so small that they, in effect, encourage future illegal distribution of imported shipments. Even though the bond may be up to three times the value of the shipment, for a large importer this sum may be negligible, especially when the importer successfully petitions CBP to reduce the amount. 
We recommended that the FDA Commissioner seek authority from Congress to assess civil penalties on firms and persons who violate FDA’s food safety laws and that the Commissioner determine which violations should be subject to this new FDA civil penalties authority, as well as the appropriate nature and magnitude of the penalties. FDA agreed with this recommendation and was working with Congress to include civil penalty authority in food safety legislation. FDA officials also told us that if the agency had the authority to impose civil penalties on importers, which is also provided for in H.R. 2749, FDA might be better able to deter violations. High-risk foods may enter U.S. commerce because the identification numbers that FDA uses to target manufacturers that have violated FDA standards in the past are not unique, and therefore these manufacturers, and their shipments, may evade FDA review. Importers generate a manufacturer identification number at the time of import, when, among other things, they electronically file entry information with CBP. (CBP is responsible for validating the manufacturer identification numbers and ensuring they are unique.) CBP electronically sends this information to FDA’s computer system. From this new manufacturer identification number, FDA’s computer system automatically creates an FDA firm identification number—called the FDA establishment identifier. Officials told us that a single firm may often have multiple CBP manufacturer identification numbers—and therefore multiple FDA establishment identifiers. FDA officials told us that because CBP has multiple identification numbers for many firms, FDA has an average of three “unique” identifiers per firm, and one firm had 75 identifiers. The creation of multiple identifiers can happen in a number of ways. 
For example, if an importer enters information about an establishment—such as its name—incorrectly at the time of filing with CBP, a new manufacturer identification number, and therefore a new FDA establishment identifier, could be created for an establishment that already has an FDA number. In this scenario, an importer may—intentionally or unintentionally—enter a firm’s name or address slightly differently from the way it is displayed in FDA’s computer system. This entry would lead to the creation of an additional FDA number for that firm. If an import alert was set using the original FDA establishment identifier, a shipment that should be subject to the import alert may be overlooked because the new number does not match the one identified in the alert. In addition, foreign facilities that manufacture, process, pack, or hold food for consumption in the United States are, with some exceptions, required to register with FDA. Upon registration, FDA assigns a registration number. FDA calculated that in 2008, 189,000 foreign firms were registered under this requirement. However, some of the firms included in that total may be duplicates because a facility may have been reregistered without the cancellation of the original registration; consequently, FDA may not know the precise number of foreign firms registered. As we previously reported, FDA officials told us they are working to address the unique identifier problem by establishing an interactive process in which FDA’s systems recognize when a product’s identifier does not match its manufacturer’s registration number. As we reported, FDA could consider requiring food manufacturers to use a unique identification number that FDA or a designated private sector firm provides at the time of import. However, the use of this unique number would necessitate collaboration with CBP, since importers would use such a number each time they file with CBP to ship goods to the United States. 
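The duplicate-identifier problem described above can be illustrated with a short sketch. This is not FDA's or CBP's actual system; the firm name, address, and hashing scheme are hypothetical. It shows how identifiers derived from raw importer-entered text diverge for the same firm when the entry varies slightly, and how normalizing the text before deriving an identifier collapses the variants.

```python
# Illustrative sketch (not FDA's or CBP's actual system) of how minor
# variations in importer-entered firm data yield distinct identifiers,
# and how normalization before derivation collapses them to one.
import hashlib

def raw_id(name, address):
    """Identifier derived directly from the text as entered."""
    return hashlib.sha256(f"{name}|{address}".encode()).hexdigest()[:10]

def normalized_id(name, address):
    """Identifier derived after case-folding, collapsing whitespace,
    and stripping punctuation."""
    canon = " ".join(f"{name} {address}".upper().split())
    canon = canon.replace(".", "").replace(",", "")
    return hashlib.sha256(canon.encode()).hexdigest()[:10]

# The same (hypothetical) firm, entered two slightly different ways.
a = ("Acme Foods Ltd.", "12 Harbor Rd")
b = ("ACME FOODS LTD", "12  Harbor Rd")

# raw_id(*a) != raw_id(*b): two "unique" identifiers for one firm,
# so an import alert keyed to one would miss shipments filed under the other.
# normalized_id(*a) == normalized_id(*b): one identifier after normalization.
```

Even this simple normalization would not catch every variant (e.g., "Rd" versus "Road"), which is one reason a single identifier assigned at registration, as the report discusses, is more robust than matching importer-entered text.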
That is, CBP’s computer system would need to be programmed to accept an FDA unique identification number. According to CBP officials, it is unknown if or when CBP’s system will have this capability. To improve FDA’s and CBP’s ability to identify foreign firms with violative histories, we recommended that the FDA Commissioner explore ways to improve the agency’s ability to identify foreign firms with a unique identifier and that the CBP Commissioner ensure that its computer system is able to accept a unique identification number for foreign firms that export FDA-regulated foods. Both FDA and CBP agreed with our recommendation, and CBP officials told us that the agency has developed a plan for implementing a unique identifier. However, we have not reviewed this plan. We observe that H.R. 2749 contains a provision that may allow the Secretary of Health and Human Services, in consultation with the Commissioner of CBP, to specify the unique numerical identifier system to be used, taking into account compatibility with CBP’s automated systems. Such actions would help prevent high-risk foods from entering U.S. commerce. When we issued our report in September 2009, we reported that CBP’s computer system did not notify FDA’s or FSIS’s systems when imported food shipments arrive at U.S. ports, which increases the risk that potentially unsafe food may enter U.S. commerce, particularly at truck ports. If FDA chooses to examine a shipment as part of its admissibility review, the agency notifies both CBP and the importer through its computer system, OASIS. However, once the shipment arrives at the port and clears CBP’s inspection process, the importer is not required to wait at the port for FDA to conduct its examination. Instead, the importer may choose to transport the shipment to the consignee’s warehouse or other facility within the United States. 
The importer might choose to do so because, for example, CBP and FDA do not have the same hours of operation at some ports, and FDA’s port office may be closed when the shipment arrives. In such cases, as a condition of the bond with CBP, the importer agrees to hold the shipment intact and not distribute any portion of it into U.S. commerce until FDA has examined it. CBP and FDA officials told us that, occasionally, an importer will transport the shipment to the consignee’s warehouse without first notifying FDA. If this occurs, FDA will not quickly know that the shipment has arrived and been transported to a U.S. warehouse because CBP’s computer system does not notify FDA’s OASIS computer system when the shipment arrives at the port. Instead, from the perspective of an FDA reviewer using OASIS, it will appear as if the shipment’s arrival is still pending. FDA port officials told us that it could be 2 or 3 days before FDA reviewers become suspicious and contact CBP to inquire about the shipment’s arrival status. By this time, an unscrupulous importer could have distributed the shipment’s contents into U.S. commerce without FDA’s approval. As we reported, if CBP communicated time-of-arrival information directly to OASIS, then FDA would be able to quickly identify shipments that are transported into the United States without agency notification and arrange to examine them before they are distributed to U.S. markets. Since our report was issued in September 2009, CBP told us that it had modified its software to notify FDA of a shipment’s time of arrival. However, we have not reviewed the effectiveness of these modifications, and we have not determined whether CBP has reached a similar time-of-arrival agreement with FSIS. 
One key issue of concern, according to officials we spoke with from several states, is that FDA does not always share with states certain distribution-related information, such as a recalling firm’s product distribution lists, which impedes the states’ efforts to quickly remove contaminated products from grocery stores and warehouses. According to one state official, because FDA does not provide this information, the state has to spend time tracking it down on its own. Public health may be at risk during the time it takes for the states to independently track distribution information when a product is found to be contaminated. FDA told us that it usually considers such information to be confidential commercial information, the disclosure of which is subject to statutory restrictions, such as the Trade Secrets Act. However, FDA’s regulations allow the agency to share confidential commercial information with state and local government officials in certain circumstances: for example, if the state has provided a written statement that it has the authority to protect the information from public disclosure and will not further disclose it without FDA’s permission and FDA has determined that disclosure would be in the interest of public health; if such sharing is necessary to effectuate a recall; or if the information is shared only with state and local officials who are duly commissioned to conduct examinations or investigations under the Federal Food, Drug, and Cosmetic Act. In certain circumstances, FDA may also seek a firm’s consent to disclose its market distribution information. In our past work, we have pointed out that mandatory recall—the authority to require a food company to recall a contaminated product—would help ensure that unsafe food does not remain in the food supply. 
We also reported that FDA should strengthen its oversight of food ingredients determined to be generally recognized as safe for their intended use and should seek additional authority from Congress if the agency deems it necessary. Likewise, we reported that FDA has identified a need for explicit authority from Congress to issue regulations to require preventive measures by firms producing foods that have been associated with repeated instances of serious health problems or death. We have reported that food recalls are largely voluntary and that federal agencies responsible for food safety, including FDA, have no authority to compel companies to recall contaminated foods, with the exception of FDA’s authority to require a recall for infant formula. FDA does have authority, through the courts, to seize, condemn, and destroy adulterated or misbranded food under its jurisdiction and to disseminate information about foods that are believed to present a danger to public health. However, government agencies that regulate the safety of other products, such as toys and automobile tires, have recall authority not available to FDA for food and have had to use their authority to ensure that recalls were conducted when companies did not cooperate. We have noted that limitations in FDA’s food recall authorities heighten the risk that unsafe food will remain in the food supply and have proposed that Congress consider giving FDA similar authorities. H.R. 
2749 authorizes the Secretary of Health and Human Services to request that a person recall an article of food if the Secretary has reason to believe it is adulterated, misbranded, or otherwise in violation of the Federal Food, Drug, and Cosmetic Act and to require a person to cease distribution if the Secretary has reason to believe the article of food “may cause serious adverse health consequences or death to humans or animals.” It also requires the Secretary to order a recall of such an article of food if the Secretary determines (after an informal hearing opportunity) it is necessary. Finally, it authorizes the Secretary to proceed directly to a mandatory recall order if the Secretary has credible evidence that an article of food subject to an order to cease distribution presents an imminent threat of serious adverse health consequences or death to humans or animals. As our previous work has shown, mandatory recall authority would allow FDA to ensure that unsafe food does not remain in the food supply. We have reported that FDA should strengthen its oversight of food ingredients determined to be generally recognized as safe (GRAS) for their intended use. Manufacturers add these substances—hundreds of spices and artificial flavors, emulsifiers and binders, vitamins and minerals, and preservatives—to enhance a food’s taste, texture, nutritional content, or shelf life. Currently, companies may conclude a substance is GRAS without FDA’s approval or knowledge. We reported that FDA only reviews those GRAS determinations that companies submit to the agency’s voluntary notification program. The agency generally does not have information about other GRAS determinations companies have made because companies are not required to inform FDA of them. 
Among other things, we recommended to FDA that it develop a strategy to require any company that conducts a GRAS determination to provide the agency with basic information about this determination, and to incorporate such information into its public Web site. We also reported that FDA is not systematically ensuring the continued safety of current GRAS substances. According to FDA regulations, the GRAS status of a substance must be reconsidered as new scientific information emerges, but the agency has not systematically reconsidered GRAS substances since the 1980s. Rather, FDA officials said, they keep up with new developments in the scientific literature and, on a case-by-case basis, information brought to the agency’s attention could prompt them to reconsider the safety of a GRAS substance. We recommended that FDA develop a strategy to conduct reconsiderations of the safety of GRAS substances in a more systematic manner. We also recommended that, if FDA determines that it does not have the authority to implement one or more of our recommendations, the agency should seek the authority from Congress. FDA generally agreed with the report’s findings and recommendations. In addition, we reported that FDA has taken steps to make information about its GRAS notification program available to the public by posting its inventory of all GRAS notices FDA has received on its Web site. By placing information about the GRAS notice and its response on its Web site, FDA enhances the ability of Congress, stakeholders, and the general public to be better informed about GRAS substances. H.R. 2749 contains provisions on GRAS substances, including a requirement that the Secretary post on FDA’s Web site information about GRAS notices submitted to FDA within 60 days of receipt of the notice. We have also reported that FDA should strengthen its oversight of fresh produce. 
For example, we noted that FDA has identified a need for explicit authority from Congress to issue regulations requiring preventive controls (risk-based safety regulations) by firms producing foods that have been associated with repeated instances of serious health problems or death. FDA already has preventive regulations for seafood and juice, which require firms to analyze safety hazards and implement plans to address those hazards. According to FDA, such authority would strengthen the agency’s ability to implement risk-based processes to reduce illnesses from high-risk foods. FDA officials told us that issuing preventive regulations may be one of the most important things they can do to enhance their oversight of fresh produce. We therefore recommended that the Commissioner of FDA seek authority from Congress to make explicit FDA’s authority to adopt preventive controls for high-risk foods. FDA agreed with this recommendation and has sought authority to issue additional preventive controls for high-risk foods. Furthermore, H.R. 2749 requires FDA to create preventive controls for produce and certain raw agricultural commodities. Such measures could help the agency reduce illnesses from these high-risk foods. In conclusion, food imported from around the world constitutes a substantial and increasing share of the U.S. food supply. Our work has shown that FDA could strengthen its oversight of imported food by improving its enforcement, such as by assessing civil penalties and providing unique identification numbers to firms. Additional statutory authorities, such as mandatory recall authority, could also help FDA oversee food safety. FDA generally agreed with our recommendations and has taken some actions to address them. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other Members of this Subcommittee may have. For further information about this testimony, please contact Lisa Shames at (202) 512-3841 or shamesl@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement were José Alfredo Gómez, Assistant Director; Kevin Bray; Candace Carpenter; Anne Johnson; Carol Herrnstadt Shulman; Nico Sloss; and Rebecca Yurman. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Food imported from around the world constitutes a substantial and increasing percentage of the U.S. food supply. Ensuring the safety of imported food challenges the Food and Drug Administration (FDA) to better target its resources on the foods posing the greatest risks to public health and to coordinate efforts with the Department of Homeland Security's Customs and Border Protection (CBP) so that unsafe food does not enter U.S. commerce. This testimony focuses on (1) FDA's overseas inspections, (2) identified gaps in agencies' enforcement efforts to ensure the safety of imported food, and (3) statutory authorities that GAO has identified that could help FDA's oversight of food safety. This testimony is principally based on GAO's September 2009 report, Food Safety: Agencies Need to Address Gaps in Enforcement and Collaboration to Enhance Safety of Imported Food (GAO-09-873) and has been updated with information from FDA. While the number of FDA overseas inspections has fluctuated, FDA has opened up several overseas offices to address the safety of imported food at the point of origin, and is testing a computer-based system to target high-risk imports for additional inspection when they arrive at ports of entry. Specifically, in 2008, FDA inspected 153 foreign food facilities out of an estimated 189,000 such facilities registered with FDA; in 2007, FDA inspected 95 facilities. FDA estimated that it would conduct 200 inspections in 2009 and 600 in 2010. In addition, FDA opened offices in China, Costa Rica, and India and expects to open offices in Mexico and Chile and to post staff at European Union agencies. Furthermore, FDA's testing of a new computer screening system--the Predictive Risk-Based Evaluation for Dynamic Import Compliance Targeting (PREDICT)--indicates that the system could enhance FDA's risk-based screening efforts at ports of entry, but the system is not yet fully operational. 
PREDICT is to generate a numerical risk score for all FDA-regulated products by analyzing importers’ shipment information using sets of FDA-developed risk criteria and to target for inspection products that have a high risk score. GAO previously identified several gaps in enforcement that could allow food products that violate safety laws to enter U.S. commerce. For example, FDA has limited authority to assess penalties on importers who introduce such food products, and the lack of a unique identifier for firms exporting food products may allow contaminated food to evade FDA’s review. In addition, FDA’s and CBP’s computer systems do not share information. FDA does not always share certain distribution-related information, such as a recalling firm’s product distribution lists, with states, which impedes states’ efforts to quickly remove contaminated products from grocery stores and warehouses. GAO identified certain statutory authorities that could help FDA in its oversight of food safety. Specifically, GAO previously reported that FDA currently lacks mandatory recall authority for companies that do not voluntarily recall food products identified as unsafe. Limitations in FDA’s food recall authorities heighten the risk that unsafe food will remain in the food supply. In addition, under current FDA regulations, companies may conclude a food ingredient is generally recognized as safe without FDA’s approval or knowledge. GAO recommended that if FDA determines that it does not have the authority to implement one or more recommendations, the agency should seek the authority from Congress. Finally, GAO reported that FDA has identified a need for explicit authority from Congress to issue regulations requiring preventive controls by firms producing foods that have been associated with repeated instances of serious health problems or death. 
FDA already has preventive regulations for seafood and juice, which require firms to analyze safety hazards and implement plans to address those hazards.
According to recent studies on the adoption of health information technology, most health care providers in the United States still use paper health records to store, maintain, and share patients’ information. Sharing this information among multiple providers treating the same patient requires transferring paper documents by mail, fax, or hand delivery. In addition to being slow and cumbersome, these methods of transferring health information can result in loss or late delivery of the information, which may require the requesting provider to conduct duplicate tests, or may contribute to medical errors due to the lack of information needed to properly treat the patient. Additionally, the physical delivery of paper health records typically does not provide an effective means for securing the information while it is being transferred, nor does it provide ways to identify who accesses the records or discloses the information contained in the records. In part to address these types of deficiencies, health care providers have been adopting electronic health information systems to record, store, and maintain patients’ information. As more providers have adopted and used these systems, additional capabilities have been developed and implemented, including the ability to electronically share patients’ information from a provider directly to other providers, or among providers participating in an HIE. HIEs facilitate the sharing of electronic health information by providing the services and technology that allow providers (such as physicians, hospitals, laboratories, and public health departments) to request and receive information about patients from other providers’ records. For example, when a provider requests information through the exchange, the HIE identifies the source of the requested data, then initiates the electronic transmission that delivers the data from the provider that is disclosing the patient’s information to the provider that requested the information. 
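The request-and-route flow just described can be sketched in code. This is a deliberately simplified illustration, not any actual HIE implementation: the class, the single-holder record index, and the provider and record names below are all hypothetical. It shows only the core brokering step, in which the exchange identifies which participating provider holds the requested data and routes it to the requester.

```python
# Hypothetical sketch of an HIE brokering a record request between
# providers. The index, provider names, and record contents are
# illustrative assumptions; a real exchange would add authentication,
# audit logging, and support for records held by multiple providers.

class HealthInformationExchange:
    def __init__(self):
        # patient_id -> (provider holding the record, the record itself)
        self.records = {}

    def register(self, provider, patient_id, record):
        """A participating provider makes a patient's record discoverable."""
        self.records[patient_id] = (provider, record)

    def request(self, requesting_provider, patient_id):
        """Identify the source of the requested data and deliver it
        from the disclosing provider to the requesting provider."""
        if patient_id not in self.records:
            return None
        source, record = self.records[patient_id]
        return {"from": source, "to": requesting_provider, "record": record}

hie = HealthInformationExchange()
hie.register("Community Hospital", "P001", {"labs": ["A1c: 6.1%"]})
result = hie.request("Dr. Lee's Clinic", "P001")
```

The design point this sketch captures is that the exchange itself brokers discovery and transmission, so the requesting provider does not need to know in advance which provider holds the patient's data.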
A simplified model of this exchange activity is shown in figure 1. Research indicates that most HIEs were formed to share information among health care providers and organizations within a geographic area (e.g., metropolitan area, state, region, or nation). However, others were designed for unique purposes, such as to collect and share information about participants involved in a state Medicaid program or to aggregate information about patients within a community, state, or region in support of efforts to improve the health of a population. The ways that HIEs are established and managed vary. Some are established by state governments, while others are established by private organizations. They may be managed by public-private partnerships or other organizations that were created to promote collaboration among health care providers. Efforts in the United States to establish organizations that facilitate the sharing of electronic health information among providers began in the early 1990s. These organizations, called Community Health Information Networks, evolved into Regional Health Information Organizations throughout the early to mid-2000s. Since then, there has been a steady increase in the number of HIEs that are fully operational and actively facilitate the electronic sharing of patients’ health information. In 2007, the eHealth Initiative reported that, in a nationwide survey, it had identified 32 operational HIEs. One year later, its survey identified 42 operational exchanges, representing a 31 percent increase. Then, in 2009, the survey identified 57 operational exchanges, a nearly 36 percent increase from 2008. Most of the nearly 150 exchanges that responded to the eHealth Initiative’s 2009 survey reported that they were not yet engaged in the electronic exchange of health information but were involved in activities such as defining a business plan, identifying participants’ information requirements, and securing funding. 
Others responded that they were in the process of defining and implementing technical, financial, and legal procedures. The 57 operational HIEs that responded to the survey reported that they support a variety of information-sharing services for their participating providers. The most common services included the delivery of laboratory and test results and clinical documentation, electronic health records, electronic prescribing, and alerts about critical conditions, such as adverse drug interactions. Other services included data sharing for public health purposes, such as for tracking and managing childhood immunizations, and for reporting health care quality measures to participating providers. The United States and several other countries base privacy laws and policies on practices for protecting personal information, including health information. While there is no single federal law in the United States that defines requirements for protecting electronic personal health information from inappropriate use or disclosure, there are a number of separate laws and policies that provide privacy and security protections for information used for specific purposes or maintained by specific entities. Further, some states impose additional restrictions on the use and disclosure of personal health information through state laws and regulations, while others do not define restrictions beyond those imposed by federal rules. Privacy experts refer to a set of basic principles, known as Fair Information Practices, as a framework for protecting personally identifiable information such as personal health information. These practices were first proposed in 1973 by a U.S. government advisory committee. The practices provided the basis for subsequent laws and policies in the United States and other countries. 
While there are different versions of Fair Information Practices, their core elements are reflected in the privacy and security regulations promulgated under the Health Insurance Portability and Accountability Act of 1996 (HIPAA), and in the seven key practices that we addressed with the case study HIEs and providers. They are described in table 1. The HIPAA Privacy and Security Rules define the circumstances under which personal health information may be disclosed by covered entities to other entities, such as providers, patients, health plans (insurers), and public health authorities. The HIPAA Privacy Rule places certain limitations on when and how covered entities may use and disclose personal health information. However, the Privacy Rule permits the use or disclosure of personal health information for treatment, payment, and other health care operations. To ensure that this information is reasonably protected from unauthorized access, the HIPAA Security Rule specifies a series of administrative, technical, and physical security practices for providers and plans to implement to ensure the confidentiality of electronic health information. The HITECH Act includes a series of privacy and security provisions that expand certain provisions under HIPAA. Although final regulations for implementing these provisions remain under development, HITECH requires certain entities that were not initially covered by HIPAA, which may include health information exchanges, to meet the requirements defined in the HIPAA privacy and security rules. Further, if certain conditions are met, the act may limit the disclosure of information to health plans (insurers) upon patient request. HITECH also provides an individual with a right to receive an accounting of disclosures of patient information (for the purposes of treatment, payment, and health care operations) from covered entities using electronic health records. 
The four exchanges in our case studies reported that they implement various practices to ensure appropriate disclosure of electronic personal health information for treatment purposes. The 18 case study providers that participate in these exchanges also described practices they implement for disclosing patients’ personal health information when the information was shared through an HIE or directly with other providers. Some of the providers reported that they inform patients that their electronic personal health information may be shared through a health information exchange. The practices reported by the HIEs and providers reflect the seven Fair Information Practices that we described. In all cases, the providers we studied stated that their participation in an HIE did not require them to change their established practices for disclosing and safeguarding patients’ personal health information. While providers take responsibility for implementing the three practices that involve direct contact with patients (providing information about and obtaining consent for use and disclosure, and making corrections to personal health information), both providers and exchanges share responsibility for implementing the other four practices. For example, the 18 case study providers inform their patients about the use of personal health information by giving notices of privacy practices and by other means. They also obtain patients’ consent to disclose health information for purposes of treatment, payment, and health care operations. Additionally, providers stated that they implement practices to facilitate patients’ ability to access and request corrections to personal health information, and all the HIEs and providers described practices intended to limit the disclosure and use of information to specific purposes. They also described practices they implement that are intended to address security, data quality, and accountability for protecting electronic personal health information. 
Detailed information about the privacy practices identified by our case study organizations is included in appendix II. All of the 18 providers in our case studies inform their patients of their overall privacy practices by giving them a notice in paper form, and 13 of them post a copy of the notice on their Web site. These notices state that the provider intends to use and disclose patients’ personal health information for treatment purposes, explain the provider’s commitment to protect that information and how it intends to do so, and inform patients about their right to take action if they believe their privacy has been violated. Six of the case study providers stated that they inform patients by various methods that personal health information will be shared with other providers through an HIE. Three of the six providers include this information in the paper privacy notices that they give to patients, and three providers use other methods to inform patients of their participation in an HIE. For example, two providers inform patients by displaying materials, such as posters, in the waiting room. Another informs patients that their information will be shared electronically when obtaining consent to disclose information for treatment purposes. As with giving notice, all case study providers that treat patients stated that they obtain and document patients’ consent for disclosure, most often doing so by having patients sign paper forms that include language authorizing the disclosure of personal health information for treatment, payment, and health care operations. However, the 15 providers that send patients’ information to other providers through an HIE described varying approaches for obtaining patients’ consent. For example, 14 of the providers rely on patients’ general consent; 8 of the 14 do not give patients an option to exclude their health information from the HIE. 
The other six of these 14 providers assume patients are willing to have their information shared through an exchange unless consent is explicitly denied. The remaining one of the 15 providers actively seeks patient consent to share information through the exchange before it allows such sharing to take place. The practice of obtaining patients’ consent for sharing their information through an HIE is intended to help ensure that patients are aware of how and with whom their information is being shared. None of the case study providers had implemented electronic means for obtaining patients’ consent for disclosure. However, one HIE had developed an electronic tool that its providers use to record patients’ consent preferences that are obtained by other means. Allowing patients to review their personal health information and request corrections to their records helps ensure that patients have a way to verify the accuracy and integrity of their personal health information. Seventeen of the 18 providers in our case studies reported that they require patients to request access to their information in writing and to then view or obtain their information in person. In most cases, providers require patients to submit a written request for a correction. The correction is included in the patients’ records after a doctor determines that it is appropriate. One provider allowed patients to use a Web portal to view the demographic and medical information in their files and to request changes to that information. Once a correction to a patient’s record has been made by a provider, it may be difficult to ensure that the same correction is made in the records of other providers with whom the patients’ information has been previously shared through an HIE. While the case study exchanges are not directly involved with patients’ requests, two reported that they help providers remain up-to-date with patients’ corrected records. 
For example, these exchanges stated that they generate reports that identify where patient information has been shared. Providers can use this information to notify other providers about corrections and better ensure that the patient’s information remains consistent and up-to-date with all providers. The four HIEs and 18 providers also described steps they take to limit the use of personal health information to specific purposes. All of the exchanges and providers reported that they limit disclosures by implementing role-based access controls through their systems. For example, HIEs and providers generally grant individuals involved in treating patients, such as physicians and nurses, access to all patient information, while those whose roles are limited to administrative functions (e.g., scheduling appointments) are provided access only to information relevant to those functions, such as patient demographics. Further, two of the exchanges limit the amount and types of patient information shared with their participating providers to certain types of data, such as those specified in standard continuity of care documents or in summary reports defined by the HIEs and their participating providers. Fifteen providers stated that when they receive requests for a patient’s information directly from other providers (that do not participate in the HIE), they examine requests on a case-by-case basis. Based on the examinations, these providers limit disclosure to the data they determine are appropriate to address the purpose of each request. By taking these steps, the case study exchanges and providers intend to limit sharing of electronic personal health information to specific purposes and to protect this information from inappropriate use and disclosure. 
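The role-based access controls described above can be sketched in a few lines. This is an illustrative sketch only: the role names, data categories, and the `accessible_fields` helper are assumptions made for the example, not the configuration of any case study exchange.

```python
# Hypothetical role-to-data-category mapping, mirroring the report's
# description: clinical roles may view full patient records, while
# administrative roles see only the demographics needed for scheduling.
ROLE_PERMISSIONS = {
    "physician": {"demographics", "medications", "lab_results", "clinical_notes"},
    "nurse": {"demographics", "medications", "lab_results", "clinical_notes"},
    "scheduler": {"demographics"},
}

def accessible_fields(role, record):
    """Return only the portions of a patient record that the role may view."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return {field: value for field, value in record.items() if field in allowed}

record = {
    "demographics": {"name": "Jane Doe", "dob": "1980-01-15"},
    "medications": ["lisinopril"],
    "lab_results": ["HbA1c: 6.1%"],
}
print(sorted(accessible_fields("scheduler", record)))  # ['demographics']
print(sorted(accessible_fields("physician", record)))  # ['demographics', 'lab_results', 'medications']
```

Note the default-deny behavior: a role absent from the mapping is granted access to nothing, which matches the intent of limiting disclosure to specific purposes.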
While HIEs and providers described ways that they limit the disclosure of information in ordinary circumstances, three of the exchanges also reported that they have provisions for allowing special access to electronic information in emergency situations. All the exchanges in our study allow authorized emergency department physicians full access to data for patients they treat. One also allows providers broader access to patient information for some non-emergency situations, such as when obtaining historical information about new patients. In those cases, users are able to access data on any patient by providing a justification for the need to access the information. By allowing access to the electronic information about patients that they have stored in their health information systems, the HIEs support the providers’ ability to provide care to new patients and to patients in emergency situations. HIEs and providers said that they limit disclosure of patient information for uses other than treatment—i.e., secondary uses—to the purposes allowed by the HIPAA Privacy Rule. Specifically, the rule allows reporting de-identified health data to public health agencies for purposes such as disease tracking and sharing health information with medical research facilities. However, representatives from one case study HIE described an additional secondary use of the personal health information for a quality improvement program that it conducts. This exchange analyzes participating providers’ overall performance based on specific indicators (e.g., performing mammograms, screening for diabetes, and providing well checks for children and infants) and compares their performance to that of other providers that treat similar patient populations. By showing providers how they compare to their peers in providing chronic care treatment and preventive care to patients, these reports encourage providers to match their performance with that of their peers. 
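The emergency-access provisions described above are commonly called "break-the-glass" access. The sketch below illustrates the general pattern under stated assumptions; the function name, roles, and log format are hypothetical and not taken from the case study systems.

```python
from datetime import datetime, timezone

# Hypothetical audit trail: the report notes that users granted broader
# access must supply a justification, which can later be reviewed.
emergency_access_log = []

def authorize_access(role, treating_patient, justification=""):
    """Decide whether a user may view a patient's record.

    Ordinary rule: access requires an active treatment relationship.
    Exception 1: authorized emergency department physicians get full access.
    Exception 2: other users may proceed by recording a justification,
    which is logged for later review.
    """
    if treating_patient:
        return True
    if role == "emergency_physician":
        return True
    if justification:
        emergency_access_log.append(
            (datetime.now(timezone.utc).isoformat(), role, justification))
        return True
    return False

print(authorize_access("emergency_physician", treating_patient=False))  # True
print(authorize_access("physician", treating_patient=False))            # False
print(authorize_access("physician", False, "history for new patient"))  # True
```

The design trade-off this illustrates is that availability in emergencies is preserved while accountability is shifted to after-the-fact log review rather than up-front denial.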
In addition to the steps they take to limit the disclosure of personal health information, HIEs and providers described practices they implement for securing patients’ electronic personal health information against misuse and inappropriate disclosure. These practices include mechanisms intended to limit access to health information systems and patients’ data that are stored in these systems, and to secure data during transmission. All the case study HIEs and providers reported that they register and approve users before they are allowed access to their systems (i.e., the HIEs’ information systems and the providers’ own internal health information systems). They require users to log in to the systems with unique user names and passwords that were established during registration. In addition, two of the exchanges and five providers reported that they take more rigorous steps for verifying users’ identities. For example, one exchange implemented a two-stage login process that requires users to identify pictures that they select during registration in addition to confirming the user’s name and password. In two cases, providers’ systems require the entry of an additional code generated by a security token before allowing users to log in from a remote location (i.e., a location other than the place of employment). HIEs also described additional steps they take to restrict access to patients’ personal health information. For example, one requires providers to enter patient identification information when requesting data from other providers’ records; this practice is intended to restrict providers’ access to data about patients they are treating at the time of the request. Another limits the time period for which a provider can access a patient’s information—i.e., providers can only access information for a 90-day treatment period. One of the exchanges described a role-based method it had implemented for restricting access to system data. 
In this case, the system requires and verifies additional information about the requester before allowing access to certain data stored in the system. By restricting access to the systems in which patient information is stored and to only the information needed by providers for treating a patient, HIEs intend to protect the personal health information that they maintain in their systems from access by unauthorized individuals. In addition to access control mechanisms, HIEs and their participating providers reported that they implement a combination of practices to ensure that the data they store are secure. The HIEs in our case studies reported that they intend to store all electronic patient data indefinitely to accommodate legal requirements and varying data retention requirements. Most HIEs stated that they store detailed personal health information on patients, although the types and amount of stored data vary. For example, two of the exchanges store all patient health information that is sent from participating providers. Representatives of two other exchanges reported that they do not provide a repository for personal health information but retain limited information, such as (1) demographic information used to identify the patient, including the patient’s name and date of birth; (2) identifying information used to locate patients’ records when users search for data; and (3) data for maintaining an audit trail of access and use of patients’ personal health information. These HIEs described technical safeguards they implement for protecting the data that they store, such as the use of virtual private networks, firewalls, and intrusion detection systems. Providers reported that they implement similar security mechanisms to protect the electronic personal health information that they store. 
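The security-token codes used in the remote logins described above are typically produced by a one-time-password algorithm. As one plausible mechanism, the sketch below implements the event-based HOTP algorithm from RFC 4226; the report does not say which algorithm the providers' tokens actually use, so this is illustrative rather than a description of any case study system.

```python
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    """Event-based one-time password (RFC 4226): the server and the user's
    hardware token share a secret and a counter, and a login succeeds only
    if the code the user submits matches the server's own computation."""
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 appendix D test vectors for this shared secret.
secret = b"12345678901234567890"
print(hotp(secret, 0))  # 755224
print(hotp(secret, 1))  # 287082
```

Because the code changes with every use, a password intercepted in transit cannot be replayed, which is the property that makes the token a meaningful second factor.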
Additionally, all the exchanges and 10 of the providers reported that they implement practices for securing personal health information that is transmitted electronically to an HIE or other providers. They stated that the data that they share electronically are encrypted prior to the data being transmitted. The implementation of these mechanisms is intended to prevent unauthorized individuals from accessing data being stored or transmitted for misuse, such as exploitation of confidential information for monetary gain or health identity theft. To ensure that the information they share about patients is accurate and complete, the HIEs and providers stated that they conduct testing and other activities to verify the quality of their data. Specifically, all the exchanges stated that they perform data quality testing prior to incorporating providers’ data into their systems. This testing entails the use of automated tools to verify that patients and data are matched accurately, along with manual reviews of data performed by personnel within the HIE. However, all of the exchanges generally rely on their participating providers to ensure the accuracy and completeness of patients’ personal health information; they stated that their responsibility is limited to maintaining the quality of the data as it is received from and transmitted to providers. In addition, providers described practices for ensuring the quality of patient data that they maintain in their own health information systems and share through HIEs. For example, 13 of the providers told us that they conduct manual or automated data review processes similar to those described by the exchanges. 
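The automated patient-matching checks mentioned above can be illustrated with a simple deterministic match on normalized demographics. This is a sketch under assumptions: real exchanges typically use more elaborate (often probabilistic) matching, and the field names here are invented for the example.

```python
import unicodedata

def normalize(value):
    """Normalize a demographic field so that case, accent, and spacing
    differences between source systems do not prevent a match."""
    decomposed = unicodedata.normalize("NFKD", value)
    stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
    return " ".join(stripped.lower().split())

def same_patient(rec_a, rec_b):
    """Hypothetical deterministic match on full name and date of birth."""
    return (normalize(rec_a["name"]) == normalize(rec_b["name"])
            and rec_a["dob"] == rec_b["dob"])

hospital_record = {"name": "  Ana  MARTÍNEZ", "dob": "1975-03-02"}
lab_record = {"name": "ana martinez", "dob": "1975-03-02"}
print(same_patient(hospital_record, lab_record))  # True
```

Normalizing before comparison reflects the purpose of the testing the exchanges described: ensuring that records arriving from different providers are attached to the correct patient.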
By reviewing and testing data prior to integrating patients’ health information into electronic information exchange systems, the case study HIEs and providers are taking steps to ensure that the providers with which they exchange electronic personal health information receive accurate and complete data about their patients. The HIEs in our study described steps they take to hold individuals accountable for protecting patients’ personal health information. All four exchanges stated that policies and procedures for the appropriate disclosure of health information and consequences for improper use of personal health information are included in agreements that HIEs establish with their participating providers prior to initiating health information sharing activities. Specifically, exchanges described potential consequences for misuse of data by their participating providers and their employees, including suspending system access, terminating employment, and prosecuting criminal activities. All the case study exchanges stated that they maintain system access logs, which are reviewed periodically to identify inappropriate use or disclosure of data. Further, one exchange reported that its security officer performs reviews of providers’ internal security and privacy policies and procedures to ensure that minimum protections are in place, such as mechanisms for obtaining patients’ consent to share information, and that practices meet legal requirements. Although the exchanges stated that they had not conducted formal studies of the effects of electronic sharing of personal health information on the quality of care their providers deliver, three of the HIEs reported examples of positive effects resulting from the services they provide. One of the exchanges reported that it provides alerts and reminders to participating providers regarding the health care of their patients, which can result in more timely interventions. 
An official from this exchange described one instance whereby a physician was reminded that seven of his patients needed colonoscopies based on exchange alerts he received for each patient. Because of these alerts, the physician notified the patients, and they received this procedure. Results of the tests identified important clinical information about three of the patients, and they were able to begin treatment. Two of these three exchanges provide a direct connection from participating hospitals to the state’s Department of Public Health for real-time reporting of conditions and for supporting the early detection of disease outbreaks. According to an official with one of these exchanges, this service facilitated the state’s ability to obtain information about cases of H1N1 more quickly than other states. Another exchange provides physicians with quality indicator reports based on clinical results from all participating institutions and physicians across the community. Specifically, physicians can create individualized reports based on patients for whom they are listed as the primary care physician for specific quality indicators, such as determining which patients have had a Pap smear in the last 3 years. While none of the HIEs has conducted formal studies or otherwise evaluated the overall effect of the electronic sharing of health information on the quality of care, three of them discussed plans to study the effect of the electronic sharing of health information on specific aspects of health care quality. One exchange reported that it has started working with a local public health department to develop metrics based on prevalent health conditions in the community, such as the percentage of each provider’s patients that have appropriate immunizations and the percentage of eligible patients that have had mammograms or other tests to screen for cancer. 
The exchange plans to aggregate some of its data to track these metrics and to study whether and how monitoring these metrics impacts the quality of care. An official at this HIE said that they intend to begin this initiative in 2010, but doing so is dependent on available funding. Another exchange reported that it initiated a quality improvement program in March 2009 that is intended to help physicians adhere to evidence-based medical practices to improve the health of their patients and to promote patient satisfaction. According to officials, this program merges claims data from health plans and clinical data from hospitals, laboratories, and physicians’ offices. These data are used for metrics that target preventative care services and chronic disease management, including cancer screening, diabetic testing, and medications for those with asthma. Officials from this exchange stated that they have plans to study the effect of this program on the quality of care, but at the time of our review, they were not able to provide us with a time frame. A third HIE said that it has developed a plan to conduct an overall evaluation that will include analyzing how the electronic exchange of health information affects the quality of care, such as determining whether providers’ use of the exchange has reduced the time it takes a provider to diagnose a patient because of easier access to information. Surveys and focus groups of providers who use the data will be used to evaluate the effect of the exchange on the quality of care. Officials stated that they anticipate beginning this evaluation in 2010. Additionally, the participating providers in our study, all of whom were identified as active users of an exchange, reported that, because they are part of an HIE, they have had more comprehensive and timely patient health information available at the point of care, which they believe has had a positive effect on the quality of care. 
Providers said that they can access information about their patients through their exchange that is not otherwise available in their own records, including information about medications and test results obtained from other providers, which gives them more comprehensive information about the status of their patients’ health. Additionally, providers save time by obtaining patients’ information from the exchange rather than by contacting other providers by fax or mail, or by repeating tests that other providers have already conducted, allowing for more timely information at the point of care. Some providers also told us that they use the HIE to obtain patient laboratory results more quickly than by the traditional methods of mail or fax, which has facilitated earlier intervention for patients. Participating providers gave us these additional examples of how they saw the information obtained through the exchange as having a positive effect on the quality of care for their patients:

A large hospital reported that physicians in the emergency department have used the HIE to obtain medication information about patients, such as information about patients’ medication allergies, to identify and avoid potential adverse drug interactions.

A medium-sized hospital stated that information obtained through the HIE helped their emergency department physician ascertain that a patient who was requesting medication for pain had been in five area hospitals in seven nights seeking pain medication. The physician did not prescribe any pain medication for the patient.

A small physician’s practice and a large hospital reported that information available through the exchange facilitates the transfer of patients. Hospital officials said that by having immediate access to information on patients transferred to them, physicians can begin to develop treatment plans for the patient earlier, resulting in more timely care. Also, because they have access to the patients’ test results, physicians at the receiving facility do not end up repeating tests that have already been performed.

A participating physician from a family practice clinic reported that the HIE provided valuable information about a patient who had left the hospital before being treated. Information about this patient in the exchange revealed that emergency department physicians had been trying to reach the patient because he had been experiencing the initial signs of a heart attack. As a result of having this information, the physician sent this patient back to the hospital to be treated for cardiac arrest.

Officials at a participating public health department stated that they use information obtained through the HIE to maintain their immunization and exam records for children, including exams that screen for vision, hearing, nutrition, or other issues. The officials reported that this information has helped to eliminate duplication of exams by the health department or private physicians, who may also conduct exams for these children.

A large hospital from one of the exchanges reported that their cardiology department was able to obtain an abnormal laboratory result electronically through the exchange one day before they would have without using it, allowing earlier intervention for a potentially life-threatening condition.

Two of the other entities we interviewed, integrated health care delivery systems that share information within their own systems, reported that they have joined or have considered joining an HIE to obtain more comprehensive information about their patients, who may obtain health care services from providers outside of their systems. 
Officials with one of the integrated health care systems said that they joined an exchange recently because they felt it would provide physicians and other health care providers with a more complete picture of a patient’s health information regardless of where the patient obtains care, which could help to eliminate unnecessary or duplicative care, including tests that may have already been performed by other providers. Officials from the second integrated health care delivery system told us that they were considering joining an exchange because it could provide them with information about care—such as medications prescribed—obtained from providers outside of their system. In addition, these officials noted that joining the exchange could benefit emergency department physicians by helping them obtain immediate, more comprehensive information about patients. If you have any questions on matters discussed in this report, please contact me at (202) 512-6304, Gregory C. Wilshusen at (202) 512-6244, or Linda T. Kohn at (202) 512-7114, or by e-mail at melvinv@gao.gov, wilshuseng@gao.gov, or kohnl@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and key contributors to this report are listed in appendix III. Our objectives were to describe (1) the practices implemented for disclosing personal health information for purposes of treatment, including the use of electronic means for obtaining consent, as reported by selected health information exchanges, their participating providers, and other entities; and (2) the effects of the electronic sharing of health information on the quality of care for patients as reported by these organizations. To address both objectives, we conducted case studies of selected health information exchanges and their participating providers. 
We selected four operational health information exchanges and a sample of participating providers for each of the four exchanges. To select the case study health information exchanges, we compiled a list of 68 health information exchanges that were reported to be operational and actively sharing data among providers. From this list, we selected a judgmental, non-generalizable sample of four exchanges. Each exchange we selected met two of the following three characteristics: had an interstate data exchange and the need to address different state laws and regulations applicable to the disclosure of protected health information; included varying numbers, sizes, and types of provider organizations that disclose health information through the exchange; and operated with some degree of state involvement such as a state-led or state-level health information exchange. To identify exchanges with the selected characteristics, we reviewed our prior reports, and reports on outcomes of relevant Department of Health and Human Services projects such as the Nationwide Health Information Network, the State-level Health Information Exchange Consensus Project, and the Health Information Security and Privacy Collaboration. We also reviewed published research identifying active health information exchanges and relevant policy issues, and research published by health information technology professional associations and other health information privacy experts having data and knowledge about active health information exchange organizations. Finally, we considered the geographic location of the exchanges when making our final selection. We worked with each health information exchange to select a judgmental sample of participating providers. 
The categories of providers we used to ensure that we would have a variety in our sample included: small and medium hospitals with 199 beds or fewer; large hospitals with 200 beds or more; small physician practices with fewer than 10 full-time equivalent employees; large physician practices that have 10 or more full-time equivalent employees; and other types of organizations, including long-term care facilities, public health facilities, pharmacies, laboratories, and insurance plans. We were unable to include all categories of participating providers for each exchange in our sample because some exchanges did not include providers from each category. We studied various types of providers that were active users of health information exchanges and that shared information directly with other providers (that are not members of the exchange). Because each health information exchange defined parameters for and tracked usage of the exchanges differently, we relied on officials from each exchange to identify providers from each category that were active users of the HIE’s services. For each of the four case studies, we gathered documentation and conducted interviews with the exchanges to determine the practices they implemented for disclosing personal health information, including electronic means of obtaining consent, practices they required for participating providers, and reported effects of sharing health information electronically on the quality of care; and gathered documentation and conducted interviews with officials from selected participating providers to determine the practices they implemented as part of the health information exchange and the practices they had implemented in their own organization for disclosing personal health information. In addition, we interviewed officials from these participating providers to determine how and to what extent the electronic sharing of health information affected the quality of care. 
At the conclusion of our study, we validated the information that we included in this report with the exchanges and providers to confirm that their disclosure practices and examples of the effects of electronic sharing of personal health information were accurately portrayed. While we did not independently test the reported practices and examples of effects on quality of care, we corroborated the testimonial evidence obtained during our case studies with supporting documentation. For additional information about the health information exchanges and participating providers we studied, see appendix II. To supplement the information we obtained from our case studies, we gathered information from and conducted interviews with other entities, including two integrated health care delivery systems. We also held discussions with two professional associations (eHealth Initiative and Healthcare Information Management and Systems Society) and 11 of their affiliated health information exchanges, and the New York eHealth Collaborative, an organization focused on developing and enforcing New York State’s health information exchange policy. We interviewed and obtained additional information from other health care organizations, including the American Hospital Association, the Agency for Healthcare Research and Quality, the American Medical Association, and the Center for Studying Health System Change. Additionally, we reviewed federal requirements for protecting electronic personal health information, accepted privacy guidelines produced by the Organization for Economic Cooperation and Development and the Markle Foundation’s Connecting for Health Collaborative, and reports and guidance on implementing privacy practices produced by the Department of Health and Human Services’ Office for Civil Rights and Office of the National Coordinator for Health Information Technology. 
We also interviewed privacy experts from the Health Policy Institute at Georgetown University and the World Privacy Forum. We conducted our work from May 2009 to February 2010 in accordance with all sections of GAO's Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions.

Case Study 1 was of a health information exchange (HIE) serving a metropolitan area and a neighboring state. The HIE was organized in 1995 by a private company and has supported the exchange of health information among providers in its metropolitan area and neighboring state since 2006. For this case study, we identified disclosure practices reported by the HIE, two of its participating hospitals, and three provider practices. Providers participating in the exchange include hospitals, provider practices, and other organizations (e.g., clinical laboratories, long-term care facilities, and hospices).

The tools and services offered by this HIE include:
- delivery of results to providers via an online inbox (e.g., laboratory test results);
- communications (e.g., messaging for sending, receiving, and managing information about patients among providers);
- access by emergency department physicians to clinical history from all participating providers for patients being treated in emergency departments;
- an electronic health record system for providers opting to store their patient records with the HIE instead of in internal systems; and
- assistance to providers with technology implementation and training, as well as provider-specific analysis of patient data for quality review purposes.

Table 1 describes the methods of implementing disclosure practices reported by the HIE and the five participating providers that we studied. 
Case Study 2 was of an HIE that serves multiple states. The HIE is led by a nonprofit organization and supports regional-level information exchange. Organized in 2005, this HIE began actively exchanging data among its participating providers in 2008. For this case study, we identified disclosure practices reported by the HIE, a hospital, two provider practices, and a public health department. Providers participating in the exchange include hospitals, provider practices, and other organizations (e.g., a health plan and a public health department).

The tools and services offered by the HIE include:
- communications (e.g., receiving and managing information about patients among providers);
- an interface to support searching for patient data by providers and presenting data in a standard summary medical record format; and
- assistance to providers with technology implementation and training, as well as provider-specific analysis of patient data for quality review purposes.

Table 2 describes the methods of implementing disclosure practices reported by the HIE and the four participating providers that we studied.

Case Study 3 was of an HIE serving one state. Operated by a public-private partnership, the exchange was created by state statute in 1997 and began supporting the exchange of health information among providers in May 2007. The participating providers selected for review as part of this case study included two hospitals and two provider practices. 
Providers participating in the exchange include hospitals; provider practices (e.g., private physician practices, health centers, hospital emergency departments, and clinics); and other organizations (e.g., two national clinical laboratories and a pathology provider).

The tools and services offered by this HIE include:
- secure delivery of clinical results in a standardized format (e.g., laboratory test results), reports (e.g., radiology), and face sheets (demographic and billing information);
- enhanced provider-to-provider communication (e.g., forwarding clinical results to HIE users' inboxes);
- an interface to support searching for patient data in electronic health records, for those providers that have them; and
- an interface for public health reporting and biosurveillance activities.

Table 3 describes the methods of implementing disclosure practices reported by the HIE and the four participating providers that we studied.

Case Study 4 was of an HIE serving one state. The HIE was established as a public-private partnership led by a nonprofit organization in 2004. It supports state-level information exchange and began actively exchanging data among its participating providers in 2007. For this case study, we identified disclosure practices reported by the HIE, two hospitals, a physician practice, and two clinics. 
Providers participating in the exchange include hospitals, physician practices, and other organizations (e.g., clinics and state and local health departments).

The tools and services offered by this exchange include:
- delivery of results to providers (e.g., laboratory test results);
- communications (e.g., messaging for sending, receiving, and managing information about patients among providers);
- automatic delivery of patients' clinical history from all participating providers to emergency departments when patients are registered;
- quality metrics based upon analysis of provider data for key indicators (e.g., mammograms provided to patients for whom they are indicated); and
- an interface to support reporting by hospitals of reportable conditions and emergency department chief complaint data to the state health department.

Table 4 describes the methods of implementing disclosure practices reported by the HIE and five of its participating providers that we studied.

In addition to the contacts named above, key contributors to this report were Bonnie W. Anderson (Assistant Director), John A. de Ferrari (Assistant Director), Teresa F. Tucker (Assistant Director), Monica Perez Anatalio, Danielle A. Bernstein, April W. Brantley, Susan S. Czachor, Neil J. Doherty, Rebecca E. Eyler, Amanda C. Gill, Nancy E. Glover, Ashley D. Houston, Fatima A. Jahan, Thomas E. Murphy, and Terry L. Richardson.
To promote the use of information technology for the electronic exchange of personal health information among providers and other health care entities, Congress passed the Health Information Technology for Economic and Clinical Health (HITECH) Act. It provides incentives intended to promote the widespread adoption of technology that supports the electronic sharing of data among hospitals, physicians, and other health care entities. Pursuant to a requirement in the HITECH Act, GAO is reporting on practices implemented by health information exchange organizations, providers, and other health care entities that disclose electronic personal health information. GAO's specific objectives were to describe (1) the practices implemented for disclosing personal health information for purposes of treatment, including the use of electronic means for obtaining consent, as reported by selected health information exchange organizations, their participating providers, and other entities; and (2) the effects of the electronic sharing of health information on the quality of care for patients as reported by these organizations. To address both objectives, GAO conducted case studies of 4 of more than 60 operational health information exchanges and a selection of each of the exchanges' participating providers. The health care entities GAO studied reported that they implement disclosure practices that reflect widely accepted practices for safeguarding personal information (the Fair Information Practices) to help ensure the appropriate use and disclosure of electronic personal health information for treatment purposes. For example, providers in the study described various implementations of practices that require direct interaction with patients, such as informing patients of the use and disclosure of personal health information and providing patients access to their own records. 
Some of them inform patients that their electronic personal health information may be shared through health information exchanges, entities that were formed to facilitate the electronic sharing of patients' health information among providers. Both the providers and exchanges in the study described practices that limit disclosure of information, secure electronic information that they store and transmit, and help ensure accountability for safeguarding electronic personal health information. Although the health information exchanges reported that they have not conducted formal studies or evaluations of the overall effect of electronically sharing personal health information, both the exchanges and providers reported examples of ways that sharing electronic personal health information about patients has had a positive effect on the quality of care that providers deliver to patients. (1) Officials from two exchanges stated that they provide a direct connection from participating hospitals to their state's Department of Public Health for real-time reporting of conditions and for supporting the early detection of disease outbreaks. According to one of these officials, this service facilitated the state's ability to obtain information about cases of H1N1 more quickly than other states. (2) A large hospital that participated in one of the exchanges reported that a cardiologist was able to obtain an abnormal laboratory result electronically from the exchange one day earlier than would otherwise have been possible. This timely access to the patient's electronic health information allowed the provider to perform earlier intervention for a potentially life-threatening condition. (3) Another hospital reported that information obtained through its health information exchange helped its emergency department physician ascertain that a patient who was requesting medication for pain had been in five area hospitals in seven nights seeking pain medication. 
As a result, the physician did not prescribe any additional pain medication.
Medicaid and SCHIP are joint federal-state programs that finance health care coverage for certain categories of low-income individuals. To qualify for Medicaid or SCHIP, individuals must meet specific eligibility requirements related to their income, assets, and other personal characteristics such as age. Each state operates its program under a CMS-approved state plan. Almost immediately after Hurricane Katrina, CMS announced in a State Medicaid Director's letter on September 16, 2005, that states could apply for Medicaid demonstration projects authorized under section 1115 of the SSA, through which the federal government would fund its share of expenditures for health care services for certain individuals affected by the hurricane. These demonstration projects provided for (1) time-limited Medicaid and SCHIP services to allow states to quickly enroll eligible individuals who were affected by the hurricane, and (2) time-limited uncompensated care services—allowing states to pay providers rendering services for individuals affected by the hurricane who did not have an alternative method of payment or insurance. Interested states could apply to CMS to offer demonstration projects for either or both categories, and those receiving CMS approval were permitted to seek reimbursement for the federal share of allowable expenditures for covered beneficiaries under the demonstrations. To assist states in applying for these demonstration projects, CMS convened a conference call with all state Medicaid agencies to brief them on the agency's September 16, 2005, letter, discuss the application process, and provide information on other implementation issues, such as benefits for evacuees and relevant federal regulations regarding Medicaid eligibility. For time-limited Medicaid and SCHIP services under the demonstrations, states received approval to provide Medicaid and SCHIP coverage to certain evacuees and affected individuals. 
In establishing eligibility for this type of demonstration, states primarily used simplified eligibility criteria that CMS developed to determine if affected individuals and evacuees could enroll to receive time-limited Medicaid and SCHIP services (see table 1). States with approved demonstrations for time-limited uncompensated care services could pay providers who delivered services to affected individuals and evacuees who either did not have any other coverage for health care services (such as private or public health insurance), or who had Medicaid or SCHIP coverage but required services beyond those covered under either program. On February 8, 2006, the DRA appropriated $2 billion to be available until expended for four funding categories—two categories associated with the demonstration projects, and two additional categories of funding. The DRA applied time limits to the first two categories, which were linked to the demonstration projects—that is, services must have been provided by certain dates. The DRA did not specify time limits for the two remaining funding categories. (See table 2.) States could receive allocations from CMS based on certain criteria identified in the DRA, including whether they were directly affected by the hurricane or hosted evacuees. States directly affected by the hurricane—Alabama, Louisiana, and Mississippi—and states that hosted evacuees could receive DRA funding through Categories I and II, the nonfederal share of expenditures for time-limited Medicaid and SCHIP services and expenditures for time-limited uncompensated care services. In contrast, as specified by DRA, funds for Category III, the nonfederal share of expenditures for existing Medicaid and SCHIP beneficiaries, were available only to certain areas in the directly affected states. These areas were counties or parishes designated under the Robert T. Stafford Disaster Relief and Emergency Assistance Act as areas eligible to receive federal disaster assistance. 
According to a CMS official, shortly after Hurricane Katrina, 10 counties in Alabama, 31 parishes in Louisiana, and 47 counties in Mississippi were identified as eligible to receive such assistance and were declared individual assistance areas. (See fig. 1.) States receive reimbursement for their expenditures in each of the funding categories through the submission of claims to CMS. To obtain reimbursement of claims for services, providers first submit claims to states for health care services provided to affected individuals and evacuees. States then submit claims to CMS for DRA-covered expenditures made for health care services provided to affected individuals and evacuees under each of the DRA funding categories. In addition, although the DRA was not enacted until February 8, 2006, CMS allowed funding to be retroactive to August 24, 2005. As of September 30, 2006, CMS had allocated approximately $1.9 billion of the total $2 billion in DRA funds to states that were directly affected by Hurricane Katrina or that hosted evacuees in the aftermath of the storm. CMS allocated funds to the first three categories: Category I—the nonfederal share of expenditures for time-limited Medicaid and SCHIP services; Category II—expenditures for time-limited uncompensated care services; and Category III—the nonfederal share of expenditures for existing Medicaid and SCHIP beneficiaries from designated areas of the directly affected states. CMS chose not to allocate any DRA funding to Category IV, for restoring access to health care in impacted communities. CMS allocated the majority of DRA funding (78.3 percent of the $1.9 billion allocated) to Category III, the nonfederal share of expenditures for existing Medicaid and SCHIP beneficiaries, which, by law, was limited to the three directly affected states (Alabama, Louisiana, and Mississippi). 
CMS allocated funds to states on two occasions—an initial allocation of $1.5 billion on March 29, 2006, and a subsequent allocation on September 30, 2006. Both of these allocations were based on states' estimates of their DRA expenditures. In the second allocation on September 30, 2006, no state received less funding than it received in the March 29, 2006, allocation, but allocations shifted among the DRA categories. As of September 30, 2006, CMS had allocated approximately $1.9 billion of DRA funds in three DRA funding categories to 32 states. The majority of the $1.9 billion allocation—about $1.5 billion (78.3 percent)—was for Category III, existing Medicaid and SCHIP beneficiaries, which is limited to the three directly affected states (Alabama, Louisiana, and Mississippi). For Category I, time-limited Medicaid and SCHIP services, and Category II, time-limited uncompensated care services, states received about $102 million (5.5 percent of the total allocation) and about $302 million (16.2 percent of the total allocation), respectively. (See fig. 2.) With regard to Category I, 32 states received approval to extend time-limited Medicaid and SCHIP coverage to individuals affected by Hurricane Katrina; however, no states actually enrolled individuals in SCHIP. Therefore, only Medicaid services were covered through this DRA funding category. Of these 32 states, 8 states also received approval for Category II to pay providers for rendering time-limited uncompensated care services to individuals affected by the hurricane. CMS officials stated that the agency approved the majority of states' applications for demonstration projects within 45 days of the hurricane. Of the 32 states that received allocations totaling $1.9 billion, Louisiana received the largest amount—44.6 percent (about $832 million) of the total allocation. 
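As a rough cross-check of the category breakdown above, the shares can be recomputed from the report's rounded dollar figures. The following is an illustrative sketch only; the amounts (in millions) are rounded as stated in the report, so the computed shares differ slightly from the exact percentages, which were derived from unrounded expenditures.

```python
# Cross-check of the DRA allocation shares by category, using the report's
# rounded figures (in millions of dollars). Category labels are shorthand.
allocations = {
    "Category I (time-limited Medicaid/SCHIP services)": 102,
    "Category II (time-limited uncompensated care)": 302,
    "Category III (existing Medicaid/SCHIP beneficiaries)": 1500,
}

total = sum(allocations.values())  # roughly $1.9 billion
for category, amount in allocations.items():
    share = amount / total
    print(f"{category}: ${amount}M ({share:.1%})")
```

With these rounded inputs the computed shares come out to roughly 5.4, 15.9, and 78.8 percent, close to the report's 5.5, 16.2, and 78.3 percent.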
Combined, the 3 directly affected states—Louisiana, Alabama, and Mississippi—received approximately 90 percent ($1.7 billion) of the $1.9 billion allocated to states. While not a directly affected state, Texas hosted a large number of evacuees and received about 7.6 percent ($142 million) of the allocation. These 4 selected states together received approximately 97.5 percent ($1.8 billion) of the $1.9 billion allocation. (See table 3.) CMS provided DRA allocations on two occasions, and both allocations were based on states' estimated DRA expenditures. CMS first allocated $1.5 billion to 32 states on March 29, 2006. After the DRA was enacted in February 2006, CMS requested states' estimated fiscal year 2006 expenditures for three of the four DRA funding categories: Category I—the nonfederal share of expenditures for time-limited Medicaid services; Category II—expenditures for time-limited uncompensated care services; and Category III—for directly affected states, the nonfederal share of expenditures for existing Medicaid and SCHIP beneficiaries. CMS did not request that the three directly affected states estimate expenditures for Category IV—restoring access to health care in impacted communities. CMS officials told us that they viewed restoring access to care as discretionary in nature and not associated with direct service expenditures. In the March 29, 2006, allocation, CMS fully funded 32 states' estimated expenditures under DRA funding Categories I and II, and also provided the three directly affected states with allocations covering approximately half of their estimated expenditures for Category III. Because allocations were based on states' estimates, CMS withheld $500 million of the $2 billion available for the initial allocation, anticipating that allocations would need to be realigned. 
In July 2006, CMS requested updated estimates of DRA expenditures for fiscal year 2006 for the same three categories: the two time-limited categories for Medicaid and uncompensated care services (Categories I and II) and the existing Medicaid and SCHIP beneficiaries (Category III). On September 30, 2006, CMS allocated an additional amount of about $364 million to states, which, combined with the initial March 29, 2006, allocation of $1.5 billion, provided a total allocation of approximately $1.9 billion. This allocation was based on states’ updated estimated expenditures for each of the three DRA categories for which CMS provided funding. For the second allocation, each of the three directly affected states received allocations of 100 percent of their updated estimated expenditures for all three funding categories. While CMS did not decrease any state’s allocation as a result of the July 2006 request for updated estimates, it did shift allocation amounts among DRA funding categories when necessary for the September 30, 2006, allocation. Therefore, each state received its allocation amount from March 29, 2006, plus any additional funding included in the updated estimated expenditures. As a result, some states that lowered their subsequent estimates received more than they requested. For example, Texas lowered its initial estimated expenditures from $142 million (its March 29, 2006, estimate) to approximately $36 million. CMS did not change Texas’ allocation from the amount the state received on March 29, 2006; thus, Texas retained an allocation of $142 million. Other states received more than they were initially allocated. For example, Alabama requested about $181 million initially, but gave CMS an updated estimate of $248 million. CMS initially allocated Alabama approximately $97 million, but increased its allocation to $248 million on September 30, 2006. (See table 4.) As of September 30, 2006, $136 million in DRA funding remained available for allocation. 
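The adjustment described above amounted to a simple rule: no state's allocation was reduced, so each state received the larger of its March 29 allocation and its updated estimated expenditures. A minimal sketch of that rule, using the report's rounded figures in millions; this is an illustrative simplification, not CMS's actual allocation methodology:

```python
# Illustrative sketch of the September 30, 2006, reallocation rule described
# above: no state's allocation was reduced, so each state received the larger
# of its March 29 allocation and its updated estimated expenditures.
def september_allocation(march_allocation: float, updated_estimate: float) -> float:
    return max(march_allocation, updated_estimate)

# Texas lowered its estimate from $142M to about $36M but retained its $142M
# March allocation.
texas = september_allocation(142, 36)

# Alabama's March allocation was about $97M; its updated estimate was $248M,
# so its allocation rose to $248M.
alabama = september_allocation(97, 248)

print(texas, alabama)
```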
CMS officials stated that, during the first quarter of fiscal year 2007, they plan to reconcile states' expenditures submitted to CMS with the allocation amounts provided to states on September 30, 2006. After this reconciliation is completed, CMS will determine how to allocate the remaining $136 million of available DRA funds and any unexpended funds of the approximately $1.9 billion previously allocated to states. As of October 2, 2006, states had submitted to CMS claims for services—including associated administrative costs—totaling about $1 billion (or 54 percent) of the $1.9 billion in DRA funds allocated to them. The amount of claims submitted and the number of states that submitted claims varied by DRA category. Of the 32 states that received allocations from CMS, 22 states had submitted claims, including the 3 directly affected states. Some state officials said they faced obstacles processing DRA-related claims. While DRA-related expenditures varied by state, claims were concentrated in nursing facilities, inpatient hospital care, and prescription drugs. Of the 32 states that received DRA allocations, about two-thirds (22) had submitted claims for expenditures to CMS as of October 2, 2006. The submitted claims accounted for about 54 percent of CMS's $1.9 billion allocated to states. States that submitted claims for reimbursement did so for amounts that ranged from about 7 percent to approximately 96 percent of their allocations. (See table 5.) Each of the 4 selected states we reviewed—Alabama, Louisiana, Mississippi, and Texas—had submitted claims by this time. Of the claims submitted for the two time-limited funding categories, 22 of 32 states submitted claims for Medicaid services (Category I) and 6 of 8 states submitted claims for uncompensated care services (Category II). The claims submitted constituted approximately 20 percent of total allocations to Medicaid and about 42 percent of total allocations to uncompensated care services. 
Of the 4 selected states, 3 states—Alabama, Mississippi, and Texas—submitted claims for Medicaid services, while all 4 selected states submitted claims for uncompensated care services. (See table 6.) Only the three directly affected states—Alabama, Louisiana, and Mississippi—were eligible to receive DRA funding for existing Medicaid and SCHIP beneficiaries (Category III). The claims submitted by the directly affected states constituted approximately 58 percent of total allocations to Category III. (See table 7.) In addition, claims from the three directly affected states for existing Medicaid and SCHIP beneficiaries accounted for about 85 percent of all DRA claims filed. While funds for existing Medicaid and SCHIP beneficiaries were available for both programs, about 98 percent of claims submitted were for Medicaid expenditures. It has taken longer than usual for states—both those directly affected by the hurricane and those that hosted evacuees—to submit claims. Typically, Medicaid expenditure reports are due the month after the quarter ends. CMS officials estimated that about 75 percent of states submit their Medicaid expenditures within 1 to 2 months after the close of a quarter. However, data are not finalized until CMS and states ensure the accuracy of claims. The process of states submitting claims for DRA-related expenditures has been more prolonged. As with other Medicaid claims, states are permitted up to 2 years after paying claims to seek reimbursement from CMS. Therefore, these initial results are likely to change as states continue to file claims for services. As of October 2, 2006, 10 of 32 states that received allocations of DRA funding had not submitted any claims even though fiscal year 2006 ended on September 30, 2006. 
Some state officials told us that they were having difficulties submitting claims because of various obstacles related to processing claims or receiving claims from providers, including needing to manually process claims or adapt computer systems to accommodate the new types of claims being submitted. For example, Mississippi officials explained that they were manually processing claims for time-limited uncompensated care services because they did not have an electronic system for processing such claims. Georgia officials reported that the state's claims processing system had to be adjusted in order to properly accept claims for time-limited uncompensated care services. After such adjustments were made, Georgia officials anticipated accepting these claims from mid-July through the end of August 2006. Alabama officials noted that they had to specifically request that providers submit claims for the costs of providing uncompensated care services they may have assumed would not be reimbursable. Claims that the four selected states submitted for Medicaid expenditures in the three categories of DRA funding we reviewed varied, but were typically concentrated in three service areas: nursing facilities, inpatient hospital care, and prescription drugs. For example, all four selected states had nursing facility services as one of their top four services for which they submitted claims, while only Alabama had home and community-based services as one of its services with the highest expenditures. Of the claims submitted by states, the proportions attributed to specific services varied across the states. (See table 8.) Alabama, Louisiana, and Mississippi submitted claims for the nonfederal share of expenditures for SCHIP services to existing SCHIP beneficiaries. Overall, the dollar amount of claims for SCHIP represented approximately 2 percent of the total value of claims submitted. 
As of October 2, 2006, the top four SCHIP expenditures in Alabama were for physician services (22.8 percent), prescription drugs (20.7 percent), inpatient hospital services (13.4 percent), and dental services (12.1 percent). The top four SCHIP expenditures in Louisiana were for prescription drugs (45.4 percent), physician services (22.4 percent), outpatient hospital services (12.5 percent), and inpatient hospital services (9.8 percent). For Mississippi, all of the claims for DRA funds were for expenditures associated with paying SCHIP premiums for certain enrollees. Two of our four selected states raised concerns about their ability to meet the future health care needs of those affected by the hurricane once DRA funds have been expended: Louisiana, which is eligible for DRA funding for Category III services that may be provided beyond June 30, 2006; and Texas, which is not eligible for such ongoing assistance. Of the three directly affected states—Alabama, Louisiana, and Mississippi—only Louisiana raised concerns that it would need additional funds to provide coverage for individuals affected by the hurricane who evacuated the state yet remain enrolled in Louisiana Medicaid. Alabama and Mississippi officials did not anticipate the need for additional funding beyond what was already allocated by CMS. In contrast, because Texas is eligible only for the time-limited DRA funds from Category I and Category II, state officials expressed concern about future funding needs in light of the many evacuees remaining in the state. To learn more about this population, the state commissioned a survey that indicated that evacuees responding to the survey continue to have a high need for services, including health care coverage under Medicaid and SCHIP. 
Only the three directly affected states—Alabama, Louisiana, and Mississippi—are eligible for DRA funds for Category III services, which were designated to compensate states for the state share of expenditures associated with services provided to existing Medicaid and SCHIP beneficiaries from certain areas of directly affected states beyond June 30, 2006. This additional DRA funding could potentially be available from any unused funds of the $1.9 billion allocated on September 30, 2006, and the $136 million remaining from the $2 billion appropriated. It is unclear how much of the $1.9 billion allocation will be unused and thus available for redistribution. Additionally, it is not yet known how the remaining $136 million will be distributed, but CMS will make that determination after reconciling states’ claims submitted during the first quarter of fiscal year 2007 with the allocations. Of the three states eligible for ongoing DRA funding, only Louisiana raised concerns that additional funds will be necessary; Alabama and Mississippi did not anticipate additional funding needs beyond those CMS already allocated. Louisiana’s funding concerns were associated with managing its program across state borders as evacuees who left the state continue to remain eligible for Louisiana Medicaid. State officials acknowledged that their immediate funding needs have been addressed by the September 30, 2006, allocation; however, they remain concerned that they do not have the financial or administrative capacity to serve their Medicaid beneficiaries across multiple states. Louisiana officials also cited the difficulty of maintaining what they characterized as a national Medicaid program for enrolled individuals and providers living in many different states. 
Louisiana has submitted claims for DRA funding for Category III for existing Medicaid and SCHIP beneficiaries (individuals enrolled in Louisiana Medicaid) who resided in 1 of the 31 affected parishes in Louisiana prior to Hurricane Katrina, but evacuated to another state after the hurricane, and who continue to reside in that state. Because many of these evacuated individuals have expressed intent to return to Louisiana, they have not declared residency in the state where they have been living since Hurricane Katrina. Under these circumstances, these individuals have remained eligible for Louisiana Medicaid. However, Louisiana officials were uncertain how long the state would be expected to continue this coverage on a long-distance basis. While DRA funds cover the nonfederal (Louisiana state) share of service expenditures for these Medicaid and SCHIP beneficiaries (Category III), they are not designated to include reimbursement for the administrative costs associated with serving Louisiana Medicaid beneficiaries living in other states. In particular, Louisiana officials noted the following difficulties, which were also outlined in a May 15, 2006, letter to HHS and a May 26, 2006, letter to CMS. These letters requested specific direction from CMS on the issues presented, as well as permission to waive certain federal Medicaid requirements with which Louisiana believes it has been unable to comply. In commenting on a draft of our report, Louisiana officials stated that as of November 30, 2006, they had not received the written guidance that they requested from CMS on the following issues: Managing and monitoring a nationwide network of providers. Covering individuals who have evacuated from the state but remain eligible for Louisiana Medicaid requires the state to identify, enroll, and reimburse providers from other states. 
According to Louisiana officials, the state has enrolled more than 16,000 out-of-state providers in Louisiana Medicaid since August 28, 2005. The state does not believe that it can manage and monitor a nationwide network of providers indefinitely. Therefore, Louisiana is seeking guidance from CMS to ensure that the state is continuing to comply with federal Medicaid requirements for payments for services furnished to out-of-state Medicaid beneficiaries. Redetermining eligibility. Federal Medicaid regulations require that states redetermine eligibility at least annually as well as when they receive information about changes in individuals' circumstances. Louisiana officials indicated that the state had received approval through its demonstration project to defer redetermination processes through January 31, 2006. Officials noted that more than 100,000 individuals from affected areas had not yet had their eligibility redetermined as of May 26, 2006. Officials said they do not want to remove beneficiaries who need coverage from the state's Medicaid rolls for procedural reasons, and thus would prefer to conduct mail-in renewals and have a process for expedited reenrollment upon return to the state. According to Louisiana officials, the state's redetermination processes are currently on hold while CMS examines the possibility of granting a waiver for redetermining eligibility for individuals from the most severely affected parishes around New Orleans. Maintaining program integrity. Louisiana officials explained that running a Medicaid program in multiple states raises issues of program integrity. While some providers have contacted Louisiana Medicaid to report that they have received payment from more than one state, Louisiana officials believe that other providers are not reporting overpayments. State officials indicated that they will conduct postpayment claims reviews to ensure that double billing and other fraudulent activities have not occurred. 
These officials estimated that the effort to review claims could be time-consuming, taking approximately 3 to 8 years to complete. Because Louisiana believes that it is unable to ensure the integrity of the program as long as it continues enrolling out-of-state providers, the state requested specific direction from CMS on whether to continue such enrollment efforts. Ensuring access to services. Louisiana officials expressed concern about the state's ability to ensure access to home and community-based services in other states. Officials noted that some states have long waiting lists for this type of long-term care, making it difficult to provide services that help keep individuals in the community rather than in an institution. Additionally, states providing home and community-based services are required to take measures to protect the health and welfare of beneficiaries. However, officials stated that Louisiana is not in a position to assure the health and safety of individuals receiving these services out of state. Thus, the state asked CMS for direction on how to continue operating its Medicaid program without violating the federal requirement to assure the health and welfare of beneficiaries receiving home and community-based services. While Texas is not a directly affected state and therefore not eligible for DRA funding for any Medicaid or SCHIP services provided beyond June 30, 2006, it has been significantly affected by the number of evacuees seeking services, prompting concern among state officials regarding the state's future funding needs. To address the health needs of evacuees entering the state, Texas enrolled these individuals into Medicaid under Category I—providing time-limited Medicaid services for evacuees who were eligible under an approved demonstration project. 
In comparison to Alabama and Mississippi, which also enrolled evacuees into time-limited Medicaid services, Texas enrolled the largest number of evacuees, peaking at nearly 39,000 individuals in January 2006 (see table 9). Texas also submitted claims for Category II DRA funds for time-limited uncompensated care services to evacuees shortly after the hurricane. Enrollment in this category grew steadily from 2,224 individuals in October 2005 to 9,080 individuals in January 2006. Figure 3 shows the enrollment patterns for the Texas Medicaid program, as well as for Category I and Category II services, for the period following Hurricane Katrina. To better understand the characteristics, needs, and future plans of the evacuee population, the Texas Health and Human Services Commission contracted with the Gallup Organization to survey Hurricane Katrina evacuees in Texas. Data from survey respondents indicated that, as of June 2006, evacuees remaining in the state were predominantly adult women who lived in low-income households with children and had experienced increasing rates of uninsurance since the hurricane. Despite this loss of insurance coverage, the survey indicated that fewer evacuees received Medicaid than previously expected and that the loss of insurance primarily affected children's health coverage. Evacuees appear to be turning to hospital emergency departments to meet their health care needs, as survey respondents reported an increase in emergency room visits in the past 6 months. Texas officials confirmed that evacuees who were previously eligible for the two DRA categories for time-limited coverage (Medicaid and uncompensated care services) are beginning to present themselves to local county facilities for their health care needs, straining local resources to provide care for all Texas residents. 
Based on this survey, Texas officials said they are concerned that the state will continue to host an evacuee population with high needs and no immediate plans to leave the state. In particular, over half of the survey respondents believe they will continue to reside in Texas in the next 6 months, and half believe they will still be there in 1 year. Texas was not a directly affected state and is therefore not eligible for ongoing assistance through the DRA; funding for Category I covers only services provided through June 30, 2006, and funding for Category II covers only services provided through January 31, 2006. We provided copies of a draft of this report to CMS and the four states we reviewed: Alabama, Louisiana, Mississippi, and Texas. We received written comments from CMS (see app. II) and from Louisiana and Texas (see apps. III and IV, respectively). Alabama provided technical comments, while Mississippi did not comment on the draft report. In commenting on the draft report, CMS provided information on an initiative it took to respond to Hurricane Katrina. The agency indicated that HHS, which oversees CMS, worked closely with Louisiana's Department of Health and Hospitals to assist the state in convening the Louisiana Health Care Redesign Collaborative, which will work to rebuild Louisiana's health care system. We did not revise the text of the report to include information on this effort because it was beyond the scope of this report. However, we have previously reported on HHS efforts to help rebuild Louisiana's health care system. CMS also commented on three issues: our characterization of the categories of funding provided through the DRA, our description of CMS's reconciliation process, and criticism it faced in communicating with the states, particularly Louisiana and Texas, regarding program implementation, coverage for out-of-state evacuees, and other issues. These comments are addressed below. 
CMS commented that we mischaracterized the categories of DRA funding by specifying them in the report as Categories I, II, III, and IV. We developed these four descriptive categories, which were derived from provisions of the DRA, in order to simplify the report's presentation. However, to respond to CMS's comment, we included additional legal citations in the report to better link the statutory language of the DRA with the categories of funding presented in this report. We did not, however, adopt all of CMS's descriptions of DRA provisions because some of those descriptions were inaccurate. In particular, CMS presented DRA sections 6201(a)(3) and 6201(a)(4) as providing federal funding under an approved section 1115 demonstration project, but as stated in the report, such approval is irrelevant to this funding. CMS also commented that the report was misleading because it did not fully describe the reconciliation process that will be used to allocate remaining and unused DRA funds. Specifically, the agency indicated that we did not explain that additional DRA allocations would be made to states not only from the remaining $136 million in unallocated funds but also from any unspent funds already allocated to states. The draft report did contain a full explanation of the reconciliation process; however, to address CMS's comment, we clarified this process in the report's Highlights and Results in Brief. Finally, CMS disagreed with statements in the draft report that Louisiana had not received the requested direction detailed in letters written to HHS on May 15, 2006, and to CMS on May 26, 2006. Louisiana's letters included concerns and questions that arose after the state implemented its section 1115 demonstration project. CMS indicated that it provided and continues to provide technical assistance to all states with section 1115 demonstration projects for Hurricane Katrina assistance, beyond the states reviewed in this report. 
In particular, immediately following the hurricane, CMS provided guidance to states through a conference call and a September 16, 2005, letter sent to all state Medicaid directors that explained the process of applying for the section 1115 demonstration project, the benefits and eligibility criteria for evacuees, the uncompensated care pool, and other pertinent information. We revised the report to reflect the guidance that CMS provided to the states immediately following the hurricane. CMS also commented that it worked with Louisiana and the other hurricane-affected states on redetermining eligibility through a conference call, and that it provided information to Louisiana several times regarding the regulations that the state should follow for redetermining eligibility on an annual basis. Further, CMS indicated that it provided technical assistance to Louisiana in its efforts to ensure program integrity and access to health care services. While CMS may have provided such assistance, from Louisiana's perspective it was not sufficient to address the many issues the state is facing. In Louisiana's written comments, state officials maintained that as of November 30, 2006, they had not received written guidance from CMS regarding the issues outlined in their May 15, 2006, letter. Comments from Louisiana and Texas centered on each state's efforts to assist those affected by the hurricane and the ongoing challenges that exist as a result of Hurricane Katrina. In particular, Louisiana emphasized the lack of response from HHS regarding its concerns about running its Medicaid program in many states and the related difficulty of ensuring the program's integrity. Texas commented on its continued need to provide health care services to Hurricane Katrina evacuees given the results of a survey conducted by the Gallup Organization, which indicated that most of the evacuees still residing in Texas were uninsured as of June 2006. 
Additional technical and editorial comments from CMS and the states were incorporated into the report as appropriate. We are sending a copy of this report to the Secretary of Health and Human Services and the Administrator of CMS. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7118 or allenk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Under the authority of the Deficit Reduction Act of 2005, the Centers for Medicare & Medicaid Services (CMS) allocated funding totaling approximately $1.9 billion to 32 states, as of September 30, 2006. The agency allocated funds to all 32 states for the time-limited Medicaid category of demonstration projects, to 8 of those 32 states for the time-limited uncompensated care category of demonstration projects, and to the 3 directly affected states—Alabama, Louisiana, and Mississippi—for the nonfederal share of expenditures for existing Medicaid and SCHIP beneficiaries. The 4 states selected for this study—Alabama, Louisiana, Mississippi, and Texas—received approximately 97.5 percent of the $1.9 billion allocation. All allocations were based on estimates states submitted for each of the funding categories in response to CMS's July 2006 request for updated estimates. (See table 10.) In addition to the contact named above, Carolyn Yocom, Assistant Director; Jennie Apter; Laura M. Mervilde; JoAnn Martinez-Shriver; Sari B. Shuman; and Hemi Tewarson made key contributions to this report. Hurricane Katrina: Status of Hospital Inpatient and Emergency Departments in the Greater New Orleans Area. GAO-06-1003. Washington, D.C.: September 29, 2006. 
Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006. Hurricane Katrina: Status of the Health Care System in New Orleans and Difficult Decisions Related to Efforts to Rebuild It Approximately 6 Months After Hurricane Katrina. GAO-06-576R. Washington, D.C.: March 28, 2006. Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006. Statement by Comptroller General David M. Walker on GAO’s Preliminary Observations Regarding Preparedness and Response to Hurricanes Katrina and Rita. GAO-06-365R. Washington, D.C.: February 1, 2006.
In February 2006, the Deficit Reduction Act of 2005 (DRA) appropriated $2 billion for certain health care costs related to Hurricane Katrina through Medicaid and the State Children's Health Insurance Program (SCHIP). The Centers for Medicare & Medicaid Services (CMS) was charged with allocating the $2 billion in funding to states directly affected by the hurricane or that hosted evacuees. GAO performed this work under the Comptroller General's statutory authority to conduct evaluations on his own initiative. In this report, GAO examined: (1) how CMS allocated the DRA funds to states, (2) the extent to which states have used DRA funds, and (3) whether selected states--Alabama, Louisiana, Mississippi, and Texas--anticipate the need for additional funds after DRA funds are expended. To conduct this review, GAO reviewed CMS's allocations of DRA funds to all eligible states, focusing in particular on the four selected states that had the highest initial allocation (released by CMS on March 29, 2006). GAO obtained data from Medicaid offices in the four selected states regarding their experiences enrolling individuals, providing services, and submitting claims; collected state Medicaid enrollment data; and analyzed DRA expenditure data that states submitted to CMS. As of September 30, 2006, CMS allocated $1.9 billion of the $2 billion in DRA funding to states. CMS allocated funds to: Category I--the nonfederal share of expenditures for time-limited Medicaid and SCHIP services for eligible individuals affected by the hurricane (32 states); Category II--expenditures for time-limited uncompensated care services for individuals without a method of payment or insurance (8 of the 32 states); and Category III--the nonfederal share of expenditures for existing Medicaid and SCHIP beneficiaries (Alabama, Louisiana, and Mississippi). CMS did not allocate funds to Category IV--for restoration of access to health care. 
After CMS reconciles states' expenditures with allocations, it will determine how to allocate the unallocated $136 million and unexpended funds from the $1.9 billion allocated to states. Of the $1.9 billion in allocated DRA funds, almost two-thirds of the 32 states that received these funds submitted claims totaling about $1 billion as of October 2, 2006. Claims from Alabama, Louisiana, and Mississippi for Category III accounted for about 85 percent of all claims filed. These initial results are likely to change as states continue to file claims for services. Of the four selected states, Louisiana and Texas raised concerns about their ability to meet future health care needs once the DRA funds are expended. Louisiana's concerns involved managing its Medicaid program across state borders as those who left the state remain eligible for the program. Texas was significantly affected by the number of evacuees seeking services, thus raising concerns among state officials about the state's future funding needs. CMS, Alabama, Louisiana, and Texas commented on a draft of this report. CMS suggested the report clarify the DRA funding categories, reallocation process, and communication strategy with states, especially Louisiana. Louisiana and Texas commented on their ongoing challenges, and Alabama provided technical comments. The report was revised as appropriate.
In part to improve the information available on, and the management of, DOD's acquisition of services, Congress in 2001 enacted section 2330a of title 10 of the U.S. Code, which required the Secretary of Defense to establish a data collection system to provide management information on each purchase of services by a military department or defense agency. Congress amended section 2330a in 2008 to add a requirement for the Secretary of Defense to submit an annual inventory of the activities performed pursuant to contracts for services on behalf of DOD during the preceding fiscal year. The inventory is to include a number of specific data elements for each identified activity, including the functions and missions performed by the contractor; the contracting organization, the component of DOD administering the contract, and the organization whose requirements are being met through contractor performance of the function; the funding source for the contract by appropriation and operating agency; the fiscal year in which the activity first appeared on an inventory; the number of contractor employees (expressed as FTEs) for direct labor, using direct labor hours and associated cost data collected from contractors; a determination of whether the contract pursuant to which the activity is performed is a personal services contract; and a summary of the information required by subsection 2330a(a) of title 10 of the U.S. Code. Within DOD, USD(AT&L), USD(P&R), and the Office of the Under Secretary of Defense (Comptroller) have shared responsibility for issuing guidance for compiling and reviewing the inventory. USD(P&R) compiles the inventories prepared by the components, and USD(AT&L) is to submit a consolidated DOD inventory to Congress no later than June 30 of each fiscal year. DOD has submitted annual, department-wide inventories for fiscal years 2008 through 2015, the most recent submitted on September 20, 2016 (see table 1). 
Since DOD began reporting on the department-wide inventory of contracted services in fiscal year 2008, the primary source used by most DOD components to compile their inventories, with the exception of the Army, has been FPDS-NG. The Army developed its CMRA system in 2005 to collect information on labor-hour expenditures by function, funding source, and mission supported on contracted efforts, and has used its CMRA as the basis for its inventory. The Army’s CMRA is intended to capture data directly reported by contractors on services performed at the contract line item level, including information on the direct labor dollars, direct labor hours, total invoiced dollars, the functions performed, and the organizational unit for which the services are being performed. In instances where contractors are providing different services under the same contract action, or are providing services at multiple locations, contractors can enter additional records in CMRA to capture information associated with each type of service or location. It also allows for the identification of services provided under contracts for goods. Subsection 2330a(e) of title 10 of the U.S. Code requires the secretaries of the military departments or heads of the defense agencies to complete a review of the contracts and activities in the inventory for which they are responsible within 90 days of the inventory being submitted to Congress. USD(P&R), as supported by the Comptroller, is responsible for, among other things, developing guidance for the conduct and completion of this review. 
As part of this review, the military departments and defense agencies are to ensure that any personal services contracts in the inventory were properly entered into and performed appropriately; the activities on the list do not include any inherently governmental functions; and to the maximum extent practicable, the activities in the inventory do not include any functions closely associated with inherently governmental functions. This review also requires the secretaries of the military departments and heads of defense agencies to identify activities that should be considered for conversion to government performance, or insourced, pursuant to section 2463 of title 10 of the U.S. Code, or to a more advantageous acquisition approach. Section 2463 specifically requires the Secretary of Defense to make use of the inventory to identify critical functions, acquisition workforce functions, and closely associated with inherently governmental functions performed by contractors—and to give special consideration to converting those functions to DOD civilian performance. In addition, subsection 2330a(f) of title 10 of the U.S. Code requires the secretaries of the military departments or heads of the defense agencies responsible for contracted services in the inventory to develop a plan, including an enforcement mechanism and approval process, for using the inventory to inform management decisions (see figure 1). Collectively, these statutory requirements mandate the use of the inventory and the associated review process to enhance the ability of DOD to identify and track services provided by contractors, achieve accountability for the contractor sector of DOD’s total workforce, help identify contracted services for potential conversion from contractor performance to DOD civilian performance, support DOD’s determination of the appropriate workforce mix, and project and justify the number of contractor FTEs included in DOD’s annual budget justification materials. 
Over the past five years, we have issued several reports on DOD’s efforts to compile and review its inventory of contracted services and made recommendations on a variety of issues related to the inventories. For example, in January 2011, we found that the military departments had differing approaches to reviewing the activities performed by contractors, and the department stated it had a goal of collecting manpower data from contractors for future inventories. We recommended that the department develop a plan of action to facilitate the department’s intent of collecting manpower data and address other limitations to its current approach to meeting inventory requirements. The department concurred with our recommendation but had not addressed it as of August 2016. In November 2015, we found that the lack of documentation on whether a proposed contract includes closely associated with inherently governmental functions may result in inventory review processes incorrectly reporting these functions, and recommended that DOD require acquisition officials to document, prior to contract award, whether the proposed contract action includes activities that are closely associated with inherently governmental functions. DOD concurred with our recommendation, but has not yet implemented it. A full list of our prior reports on DOD’s inventory of contracted services, the recommendations from those reports, and the current status of those recommendations— including eight that remain open—is included in appendix I. Our prior work has also consistently found that the absence of a complete and accurate inventory of contracted services hinders DOD’s ability to improve its management of these services. For example, in a June 2016 report on DOD headquarters personnel reduction efforts, we found that DOD does not have reliable data for assessing headquarters functions and associated costs, including those performed by contractor personnel. 
We concluded that without reliable information, DOD may not be able to accurately assess specific functional areas or identify potential streamlining and cost savings opportunities. In a December 2015 report on civilian and contractor personnel reductions, we found that limitations in the methodology for contractor FTE estimates in the inventory may hinder efforts to implement statutorily mandated reporting on reductions in contractor personnel. Further, in a February 2016 report on DOD efforts to forecast service contract requirements, we found that existing data on DOD's future spending for contracted service requirements were not fully captured by DOD's programming and budget processes, an effort the inventory of contracted services is intended to support. We noted that knowing what DOD is spending today and what it intends to spend in the future is critical to being more strategic. In fiscal year 2014, more DOD components than in previous years conducted an inventory review, as required by subsection 2330a(e) of title 10 of the U.S. Code, and certified its completion, as required by DOD's guidance. Overall, we found that the 40 components' certification letters addressed more of DOD's required elements than in prior years, with over half of the components including all six required elements. In some areas, however, we continued to find limitations with the information provided in the certification letters. For example, the level of detail and input provided on the use of the inventory to inform annual program reviews and budget processes varied. In addition, we continued to find significant differences and potential underreporting in the extent to which components identified instances of contractors providing services that are closely associated with inherently governmental functions in their inventories. 
For example, through its review process, the Army identified $8.1 billion in invoiced dollars for contracts that include closely associated with inherently governmental functions, nearly three times the amount identified by the Navy, Air Force, and other defense agencies collectively for similar types of contracts. USD(AT&L) and USD(P&R)’s December 29, 2014, guidance governing the fiscal year 2014 inventory of contracted services required the military departments and defense agencies to certify—through submission of a certification letter to the USD(P&R)—that their review was conducted in accordance with subsection 2330a(e) of title 10 of the U.S. Code. As of July 2016, 40 DOD components reporting for fiscal year 2014 certified that they had reviewed their inventories. Notably, the Air Force, which represented close to 18 percent of DOD’s contract obligations for services in fiscal year 2014, submitted a review certification letter for the first time since the fiscal year 2011 inventory. The Army submitted an interim certification letter in April 2016 based on a review of the contracted functions performed by 73 percent of its contractor FTEs from its fiscal year 2014 inventory. DOD’s guidance for fiscal year 2014, among other things, requires components to include six elements in their certification letters. DOD components’ certification letters have generally improved each year since 2011 in terms of the number of elements addressed. See figure 2 for the list of required elements and the percentage of components that addressed each element in their certification letters for fiscal years 2011 to 2014. Overall, in fiscal year 2014 components addressed more of DOD’s required elements in comparison to prior years, as 21 of the 40 components—or over half—addressed all required elements in their certification letters (see figure 3). 
While these findings demonstrate improved compliance with the review requirements, the results reported in certification letters varied in the level of detail and insight provided on certain elements, particularly the element that requires components to provide input on actions being taken or considered with regard to annual program review and budget processes based on the inventory review results. Of the 23 components that we found addressed this requirement in their fiscal year 2014 certification letters: the Navy and one other component discussed specific actions taken or planned, based on the inventory review results, to inform existing or future program and budget processes; nine components, including the Air Force, discussed their existing or planned program review or budget processes but did not explicitly state how the review results would be used to inform those processes; ten components, including the Army, described the inventory as one source of information available to inform programming and budget matters but did not indicate whether specific actions were taken or considered based on the review results; and two components reiterated language from DOD's review guidance in their certification letters to affirm that they had addressed the required element, without adding any component-specific information. Of the 17 components that we found did not address the requirement in their fiscal year 2014 certification letters, 12 did not include any narrative related to the required element, so it is not clear whether these components had considered the use of the inventory in program reviews and budget processes. 
Three components' certification letters stated explicitly that no actions were taken or considered based on the fiscal year 2014 review results, nor did they provide additional narrative to indicate whether inventory review information is used generally to inform programming and budget matters. Two components each submitted a consolidated inventory and certification letter consisting of the collective review results and responses for the components under their purview, in which not all of the individual responses addressed the requirement. As in our November 2015 report, we found that components may continue to underreport instances of contractors providing services that are closely associated with inherently governmental functions in their inventory reviews. In this regard, our analysis indicates that DOD obligated about $28 billion for contracts in the 17 product service codes that OFPP and GAO identified as more likely to include closely associated with inherently governmental functions. In comparison, of the 40 components reporting for fiscal year 2014, 25 components identified a total of $10.8 billion in obligations or dollars invoiced for contracts that included work identified as closely associated with inherently governmental functions—either within the 17 product service codes or for any other category of service. We also found significant disparity among the components' reporting of these functions (see figure 4). Specifically, through its review process, the Army identified $8.1 billion in invoiced dollars for contracts that include closely associated with inherently governmental functions. In comparison, our analysis of the Army's inventory data identified $10.2 billion in invoiced dollars for Army contracts in the 17 product service codes. 
In contrast, the Navy, Air Force, and other defense agencies collectively identified only about $2.7 billion in obligations and invoiced dollars for contracts that include closely associated with inherently governmental functions in their inventories, while our analysis of each component’s inventory data identified $17.9 billion in collective obligations for contracts in the 17 product service codes. We previously found shortcomings with DOD’s annual inventory review guidance, such as a lack of specific guidance on how to identify or review contract functions, and concluded that, as a result, components may be missing opportunities to properly identify contractors performing closely associated with inherently governmental functions. In November 2014, we recommended, in part, that DOD revise its guidance to clearly identify the basis for selecting contracts to review and to provide approaches the components may use to conduct inventory reviews to ensure that the nature of how the contract is being performed is adequately considered. In November 2015, we reported that DOD’s December 2014 guidance for the fiscal year 2014 inventory did not address our recommendation to provide such clarification; however, DOD officials noted that a risk-based approach to select which contracts to review may be appropriate. As such, we recommended that DOD ensure that components review, at a minimum, those contracts within the product service codes identified as requiring heightened management attention and as more likely to include closely associated with inherently governmental functions. DOD’s March 2016 guidance for the review of the fiscal year 2015 inventory—the first issued after our recommendation—requires components to review those contracts; however, it is too soon to determine what effect the revised guidance will have on the components’ forthcoming inventory reviews. 
In addition to the lack of specific inventory review guidance, our November 2015 review also identified other factors that may contribute to components incorrectly identifying contracts that may include closely associated with inherently governmental functions during the pre-contract award process. Specifically, we concluded that the lack of a requirement for acquisition officials to document, during the pre-award process, whether a proposed contract includes closely associated with inherently governmental functions hinders a component's ability to both identify and report on contractors performing such functions. The Army's pre-award process, specifically the Request for Services Contract Approval form, requires documentation of a determination whether a contract includes closely associated with inherently governmental functions; however, the Air Force and Navy do not have department-wide requirements to document this determination in their contract files. DOD concurred with both of our November 2015 recommendations to require acquisition officials to document, prior to contract award, whether contract actions include such activities, and to provide clear instructions on how the service requirement review boards will be used to identify whether contracts contain such functions. Officials from the Office of Defense Procurement and Acquisition Policy—the office within USD(AT&L) responsible for contracting and acquisition policy—indicated at that time that a forthcoming DOD Instruction on service acquisitions would include direction to consider planned activities under a contract during the service requirement review boards. DOD Instruction 5000.74, issued in January 2016, includes discussion related to identifying closely associated with inherently governmental functions in the inventory, but not in the context of the service requirement review boards. 
The military departments generally have not developed plans to use the inventory of contracted services to inform workforce mix, strategic workforce planning, and budget decision-making processes, as required by the National Defense Authorization Act for Fiscal Year 2012. DOD has recently made progress in identifying accountable officials to develop plans and establish processes for using the inventories in decision making, a step we recommended in November 2014 to help ensure the inventory is integrated into key management decisions. Despite this effort, DOD faces continued delays to key steps in the implementation of the inventory process, including choosing the path forward for its underlying inventory data collection system, staffing its inventory management support office, and formalizing the roles and responsibilities of that office and its relationship to the military departments and other stakeholders. Collectively, these persistent delays hinder the department’s ability to use the inventory of contracted services as intended. The military departments generally have not developed plans and enforcement mechanisms as required by subsection 2330a(f) of title 10 of the U.S. Code to use the inventory of contracted services to inform workforce mix, strategic workforce planning, and budget decision-making processes. Our November 2014 report on the fiscal year 2012 inventory found that the military departments—with the exception of the Army, which used the inventory to inform decisions about workforce mix and insourcing—lacked plans and processes to incorporate the inventory into decision making. 
While DOD’s December 2014 guidance for the fiscal year 2014 inventory more explicitly required components to use the inventory reviews to inform programming and budget matters, and to inform their strategic workforce planning efforts—which carried through to their fiscal year 2015 guidance—our current work found that the military departments generally continue to lack plans and processes to do so. Appendix II presents the findings of the November 2014 report on these plans and processes, with updates, where appropriate. At the department level, in January 2016, USD(AT&L) issued DOD Instruction 5000.74, Defense Acquisition of Services, which establishes policy, assigns responsibilities, and provides direction for the acquisition of contracted services. In commenting on our November 2015 report, DOD stated that this instruction would provide guidance on identifying closely associated with inherently governmental activities. The instruction notes that DOD components will submit an annual inventory of contracted services, and that the inventory and associated review are to be used to inform acquisition planning and workforce shaping decisions, but does not provide any specific guidance as to how the inventories are to contribute to such decisions, including guidance for identifying closely associated with inherently governmental activities. DOD officials more recently stated that this instruction is intended as policy for acquisition officials, not as a document on workforce planning. We previously found that the responsibility for developing plans and enforcement mechanisms to use the inventory for decision-making processes was not clearly assigned and was divided across multiple offices. In our November 2014 report, we recommended that the secretaries of the military departments identify an accountable official to lead and coordinate efforts across the functional communities to develop plans and establish processes for using the inventory for decision making. 
DOD concurred with this recommendation. No components identified an accountable official with their fiscal year 2014 inventory submission. However, DOD’s March 2016 guidance for the fiscal year 2015 inventory explicitly required the identification of an accountable official to help ensure that the inventory is integrated into key management decisions. As of July 2016, 41 components had submitted their fiscal year 2015 inventories, of which 30 identified an accountable official in their transmittal letter. However, none of the three military departments, which represent 73 percent of service contract obligations reported in the fiscal year 2014 inventory, have yet identified an accountable official. In its transmittal letter for fiscal year 2015, the Air Force stated that it first needs to better understand the roles and responsibilities of the inventory management support office. The Army’s fiscal year 2015 transmittal letter states that it is in the process of identifying an appropriate official. As of July 2016, the Navy has not yet submitted its fiscal year 2015 transmittal letter. DOD has twice conducted reviews in the past two years to assess its approach to conducting the inventory. DOD officials noted that, to some degree, these reviews have contributed to delays in choosing the path forward for its underlying inventory data collection system, staffing the support office, and formalizing the roles and responsibilities of that office and its relationship to the military departments and other stakeholders. These delays may, in turn, hinder the development and implementation of plans and enforcement mechanisms for using inventory data to inform workforce and budget decision-making processes. As shown in figure 5, DOD has struggled since 2011 to determine the best way forward for collecting data for the inventories. 
In September 2014, DOD undertook an internal review of strategic options to identify, develop, and consider all reasonable options, in both the short and long terms, and propose courses of action for appropriate enterprise solutions to facilitate data collection for the inventory. However, DOD’s strategic review of options in 2014 did not lead to a definitive way forward. In November 2014, we found that DOD’s strategic review of options raised questions as to whether DOD will continue to implement ECMRA—a DOD-wide inventory data collection system modeled after the Army’s CMRA system—or attempt to develop a new system. We concluded that, until such time as DOD components are able to collect the required data for their inventories, the utility of the inventory for making workforce decisions will be hindered. We recommended that, should a decision be made to use or develop a system other than the ECMRA system currently being fielded, USD(P&R) should document the rationale for doing so and ensure that the new approach provides data that satisfies the statutory requirements for the inventory. In 2015, the Joint Explanatory Statement to the National Defense Authorization Act for Fiscal Year 2016 mandated that DOD report on the approach the department is taking to comply with the inventory requirement and whether it is producing a product that enhances oversight of service contracting activities. DOD contracted with the RAND National Defense Research Institute in December 2015 to assess the methods used by DOD to produce the inventory of contracted services and to recommend improvements, including alternative methods of collecting, processing, and reporting data on contracted services. RAND provided preliminary briefings to DOD in March and May of 2016, and its final report is expected to be delivered later this year. 
While awaiting the results of its internal review and, subsequently, the RAND review, DOD delayed fully staffing its support office and defining its specific roles, authorities, and relationships to the military departments and other stakeholders, as shown in figure 6. In 2014, a USD(P&R) official told us that DOD would defer the use of additional resources for the support office until a decision had been made whether to pursue a new approach or continue forward with implementation of ECMRA. Similarly, in 2016, USD(P&R) officials told us that they wanted to be more confident of the planned direction for the inventories before committing to additional hiring. Further, more than two years since the support office was funded, DOD has yet to define the roles and responsibilities of the office. In November 2015, we recommended that USD(P&R) clearly identify these longer-term relationships between the support office, military departments, and other stakeholders with respect to collection and use of inventory data. DOD concurred and told us that the release of a memorandum of agreement between the Assistant Secretary of Defense for Manpower and Reserve Affairs and the Director of the Defense Human Resource Activity on short-term roles and responsibilities for the support office would do so, but as of August 2016, the memorandum of agreement had yet to be formalized. Additionally, DOD officials indicated that the memorandum of agreement will not address the roles to be played by the support office, the military departments, and other stakeholders in exploring the longer-term solution to collecting contractor manpower data and integrating inventory data within the military departments' decision-making processes. Supplemental agreements will be necessary to formalize these relationships. 
The absence of clearly identified relationships between the support office and other stakeholders has hindered efforts to implement ECMRA and integrate the data into decision-making processes that will meet user needs and expectations. In addition to these uncertainties about finalizing an approach to the inventory, our review found that DOD components' reliance on data captured in their CMRA systems for their inventories has varied. DOD's March 2014 guidance for the fiscal year 2013 inventory, as well as guidance for subsequent inventories, required components to include the percentage of their total contracts that were reported by contractors in their CMRA system and the extent to which reported data were used to support their inventory submission. Contractors are required to report labor-hour data by the end of October for work performed during the one-year period beginning October 1 of the prior year and ending September 30. DOD components are then supposed to use this contractor-reported data from CMRA to help develop their inventories. We found that 22 out of the 40 components, comprising about 96 percent of total FTEs reported in the DOD inventory, reported using CMRA data for the fiscal year 2014 inventory submission. In contrast, only nine components reported using CMRA to do so in the fiscal year 2013 inventory. Table 2 identifies changes in use of CMRA data by the military departments from fiscal year 2013 to fiscal year 2015. The Air Force and Navy both continue to rely heavily on FPDS-NG data to derive the contractor FTEs for those contracts not entered into CMRA. Navy officials stated that they do not yet consider all contractor-reported CMRA data robust enough to support consistent, reliable use for the inventory. 
However, as we have previously reported, the FPDS-NG system has several limitations that reduce its utility for purposes of compiling a complete and accurate inventory: it is not able to identify and record more than one type of service purchased for each contracting action entered into the system; it is not able to capture any services performed under contracts that are predominantly for supplies; it is not able to specifically identify the requiring activity; it does not capture service contracts awarded on behalf of DOD by non-DOD agencies; and it is not able to determine the number of contractor FTEs used to perform each service. Since 2011, we have made 13 recommendations to help improve how DOD collects, reviews, and uses the data from the inventory of contracted services (see appendix I for a complete list and the status of DOD's actions to address them). We are not making any new recommendations in this report, but rather we underscore the need to address the 8 recommendations that remain open. In particular, DOD needs to resolve the long-standing delays and uncertainties regarding implementation of the ECMRA system—or an alternative to that system—which have hindered efforts to provide reliable and accurate data. Over five years ago, we recommended that DOD develop a plan of action with timeframes and necessary resources to measure DOD's progress in implementing a common data system, and we offered a similar recommendation two years ago when it began to explore options for an appropriate enterprise solution to facilitate data collection. Delays in making that decision have had a cascading effect on fully staffing its management support office, as well as defining the roles, responsibilities, and relationships between this office, the military departments, and other stakeholders. 
Continued delays in making a decision increase the risk that DOD will remain unable to collect and analyze service contract data and develop associated business processes in a manner that supports workforce and budget planning. Conversely, choosing a path forward, providing a rationale for that choice, and developing a plan of action with implementation timeframes and milestones could help the department move past the debate over whether to use ECMRA or an alternative system and focus on what data to collect and how best to use that data once collected. As we concluded in January 2011, the real benefit of the inventory will ultimately be measured by its ability to inform decision making. We further noted that the absence of a way forward was hindering the achievement of this objective. More than five years later, those conclusions remain unchanged. We are not making new recommendations in this report. We provided a draft of this report to the Department of Defense for comment. In its written comments, which are reprinted in appendix III, DOD stated that it is committed to improving its processes surrounding the inventory and to working to close the eight open recommendations discussed in the report. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Air Force, and Navy; the Under Secretary of Defense for Personnel and Readiness; and the Under Secretary of Defense for Acquisition, Technology, and Logistics. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or dinapolit@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix IV. In November 2014, GAO reported on the status of efforts by the military departments to develop plans with enforcement mechanisms to use the inventory of contracted services to inform management decisions in three primary areas: strategic workforce planning, workforce mix, and budgeting. In November 2015, we updated these findings. In those reports, we determined that the military departments generally had not developed plans and enforcement mechanisms to use the inventory to inform these decisions, as required by subsection 2330a(f) of title 10 of the U.S. Code. Our current work found that the departments have made minimal updates to this guidance with respect to use of the inventory of contracted services for management decisions. The primary exception relates to budgeting, where the Army's Command Program Guidance memorandum for the Fiscal Years 2018-2022 Program Objective Memorandum requires the Army to use the inventory review when formulating budget requests for contracted services. The following summarizes the degree to which the Department of Defense (DOD) and the military departments' guidance currently require the use or consideration of the inventory in these areas and identifies where DOD or the military departments have updated their guidance since our November 2015 report. Updates since our November 2015 report are italicized in the following tables. The Under Secretary of Defense for Personnel and Readiness (USD(P&R)) has overall responsibility for developing and implementing DOD's strategic workforce plan to shape and improve DOD's civilian workforce, including an assessment of the appropriate total force mix. USD(P&R) issued guidance that designated responsibility for the development of the strategic workforce plan to the Deputy Assistant Secretary of Defense for Civilian Personnel Policy, but did not require use of the inventory. 
The guidance pre-dates the statutory requirement to use the inventory to inform strategic workforce planning. For example, the Fiscal Years 2013-2018 Strategic Workforce Plan, the most recent plan available at the time of our November 2014 and 2015 reviews, stated that DOD's plans for identifying and assessing workforce mix will leverage the inventory of contracted services, but did not provide any additional details on using the inventory. None of the three military departments had developed a statutorily required plan or enforcement mechanism to use the inventory of contracted services for strategic workforce planning, and they generally had not developed guidance or processes for these purposes (see table 4). DOD has two department-wide policies for determining workforce mix—DOD Directive 1100.4 and DOD Instruction 1100.22—but neither currently requires the use of the inventory to inform workforce mix planning. DOD Directive 1100.4, dated February 2005, provides general guidance concerning determination of manpower requirements, managing resources, and manpower affordability. According to USD(P&R) officials, revisions to this directive, which are currently under review, will explicitly require use of the inventory to inform budgeting and total force management decisions. DOD Instruction 1100.22, dated April 2010, provides manpower mix criteria and guidance for determining how individual positions should be designated based on the work performed. This instruction does not direct the military departments to develop a plan to use the inventory to inform management decisions, as DOD issued it before the enactment of the requirement for developing such plans. DOD's primary insourcing guidance is reflected in April 4, 2008, and May 28, 2009, memorandums. 
These memorandums reiterate statutory requirements by calling for DOD components and the military departments to use the inventory of contracted services to identify functions for possible insourcing and to develop a plan for converting these functions within a reasonable amount of time. Among the military departments, however, only the Army has guidance and a process that requires use of the inventory of contracted services for insourcing. Beyond insourcing, the military departments have not issued guidance for managing workforce mix that requires the use of the inventory of contracted services (see table 5). DOD's Financial Management Regulation provided, among other things, guidance to the military departments on budget formulation and presentation; however, these regulations did not require the military departments to use the inventory in formulating and presenting their budgets. At the military department level, the Air Force had issued additional instructions on budget formulation and presentation. However, the Air Force's guidance did not require the use of the inventory. More recently, the Army's February 2016 guidance, Command Program Guidance Memorandum for the Fiscal Years 2018-2022 Program Objective Memorandum, requires the use of the inventory review certification in budget formulation. The Comptroller issued supplemental guidance requiring, among other things, that the military departments and defense components provide information on the number of FTEs as required under section 235 of title 10 of the U.S. Code, but this guidance did not require reporting the amount of funding requested for contracted services. The Comptroller guidance for budget submissions from all components instructed DOD components to ensure that contractor FTEs reported in the budget exhibit were consistent with those in DOD's inventory of contracted services. 
Both Navy and Air Force officials reported that they used the inventory of contracted services to estimate the number of contractor FTEs for inclusion in their budget request. The Army budget office could not identify how the Army estimated FTEs in the Army’s budget submission (see table 6). In addition to the contact named above, Janet McKelvey (Assistant Director); Emily Bond; Virginia Chanley; Mackenzie Doss; Kristine Hassinger; Julia Kennon; Scott Purdy; and Roxanna Sun made significant contributions to this review.
DOD is the government's largest purchaser of contractor-provided services. In 2008, Congress required DOD to compile and review an annual inventory of its contracted services to identify the number of contractors performing services and the functions contractors performed. In 2011, Congress required DOD to use that inventory to inform certain decision-making processes, including workforce planning and budgeting. GAO has previously reported on the challenges DOD faces in compiling, reviewing, and using the inventory. Since 2011, GAO has made 13 recommendations intended to improve DOD's use of the inventory. Of these, DOD has yet to fully address 8 open recommendations. Congress included a provision in statute for GAO to report on DOD's required reviews and plans to use the inventory. This report assesses the extent to which DOD components (1) reviewed contracts and activities in the fiscal year 2014 inventory of contracted services, and (2) developed plans to use the inventory for decision making. GAO reviewed relevant laws and guidance and 40 components' inventory review certification letters, and interviewed DOD acquisition, manpower, and programming officials. In fiscal year 2014, 40 Department of Defense (DOD) components in total certified that they had conducted an inventory review. Components are required by DOD guidance to address six elements in their certification letters, including, for example, identifying any inherently governmental functions and unauthorized personal services contracts. More components—21 out of 40—addressed all of the required review elements compared to prior years. However, DOD components may continue to underreport the extent to which contractors were providing services that are closely associated with inherently governmental functions, a key review objective to help ensure that DOD has proper oversight in place. 
For example, GAO's analysis indicates that DOD obligated about $28 billion for contracts in 17 categories—such as professional and management support services—that the Office of Federal Procurement Policy and GAO identified as more likely to include closely associated with inherently governmental functions. In comparison, components identified a total of $10.8 billion in obligations or dollars invoiced for contracts that included work identified as closely associated with inherently governmental functions—either within the 17 categories or for any other category of service. Most of these functions were identified by the Army using its long-standing review process. The military departments have not yet developed plans to use the inventory to inform workforce mix, strategic workforce planning, and budget decision-making processes, as statutorily required. DOD has made some recent progress on requiring components to identify an accountable official to lead efforts to develop plans and establish processes for using their inventories in decision making, a step GAO recommended in November 2014. However, DOD faces continued delays in deciding on the path forward for its underlying inventory data collection system, staffing its inventory management support office, and formalizing the roles and responsibilities of that office and stakeholders (see figure). GAO previously recommended that DOD address these issues to improve the usefulness of the inventory. DOD concurred with these recommendations but has not yet addressed them. These continued delays hinder DOD's ability to use the inventory of contracted services as intended, including using the inventory data to inform workforce and budget decision-making processes. GAO is not making new recommendations in this report. 
In its comments, DOD noted that it intends to address GAO's eight open recommendations, including those related to determining its approach for compiling the inventory and defining the roles and responsibilities of a key support office and stakeholders.
DOD invests in electronic warfare capabilities as a means to maintain unimpeded access to the electromagnetic spectrum during war and selectively deny adversary use of the spectrum. Traditionally, electronic warfare has been composed of three primary activities:

Electronic attack: Use of electromagnetic, directed energy, or antiradiation weapons to attack with the intent of degrading, neutralizing, or destroying enemy combat capability.

Electronic protection: Passive and active means taken to protect personnel, facilities, and equipment from the effects of friendly or enemy use of the electromagnetic spectrum.

Electronic warfare support: Actions directed by an operational commander to search for, intercept, identify, and locate sources of radiated electromagnetic energy for the purposes of immediate threat recognition, targeting, and planning, and the conduct of future operations.

Airborne electronic attack—a subset of the electronic attack mission—involves use of aircraft to neutralize, destroy, or temporarily degrade (suppress) enemy air defense and communications systems, either through destructive or disruptive means. These capabilities are increasingly important and complex as networked systems, distributed controls, and sophisticated sensors become ubiquitous in military equipment, civilian infrastructure, and commercial networks—developments that complicate DOD's ability to exercise control over the electromagnetic spectrum, when necessary, to support U.S. military objectives. Airborne electronic attack systems increase the survivability of joint forces tasked to enter denied battlespace and engage anti-access threats or high-value targets, whether involved in major combat operations against a potential near-peer adversary or in irregular warfare. They also enable access to the battlespace for follow-on operations. Aircraft executing airborne electronic attack missions employ a variety of mission systems, such as electronic jammers, and weapons, such as antiradiation missiles and air-launched expendable decoys. 
These aircraft also rely on aircraft self-protection systems and defensive countermeasures for additional protection. All four services within DOD contribute to and rely upon airborne electronic attack capabilities using a variety of different aircraft. Each service is also separately acquiring new airborne electronic attack systems. whether involved in major combat operations against Section 1053 of the National Defense Authorization Act for Fiscal Year 2010 requires that for each of fiscal years 2011 through 2015, the Secretary of Defense, in coordination with the Joint Chiefs of Staff and secretaries of the military departments, submit to the congressional defense committees an annual report on DOD’s electronic warfare strategy. department’s electronic warfare strategy and organizational structures for oversight; (2) a list and description of all electronic warfare acquisition programs and research and development projects within DOD; and (3) for the unclassified programs and projects, detail on oversight responsibilities, requirements, funding, cost, schedule, technologies, potential redundancies, and associated capability gaps, and for the classified programs and projects, a classified annex addressing these topics, when appropriate. In response to this requirement, DOD submitted its first Electronic Warfare Strategy of the Department of Defense report in October 2010. The department produced its second electronic warfare strategy report in November 2011. Pub. L. No. 111-84, § 1053 (a) (2009). DOD’s strategy for meeting airborne electronic attack requirements— including both near-peer and irregular warfare needs—centers on acquiring a family of systems, including traditional fixed wing aircraft, low observable aircraft, unmanned aerial systems, and related mission systems and weapons. Department analyses dating back a decade have identified capability gaps and provided a basis for service investments in airborne electronic attack capabilities. 
However, budget realities and lessons learned from operations in Iraq and Afghanistan have driven changes in strategic direction and program content. Most notably, the department canceled some acquisitions, after which the services revised their operating concepts for airborne electronic attack. These decisions saved money, allowing the department to fund other priorities, but reduced the planned level of synergy among airborne electronic attack systems during operations. As acquisition plans for these systems have evolved, operational stresses upon the existing inventory of weapon systems have grown. These stresses have materialized in the form of capability limitations and sustainment challenges for existing systems, prompting the department to invest in improvements to these systems to mitigate shortfalls.

Key DOD analyses completed since 2002 identified capability gaps, provided a basis for service investments in airborne electronic attack systems, and supported an overarching acquisition strategy for achieving these requirements. The department outlined its findings in reports that included an analysis of alternatives, a capabilities-based assessment, and initial capabilities documents. Figure 1 highlights a chronology of these analyses and identifies key airborne electronic attack components of each report.

The 2002 Airborne Electronic Attack Analysis of Alternatives established the primary framework by which the department began investing in new airborne electronic attack capabilities. The analysis focused on those capabilities needed to suppress enemy air defenses from 2010 to 2030. The study identified two primary components required to provide a complete and comprehensive airborne electronic attack solution:

Core component: A recoverable platform or combination of platforms operating in enemy airspace. The core component provides the airborne electronic attack detection and battle management capabilities for reactive jamming.
Stand-in component: An expendable air platform providing critical capabilities against certain advanced threat emitters and employed in threat environments not accessible to the core component.

Subsequent to this analysis, DOD developed a system of systems strategy for meeting airborne electronic attack mission needs. A system of systems is a set or arrangement that results when independent and useful systems are integrated into a larger, connected and interdependent system that delivers unique capabilities during military operations. The system of systems strategy established specific roles and operating responsibilities among the military services in a joint environment and expanded the basic core and stand-in component needs into four major capability areas for airborne electronic attack:

Stand-off: Jamming occurring outside of defended airspace. Planned stand-off systems included the Air Force's EC-130H Compass Call aircraft and development of an electronic attack variant of the Air Force's B-52.

Modified escort: Jamming occurring inside defended airspace, but outside the range of known surface-to-air missiles. Planned modified escort systems included the Navy's EA-18G Growler and EA-6B Prowler aircraft.

Penetrating escort: Jamming occurring inside the intercept range of known surface-to-air missiles. The department planned to rely on aircraft equipped with active electronically scanned array (AESA) radars, including the F-22A Raptor and F-35 Lightning II aircraft, to perform this jamming function.

Stand-in: Jamming occurring inside the "no escape range" of known surface-to-air missiles. The department planned to rely on development of recoverable Joint Unmanned Combat Air Systems (J-UCAS) and the Air Force's Miniature Air Launched Decoy—Jammer (MALD-J) to provide this function.

As time progressed, budget issues and lessons learned from operations in Iraq and Afghanistan drove changes to the strategy and program content.
Most notably, the department canceled development of two major components of the system of systems—the B-52 Standoff Jammer and J-UCAS—in 2005 and 2006, respectively, citing higher-priority needs and budget constraints. The B-52-based jamming concept was later rejuvenated through the Air Force's Core Component Jammer initiative, but that program was similarly canceled in 2009. Following these developments, the department revised operating concepts and joint service responsibilities, moving away from its system of systems plans in favor of a family of systems strategy for airborne electronic attack.

A family of systems is fundamentally different from a system of systems. Under a family of systems construct, independent systems—using different approaches—together provide capability effects to support military operations. Unlike the synergy found in a system of systems, a family of systems does not acquire qualitatively new properties or necessarily create capability beyond the additive sum of the individual capabilities of its members. The member systems may not even be connected into a whole. In the case of airborne electronic attack, DOD officials stated that a system of systems would have employed a dynamic, networked capability to share data in real time among platforms—a concept known as electronic warfare battle management. Under the family of systems strategy, officials stated that this process is less automated and the parts are less connected. Therefore, in making this strategy change, the department traded some unique, synergistic capabilities that the system of systems' interdependent components might have provided in favor of near-term budget savings and other priorities. Figure 2 outlines the department's current family of systems strategy for countering near-peer adversaries. This family of systems includes traditional fixed wing aircraft, low observable aircraft, and related mission systems and weapons.
DOD’s 2009 electronic warfare capabilities analysis identified the growth of irregular warfare in urban areas as presenting challenges to military operations. The analysis noted that irregular adversaries can exploit civilian and commercial communications infrastructure to minimize detection and subsequent attack. According to the department, precise electronic attack planning and execution are required to ensure that these threats are defeated while avoiding interruption to U.S. communications capabilities. The department has used existing airborne electronic attack systems, such as the EA-6B and EC-130H, to meet its near-term irregular warfare needs in Iraq and Afghanistan. However, officials report that these platforms are optimized for countering high-end, near-peer threats, and their use against irregular warfare threats is inefficient and costly. Consequently, the department has begun investing in new, less expensive airborne electronic attack systems tailored to counter irregular warfare threats. These systems are fielded from both traditional fixed- wing aircraft and from unmanned aerial vehicles. Figure 3 illustrates operations involving these systems. As DOD’s acquisition plans for airborne electronic attack systems have evolved, operational stresses upon the current inventory of systems have grown. These systems date back to the 1970s and 1980s and were originally designed to counter Cold War era threats. Many of the department’s existing airborne electronic attack systems face capability limitations, requiring the department to pursue modernization efforts to increase the effectiveness of the systems or to identify and develop replacement systems. Further, existing systems face sustainment challenges from age, parts obsolescence, and increased operational stresses from lengthy and sustained operations in Iraq and Afghanistan. According to Air Force and Navy officials, these challenges have reduced the availabilities of some systems to warfighters. 
Table 1 identifies the department's existing airborne electronic attack systems and related characteristics, including future replacement systems identified to date. DOD is taking actions to address capability limitations and sustainment challenges across several key systems, such as the following:

EA-6B Prowler: Since its introduction in the 1970s, the Navy and Marine Corps have made significant upgrades to the EA-6B Prowler. The latest of these upgrades, the Improved Capability III (ICAP III) electronic suite modification, provides the Prowler with greater jamming capability and is designed to improve the aircraft's overall capability as both a radar-jamming and HARM platform. By the end of fiscal year 2012, 32 EA-6Bs will be upgraded to the ICAP III configuration. Navy officials told us that persistent operations in Iraq and Afghanistan, however, have degraded the condition of EA-6B aircraft. In addition, we have previously reported that parts obsolescence presents the biggest challenge to the EA-6B's ability to fulfill its mission role. We noted that although the Navy has made several structural upgrades to the EA-6B fleet, it is actively tracking a number of key components, including cockpit floors, side walls, fin pods, bulkheads, actuators, engine components, landing gear, and avionics software—all of which are at increasing risk for costly replacement the longer the aircraft remains in service.

HARM: According to Navy officials, even though the high-speed antiradiation missile (HARM) has undergone various block upgrades to provide increased capabilities since fleet introduction in 1983, advancements in enemy radar technology have rendered the weapon somewhat ineffective against typical Navy targets. As a result, the Navy is fielding a major technological upgrade to HARM through its Advanced Anti-Radiation Guided Missile (AARGM) acquisition program. AARGM provides a new multimode guidance section and modified control section mated with existing HARM propulsion and warhead sections.
The Air Force, similarly, is pursuing modifications to HARM control sections on missiles in its inventory—a process that will add a global positioning system receiver to those units. Air Force officials stated that they have long sought this receiver because of vulnerabilities in the HARM targeting method. This effort is being pursued in conjunction with other modernization efforts for Air Force F-16CM aircraft.

TALD and ITALD: Navy officials stated that advancements in enemy integrated air defense systems have decreased the effectiveness of both Tactical Air-Launched Decoy (TALD) and Improved Tactical Air-Launched Decoy (ITALD) units. According to program officials, newer radars can discern from the TALD/ITALD flight profile that the system is a decoy and not a valid target. The Navy has begun evaluating TALD/ITALD replacement options under its Airborne Electronic Attack Expendable program initiative.

EC-130H Compass Call (Baselines 0 and 1): Although the Air Force initially fielded the EC-130H Compass Call as a communications jammer supporting suppression of enemy air defenses, the system has evolved to include irregular warfare missions and radar jamming. Air Force officials told us that the Compass Call is the most utilized aircraft within the C-130 family and has been continuously deployed since 2003 supporting operations in Iraq and Afghanistan, accelerating the need for the Air Force to replace the center wing box on each of the 14 aircraft in the Compass Call fleet. Further, Air Force officials told us that they are increasing the size of the fleet by one aircraft to alleviate stress on current aircraft and to increase the availability of airborne electronic attack capability to the Air Force. According to a fleet viability assessment completed in 2010, the current size of the fleet is insufficient to meet combatant commander taskings for Compass Call.
AN/ALQ-99 Tactical Jamming System: The Navy's Low Band Transmitter upgrade to the AN/ALQ-99 system is intended to replace three aging legacy transmitters that suffer from obsolescence and reliability problems. According to Navy officials, persistent use of these transmitters in support of operations in Iraq and Afghanistan has exacerbated system shortfalls. Navy officials told us that they are also identifying options for improving reliability and resolving obsolescence issues with the mid and high bands of the AN/ALQ-99 system. However, Navy officials project that even with these improvements, system capabilities will be insufficient to counter anticipated evolutions in threat radars and missiles beginning in 2018. This shortfall is expected to be addressed by the new Next Generation Jammer.

AN/ALQ-131 and AN/ALQ-184 Pod Systems: The Air Force has identified obsolescence issues and capability shortfalls affecting these systems, which provide tactical aircraft self-protection. The Air Force is pursuing a replacement/upgrade program designed to move the Air Force to a single self-protection pod system for its F-16 and A-10 aircraft.

DOD is investing in new airborne electronic attack systems to address its growing mission demands and to counter anticipated future threats. However, progress acquiring these new capabilities has been impeded by developmental and production challenges that have slowed fielding of several planned systems. Some programs, including the Navy's EA-18G Growler and the Air Force's EC-130H Compass Call modernization, are in stable production and have completed significant amounts of testing. On the other hand, the Navy's AARGM, the Air Force's Miniature Air Launched Decoy (MALD), and other programs have required additional time and money to resolve technical challenges.
In addition, certain airborne electronic attack systems in development may offer capabilities that overlap with one another—a situation brought on in part by the department's fragmented urgent operational needs processes. As military operations in Iraq and Afghanistan decrease, opportunities exist to consolidate current acquisition programs across the services; however, this consolidation may be hampered by leadership deficiencies affecting the department's electronic warfare enterprise. Furthermore, current and planned acquisition programs, even if executed according to plan, will not fully address the materiel-related capability gaps identified by the department—including some that date back 10 years.

DOD investments to develop and procure new and updated airborne electronic attack systems are projected to total more than $17.6 billion from fiscal years 2007 through 2016. These systems represent the department's planned mix of assets for (1) countering near-peer, integrated air defense and communications systems and (2) providing communications and radio frequency jamming against irregular warfare threats. Table 2 outlines the department's recent and planned investments toward developing and acquiring several of these systems. As table 2 shows, several airborne electronic attack systems are in an advanced stage of funding. However, under current estimates, over $6.0 billion in funding is still required to fully deliver these new systems to the warfighter. Further, the department has not yet identified the full amount of funding required for certain key systems, such as the Next Generation Jammer, which could require billions of additional dollars to field. Consistent with their different funding profiles, the department's new systems are also in various stages of development, with some progressing more efficiently than others. Table 3 identifies the mission role(s), developmental status, and fielding plans for these systems.
In addition, appendix II provides additional details on the status of several of these programs. Some airborne electronic attack acquisition programs have reached stable production with limited cost growth or schedule delays. Two primary examples include the following:

EA-18G Growler: Acquisition of the EA-18G Growler—a modified escort jamming platform designed to carry AN/ALQ-99 and future Next Generation Jammer pods—achieved initial capability in September 2009, consistent with its 2007 baseline schedule. Additionally, program costs per aircraft increased less than one-half of 1 percent from 2003 to 2010—an outcome partially attributable to quantity increases from 90 to 114 aircraft.

EC-130H Compass Call (Baselines 2 and 3): Modernization of the EC-130H Compass Call is on schedule to field a new increment of capability, Baseline 2, in 2014 within available funding limitations. Baseline 2 introduces several new capabilities, including reactive radar response and a Joint Tactical Radio System terminal, which has been delayed because of testing challenges. However, Compass Call program officials do not expect the radio system delay to affect the program's fielding plans for Baseline 2 aircraft. According to the Air Force, cost considerations are a primary criterion in developing EC-130H capability requirements. The program office does not consider potential aircraft improvements unless those improvements are accompanied by full funding. The Air Force is initiating technology development activities for a subsequent phase of the modernization program, Baseline 3, and plans to begin production of these aircraft in 2014, with initial fielding scheduled for 2017.
Our previous work has shown that good acquisition outcomes are achieved through a knowledge-based approach to product development that demonstrates high levels of knowledge before significant commitments are made. This model relies on increasing knowledge when developing new products, separating technology development from product development, and following an evolutionary or incremental approach to product development. In this approach, developers make investment decisions on the basis of specific, measurable levels of knowledge at critical junctures before investing more money and before advancing to the next phase of acquisition. In essence, knowledge supplants risk over time. The good outcomes on the EA-18G and EC-130H programs can be attributed, in part, to acquisition strategies embodying elements of best practices.

Other airborne electronic attack acquisition programs have not progressed as efficiently, however. These systems have proceeded through product development with lower-than-desired levels of knowledge and subsequently faced technical, design, and production challenges, contributing to significant cost growth, fielding delays, or both. Most notably, these systems entered—or are on track to enter—production before completing key development activities, including achievement of stable designs. We previously reported that concurrency in development and production activities limits the ability of an acquisition program to ensure that the system will work as intended and that it can be manufactured efficiently to meet cost, schedule, and quality targets.

MALD/MALD-J: MALD was authorized for low rate initial production in June 2008 with an initial plan for 300 low rate initial production units in two lots, beginning in March 2009.
However, testing failures in 2010 and 2011—coupled with a desire to avoid a potentially costly break in production—prompted the Air Force to extend MALD low rate initial production by two additional lots and increase total quantities under contract to 836. In September 2011, citing "successful completion of MALD-J engineering and manufacturing development activities," the Air Force exercised a priced option to upgrade 240 of its planned MALD units to the MALD-J configuration, subsequently decreasing MALD quantities to 596. Because all future production lots are now planned as jammer-configured decoys (MALD-J), the 596 total represents the full MALD procurement—without the program having ever met the criteria necessary to proceed into full rate production. Since the MALD and MALD-J designs are identical—except for the addition of a jammer module to MALD-J—the absence of a proven manufacturing process for MALD introduces schedule risk to production of MALD-J. This risk is accentuated by continuing deficiencies affecting the MALD and MALD-J designs, which have required the Air Force to schedule additional developmental flight tests for each system in February 2012 to test corrective fixes. To the extent that this retesting phase shows a need for additional design changes, the Air Force may be forced to revisit its planned May 2012 production start for MALD-J.

AARGM: The Navy authorized low rate initial production of AARGM units in September 2008, with initial deliveries scheduled to begin in January 2010. A total procurement objective of 1,919 units was set and an initial operational capability scheduled for March 2011. However, as a result of intermittent hardware and software failures in testing, the program was decertified for initial operational test and evaluation in September 2010, and low rate initial production deliveries were delayed until June 2011. The missile has subsequently reentered testing, but significant concerns about the system's reliability remain.
Further, Navy officials stated that the current program schedule is oriented toward success, with virtually no margin to accommodate technical deficiencies that may be discovered during operational testing. In the event operational testing reveals new or lingering major deficiencies, program officials report the planned April 2012 fielding date will be at risk, and the Navy may be forced to revisit its commitment to the program.

IDECM: From December 2000 to June 2010, the Navy authorized six different low rate initial production lots of Integrated Defensive Electronic Countermeasures (IDECM) Blocks 2 and 3, providing system improvements to the jammer and decoy components. Block 2 production units delivered ahead of schedule, but early Block 3 units encountered operational testing failures that, although later resolved, drove production delays for the remaining units. In Block 4, the Navy is introducing significant hardware design changes to the ALQ-214 jammer component. Ground and flight testing to prove out these design changes is scheduled concurrent with the transition to production in April 2012, increasing the risk that initial Block 4 units will require design changes and retrofits. Navy officials stated that this concurrency is necessary in order to maintain an efficient production line transition from Block 3 to Block 4 and to meet the desired June 2014 fielding date. They further noted that transition to Block 4 production will initially be for 19 systems, with production rates increasing to as many as 40 per year following completion of testing.

Certain airborne electronic attack systems in development may offer capabilities that unnecessarily overlap with one another. This condition appears most prevalent with irregular warfare systems that the services are acquiring under DOD's fragmented urgent operational needs processes. For example, the Marine Corps, Army, and Air Force have all separately invested to acquire unique systems intended to jam enemy communications in support of ground forces.
Further, Navy and Air Force plans to separately invest in new expendable decoy jammers—systems intended to counter near-peer adversaries—also appear to overlap. Declining military operations in Iraq and Afghanistan—coupled with recent changes in the Air Force's MALD-J program—afford opportunities to consolidate current service-specific acquisition activities. The department's ability to capitalize on these opportunities, however, may be undermined by a lack of designated, joint leadership charged with overseeing electronic warfare acquisition activities.

DOD is investing millions of dollars to develop and procure airborne electronic attack systems uniquely suited for irregular warfare operations. The services are acquiring these systems under both rapid acquisition authorities and the traditional acquisition process. These systems overlap—at least to some extent—in terms of planned mission tasks and technical challenges to date. Yet they have been developed as individual programs by the different services. Table 4 highlights overlap among three of these systems. According to DOD officials, airborne electronic attack limitations in recent operations, urgent needs of combatant commanders, and the desire to provide ground units with their own locally controlled assets have all contributed to service decisions to individually develop their own systems to address irregular warfare threats. For example, one Marine Corps official told us that his service is focused on increasing its airborne electronic attack capacity to meet Marine Air-Ground Task Force requirements in combat. Marine Corps systems typically equipped to perform these tasks—especially the EA-6B Prowler aircraft—have reached their capacity limits responding to combatant commander taskings.
Similarly, Air Force officials stated that ground warfighter requests for airborne electronic attack capabilities sometimes go unfulfilled or are delayed because of the overall constrained capacity during current operations. Further, Army and Marine Corps officials see operational benefits to providing ground unit commanders with smaller airborne electronic attack assets—permanently integrated within the unit—to free up Air Force and Navy assets for larger-scale missions. In addition, the capabilities offered by current jamming pods, such as the AN/ALQ-99, are often overkill for the irregular warfare mission needs—such as counter-improvised explosive device activities—facing ground unit commanders.

Requirements for several of these irregular warfare systems were derived from DOD urgent needs processes—activities aimed at rapidly developing, equipping, and fielding solutions and critical capabilities to the warfighter in a way that is more responsive to urgent requests than the department's traditional acquisition procedures. As we previously reported, the department's urgent needs processes often lead to multiple entities responding to requests for similar capabilities, resulting in potential duplication of efforts. Even under these circumstances, the services have shown it is possible to take steps to share technical information among the different programs and services. For instance, the Army's CEASAR pod is derived from the AN/ALQ-227 communications jammer used on the Navy's EA-18G—an attribute that Army officials state reduced design risk in the program and provided opportunities for decreased sustainment costs and reuse of jamming techniques between the two services. Similarly, Air Force efforts to develop electronic attack pods flown on MQ-9 Reaper unmanned aerial vehicles (prior to that program's cancellation) leveraged previous technology investments for the canceled B-52-based stand-off jammer.
As military operations in Iraq and Afghanistan wind down—and the services evaluate whether to transition their current urgent needs programs to the formal weapon system acquisition process—opportunities may exist to consolidate program activities. For example, the Intrepid Tiger II and CEASAR systems are still demonstration programs, and their transitions to formal acquisition programs have not yet been determined.

The potential for unnecessary overlap in efforts within the airborne electronic attack area is not limited to irregular warfare systems. With respect to near-peer systems, both the Air Force and Navy are separately pursuing advanced jamming decoys—the Air Force through its MALD-J program, and the Navy through its planned Airborne Electronic Attack Expendable initiative. The two services have held discussions with one another about combining efforts toward a joint solution, including a meeting between Navy and Air Force requirements offices and acquisition officials in December 2010, but they have not yet reached resolution on a common path forward. According to Navy officials, relatively minor design and software modifications to what was a planned second increment to the Air Force's MALD-J system could produce a system that satisfies both services' mission requirements. However, Air Force officials stated that accommodating the Navy's mission requirements within the system would increase program costs and delay planned fielding of the Increment II system, essentially rendering the planned program unexecutable. Subsequently, Air Force officials stated that unless Increment II, in its planned configuration, sufficiently met Navy requirements, they did not expect the Navy to have any formal role in the program. In July 2011, however, the Air Force suspended MALD-J Increment II activities because of a lack of future funding availability. In February 2012, the Air Force's fiscal year 2013 budget submission officially canceled the program.
This cancellation affords an opportunity for continued dialogue between the two services on the potential benefits and drawbacks of pursuing a common acquisition solution.

In 2009, DOD completed a capabilities analysis that cited electromagnetic spectrum leadership as the highest priority among 34 capability gaps identified. The study concluded, in part, that deficiencies in leadership, or its absence, significantly impede the department from both identifying departmentwide needs and solutions and eliminating potentially unnecessary overlap among the services' airborne electronic attack acquisitions. Specifically, the department lacks a designated, joint entity to both coordinate internal activities and represent electronic warfare activities and interests to outside organizations. Acknowledging this leadership gap, and its relation to acquisition activities, the department has initiated efforts to organize the Joint Electromagnetic Spectrum Coordination Center under the leadership of U.S. Strategic Command. In addition, officials representing the Office of the Assistant Secretary of Defense for Research and Engineering stated that they are considering actions they might take to improve leadership and oversight of electronic warfare acquisition activities across the services. In a separate report, we intend to evaluate planned and existing electronic warfare governance structures within DOD.

Notwithstanding the considerable investment over the years in new and enhanced airborne electronic attack systems and subsystems, capability gaps, some identified a decade ago, are expected to persist, or even increase, through 2030 as adversary capabilities continue to advance. In a series of studies since 2002, DOD identified current and anticipated gaps in required capabilities. Some have persisted for years—for example, deficiencies in certain jamming capabilities to provide cover for penetrating combat aircraft.
The analyses found that, in many cases, new materiel solutions were required to close these gaps. Table 5 outlines primary findings from three major analyses. The 2002 analysis identified needs for stand-in and core component jamming capabilities and suggested numerous ways to meet those needs. The 2004 study revalidated these gaps and outlined 10 potential materiel solutions to fill them. It also acknowledged the existence of both near-peer and irregular warfare threats requiring airborne electronic attack solutions. The Army and Marine Corps requested that the analysis address irregular warfare threats because of growing concern over improvised explosive devices in Iraq and Afghanistan and the suboptimal application of existing systems in the inventory to defeat those threats. The Air Force concluded in its analysis that fulfilling airborne electronic attack mission needs would require developing and fielding multiple new systems. The most recent study, U.S. Strategic Command’s Electronic Warfare Initial Capabilities Document, identified additional capability gaps affecting airborne electronic attack. This 2009 analysis built upon a capabilities-based assessment completed a year earlier and outlined mitigation strategies to address these gaps rather than merely prescribing specific platform solutions. This approach was consistent with the analysis’s charter to guide and inform the services’ acquisition programs. However, the analysis did recommend specific capabilities and system attributes for the Next Generation Jammer program to consider that would assist in mitigating some of the identified gaps. The analysis also concluded that new systems would be needed to close nearly half of the gaps identified in airborne electronic attack capabilities. To supplement its acquisition of new systems, DOD is undertaking other efforts to bridge existing airborne electronic attack capability gaps.
In the near term, the services are evolving their tactics, techniques, and procedures for operating existing systems to enable them to take on additional mission tasks. These activities maximize the utility of existing systems and better position operators to complete missions with equipment currently available. Longer-term solutions, however, depend on the department successfully capitalizing on its investments in science and technology. DOD has recently taken actions that begin to address long-standing coordination shortfalls in this area, including designating electronic warfare as a priority area for investment and creating a steering council to link capability gaps to research initiatives. However, these steps do not preclude the services from funding their own research priorities ahead of departmentwide priorities. DOD’s planned implementation roadmap for electronic warfare offers an opportunity to assess how closely component research investments align with the departmentwide electronic warfare priority. The refinement of tactics, techniques, and procedures can position the services to maximize the capabilities of existing systems while new capabilities are being developed. As Navy airborne electronic attack operators stated, when a capability gap requiring a new system is identified, warfighters generally do not have the luxury of waiting for the acquisition community to develop and field a system to fill that gap. In the interim, tactics, techniques, and procedures for existing systems must evolve to provide at least partial mitigation of the threat being faced. Development and refinement of new ways to use existing equipment allow the services to maximize the utility of their airborne electronic attack systems and leave them better positioned to complete missions with the assets they have available.
The following two systems provide examples where operator communities have refined tactics, techniques, and procedures to meet emerging threats:

AN/ALQ-99 Tactical Jamming System: Navy officials told us that threats encountered in Iraq and Afghanistan operations have driven significant changes to how the AN/ALQ-99 Tactical Jamming System is employed. In essence, tactics, techniques, and procedures for the system had to evolve to maximize the system’s capabilities against irregular warfare threats. According to Navy officials, however, these adaptations represent only a temporary solution, as their application, coupled with increased operational activity, has caused jamming pods to degrade and burn out at an increasing rate, subsequently increasing maintenance requirements for the system.

EC-130H Compass Call: According to Air Force officials, EC-130H tactics, techniques, and procedures have rapidly evolved to encompass dynamically changing electronic attack threats, including irregular warfare. These changes include modifications both to how the operator employs the aircraft and to the range of threats targeted by mission planners.

Both Navy and Air Force officials emphasized that sustained investments in tactics, techniques, and procedures offer considerable return on investment and can provide important, near-term solutions to longer-term, persistent threats. According to these officials, these investments position operators to “do more with less”; in effect, they offer operators the opportunity to mitigate or counteract a threat without a new system. However, there are limits to the extent to which refinements to current operating approaches for existing systems can bridge capability gaps. For example, it is increasingly difficult to further optimize AN/ALQ-99 jamming pods to counter advanced, integrated air defense systems.
Specifically, Navy officials stated that the AN/ALQ-99 has reached the limit of its underlying architecture’s capability to grow to counter new, sophisticated types of threats. Investment in the science and technology research base is a longer-term approach DOD uses to address capability gaps in mission areas. Electronic warfare, including airborne electronic attack, is supported by research investments in fields such as sensors, apertures, power amplifiers, and unmanned aircraft technology that may help address existing capability gaps. Service components categorize research investments differently from one another, which complicates efforts to clearly define the funding devoted to airborne electronic attack. Table 6 identifies some of DOD’s current airborne electronic attack-related research investments. However, not all investments in these fields will necessarily improve airborne electronic attack capabilities. Research officials identify the transition to system development and procurement as one of the primary goals of defense research programs, but reasonably acknowledge that not every program will successfully develop a transitionable product. Some acquisition programs, such as the Next Generation Jammer and the MQ-9 Reaper Electronic Attack Pod, invest directly in research to guide the transition process and increase the likelihood of success. But even with this direct attention, technology maturation and development for the Next Generation Jammer is expected to last 8 to 9 years. Consequently, current science and technology initiatives represent a long-term investment in future capabilities and are less suited to meeting existing needs. DOD analyses during the past decade have identified coordination deficiencies that constrain the department’s ability to capitalize on its science and technology investments.
For instance, a 2005 Naval Research Advisory Committee report found that within the Navy, research and development efforts were unduly fragmented, with one laboratory or development activity often being unaware of what another was doing. Further, this study highlighted the lack of a long-range science and technology investment planning process within the Navy. Similarly, in 2007, the Defense Science Board reported that although relevant and valuable science and technology activity was occurring, an overarching, departmentwide strategic technology plan with assigned responsibility, accountability, and metrics did not exist. According to the board, DOD’s science and technology activities and investments should be more directly informed by the department’s strategic goals and top-level missions, an objective that would require a closer coupling of technologists and users, including requirements and capabilities developers. A 2010 Naval Research Advisory Committee report, building on these previous findings, noted that stewardship of long-term naval capabilities was “vague at best” and lacked specific organizational assignment. The report recognized the Navy as having the lead role within DOD for electronic warfare, but identified sporadic and uncoordinated execution across the technical community, noting little evidence of engagement among the science and technology community at large. Further, the report advised that closer coordination between the operational and technical communities was essential for realizing desired long-term capabilities. DOD has recently taken actions that begin to address these shortfalls, including formalizing existing investment processes for several key science and technology areas. Most notably, in April 2011 the Secretary of Defense designated electronic warfare as one of seven priority areas for science and technology investment for fiscal years 2013 through 2017.
According to officials from the Office of the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)), this designation carries the promise of increased research funding and has prompted the chartering of the interdepartmental Electronic Warfare Priority Steering Council. This council is made up of research officials from ASD(R&E), the services, and various defense science and technology groups, such as the Defense Advanced Research Projects Agency (DARPA), and is charged with evaluating electronic warfare capability gaps and linking them with the research initiatives necessary to fill them. To support this process, the council is developing an implementation roadmap to guide coordination of investments within the electronic warfare area. The council also facilitates ASD(R&E) coordination with requirements teams and with service and external research offices to determine the specific fields of inquiry that will be needed to support planning for future electronic warfare capability needs. Previously, this coordination was handled informally; the new council lends authority and visibility to the discussions and decisions made. Notwithstanding these important steps, the services may inevitably face situations where they have to choose between funding their own, service-specific research priorities and funding departmentwide priorities. As the Assistant Secretary of Defense for Research and Engineering testified in 2011, DOD’s seven priority areas for science and technology investment are meant to be in addition to the priorities outlined by individual components (i.e., the service research agencies and DARPA). In other words, departmentwide science and technology priorities do not necessarily supplant service priorities. Absent strategic direction, however, the services have generally been inclined to pursue their own research interests ahead of departmentwide pursuits.
DOD’s planned implementation roadmap for electronic warfare offers opportunities to assess how closely component research investments align with the departmentwide electronic warfare priority and to coordinate component investments in electronic warfare. The rapidity of evolving threats, together with the time and cost associated with fielding new systems, poses a major challenge to DOD’s capacity to fill all of its capability gaps. This dynamic makes it imperative that the department get the most out of its electronic warfare investments. At this point, that does not appear to be the case. The systems being acquired have problems and will not deliver as expected; potential overlap, to the extent that it leads to covering some gaps multiple ways while leaving others uncovered, drains buying power from the money that is available; and DOD acknowledges a leadership void that makes it difficult to ascertain whether the current level of investment is optimally matched with the existing capability gaps. Within the airborne electronic attack mission area, budgetary pressures and related program cancellations prompted the department to change its acquisition strategy from a system of systems construct, as underpinned by the 2002 analysis of alternatives, to a potentially less robust, but more affordable, family of systems. In addition, new systems, including AARGM and MALD, that are designed to replace or augment legacy assets have encountered technical challenges during acquisition, subsequently requiring the services to delay fielding plans within each program. Other acquisition programs, including IDECM and MALD-J, are structured with a high degree of concurrency among development, production, and testing, a structure that positions them for similarly suboptimal outcomes.
Although individual service decisions to delay or cancel underperforming or resource-intensive programs may be fiscally prudent, the cumulative effect of these decisions creates uncertainty as to when, or if, current departmentwide airborne electronic attack capability gaps can be filled. At present, even if the department successfully acquires the full complement of systems outlined in its family of systems strategy, some capability gaps identified a decade ago may persist. As such, the department can benefit from reevaluating its capability gaps, using structures like the new Electronic Warfare Priority Steering Council, to identify which ones are the highest priorities for science and technology investment and to determine areas where it is more willing to accept mission risk. This analysis, when coupled with an examination of current service-specific science and technology investments, can position DOD to realize improved efficiencies in its electronic warfare research activities and better align constrained budgets with the highest-priority needs. Additionally, because underperformance in acquisition programs exacerbates existing capability gaps, realistic assessments of higher-risk programs can provide needed insight into what capabilities each platform is likely to deliver and when. Shortfalls in acquisition should not be the deciding factor in determining which capability gaps the department accepts. At the same time, the services continue to pursue and invest in multiple separate airborne electronic attack systems that potentially overlap with one another. This overlap is most evident in irregular warfare systems, including the Marine Corps’s Intrepid Tiger II and the Army’s CEASAR systems, but is also present in Air Force and Navy efforts to develop expendable jamming decoys through their respective MALD-J and Airborne Electronic Attack Expendable initiatives.
Pursuing multiple separate acquisition efforts to develop similar capabilities can result in the same capability gap being filled twice or more, can lead to inefficient use of resources, and may contribute to other warfighting needs going unfilled. Leveraging resources and acquisition efforts across the services, not just by sharing information but through shared partnerships and investments, can simplify developmental efforts, improve interoperability among systems and combat forces, and decrease future operating and support costs. Such outcomes can position the department to maximize the returns on its airborne electronic attack investments.

We recommend that the Secretary of Defense take the following five actions:

Given airborne electronic attack programmatic and threat changes since 2002, complete the following:

Conduct program reviews for the AARGM, IDECM, MALD, and MALD-J systems to assess cost, schedule, and performance, and direct changes within these investments, as necessary.

Determine the extent to which the most pressing airborne electronic attack capability gaps can best be met, using the assets that are likely to be available, and take steps to fill any potential gaps.

Align service investments in science and technology with the departmentwide electronic warfare priority, recognizing that budget realities will likely require trade-offs among research areas, and direct changes, as necessary.

To ensure that investments in airborne electronic attack systems are cost-effective and to prevent unnecessary overlap, take the following actions:

Review the capabilities provided by the Marine Corps’s Intrepid Tiger II and Army’s CEASAR systems and identify opportunities for consolidating these efforts, as appropriate.
Assess Air Force and Navy plans for developing and acquiring new expendable jamming decoys, specifically those services’ respective MALD-J and Airborne Electronic Attack Expendable initiatives, to determine if these activities should be merged.

We provided a draft of this report to DOD for comment. In its written comments, which are reprinted in appendix III, DOD concurred with three of our recommendations and partially concurred with two recommendations. DOD also provided technical comments that we incorporated into the report, as appropriate. DOD concurred with our first recommendation to conduct program reviews for the AARGM, IDECM, MALD, and MALD-J systems and direct changes within these investments, as necessary, identifying a March 2012 Navy review of the IDECM program and a planned July 2012 Navy review of the AARGM system. For MALD and MALD-J, DOD plans to conduct a program review in early 2014, which will coincide with a planned full rate production decision for MALD-J. In the interim, DOD intends to continue low rate initial production of MALD-J units. However, because MALD has experienced significant technical challenges within the past 2 years, and because DOD plans to invest an additional $176.9 million toward MALD-J production through fiscal year 2014, we believe an earlier review may be warranted. In its written comments, DOD also stated that the Deputy Assistant Secretary of Defense for Strategic and Tactical Systems will chair a meeting to review AARGM, IDECM, MALD, and MALD-J with the Navy and Air Force to verify progress, but it did not provide a timetable for this review. DOD also concurred with our second recommendation to determine the extent to which the most pressing airborne electronic attack capability gaps can best be met, using the assets that are likely to be available, and take steps to fill any potential gaps. Most notably, DOD cited plans for U.S.
Strategic Command to annually assess all DOD electronic warfare capabilities, including current requirements, current and planned future capabilities, and the supporting investment strategy, and present this assessment to the Joint Requirements Oversight Council. Further, DOD concurred with our third recommendation to align service investments in science and technology with the departmentwide electronic warfare priority, noting in its written comments that it expects implementation roadmaps for priority areas (including electronic warfare) to serve to coordinate component investments and accelerate the development and delivery of capabilities. DOD partially concurred with our two recommendations related to potentially unnecessary overlap among airborne electronic attack systems. In its written comments, DOD identified plans for the Deputy Assistant Secretary of Defense for Strategic and Tactical Systems to review the Intrepid Tiger and CEASAR systems with the Marine Corps and Army to investigate the efficacy of additional coordination as future acquisition plans are evaluated. Similarly, DOD noted that following the expected March 30, 2012, completion of a new Air Force plan related to developing and procuring an Increment II variant of MALD-J, the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics; the Office of the Director, Cost Assessment and Program Evaluation; and the Joint Staff would review Air Force and Navy plans and assess opportunities for coordination between the MALD-J and Airborne Electronic Attack Expendable initiatives, should funding be allocated for a future expendables program. However, the basis for DOD’s partial agreement on these two recommendations appears to stem from its desire to achieve efficiencies through increased coordination among programs, not through consolidation of systems possessing similar capabilities.
We emphasize that coordination is not a substitute for consolidation—particularly in the current constrained budget environment—and we encourage DOD to expand the scope of its planned reviews to include assessments of potential unnecessary redundancies within these two sets of systems. Additionally, DOD commented that our draft report overstated the acquisition duplication among airborne electronic attack systems. Most notably, DOD pointed to its cancellations of the MQ-9 Electronic Attack Pod and MALD-J Increment II programs, as outlined in its fiscal year 2013 budget submission, as evidence that duplication was being managed. These cancellations were announced after we had completed our work and drafted the report. During the period that our draft report was with the agency for comment, we revised our report and recommendations, in coordination with DOD, to account for these recent changes. Most notably, we revised our fourth and fifth recommendations to remove the newly canceled MQ-9 Electronic Attack Pod and MALD-J Increment II systems, respectively, as additional platforms where DOD may identify opportunities for consolidation. DOD’s written comments were subsequently crafted in response to our revised set of recommendations. As noted above, opportunities to reduce duplication further remain. We also briefly introduced the Marine Air-Ground Task Force Electronic Warfare concept, in response to DOD’s comments, while further clarifying that our report did not evaluate ground- or ship-based electronic warfare systems. DOD also commented that our characterization of the family of systems strategy for airborne electronic attack was misleading, stating that the system of systems synergies envisioned in 2002 continue to be pursued.
We acknowledge that DOD is considering options to field additional systems against high-end threats, but we believe that the current acquisition strategy, with its distributed approach, is very much in line with the definition of a family of systems, as outlined by DOD. When DOD embarked on the system of systems strategy in 2002, it envisioned fielding certain major systems, such as the B-52 Standoff Jammer and J-UCAS, which were later canceled. Without these planned elements, there is no evidence to suggest that the remaining systems together possess capability beyond the additive sum of their individual capabilities, a characteristic fundamental to a system of systems. We are sending copies of this report to interested congressional committees, the Secretary of Defense, the Secretary of the Army, the Secretary of the Navy, and the Secretary of the Air Force. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

This report evaluates the Department of Defense’s (DOD) airborne electronic attack capabilities and investment plans. Specifically, we assessed (1) the department’s strategy for acquiring airborne electronic attack capabilities, (2) progress made developing and fielding systems to meet airborne electronic attack mission requirements, and (3) additional compensating actions taken by the department to address capability gaps, including improvements to tactics, techniques, and procedures and investments in science and technology.
To assess the department’s strategy for acquiring airborne electronic attack capabilities, we analyzed DOD’s documents outlining mission requirements and acquisition needs, including the 2002 Airborne Electronic Attack Analysis of Alternatives, 2004 Initial Capabilities Document for Denying Enemy Awareness through Airborne Electronic Attack, 2008 Electronic Warfare Capabilities-Based Assessment, 2009 Electronic Warfare Initial Capabilities Document, and 2010 Electronic Warfare Strategy of the Department of Defense report to Congress. We also reviewed platform-specific capabilities documents, service roadmaps related to airborne electronic attack, and budget documents to understand how the family of systems construct evolved over time. To identify capability limitations and sustainment challenges facing current airborne electronic attack systems, we reviewed program briefings and acquisition documentation related to these systems. To further corroborate documentary evidence and obtain additional information in support of our review, we conducted interviews with relevant DOD officials responsible for managing airborne electronic attack requirements and overseeing the related family of systems, including officials in the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics; Office of the Director, Cost Assessment and Program Evaluation; Office of the Assistant Secretary of the Navy for Research, Development and Acquisition; Office of the Chief of Naval Operations—Information Dominance and Air Warfare directorates; Office of the Assistant Secretary of the Air Force for Acquisition; Air Force Office of the Deputy Chief of Staff for Operations, Plans, and Requirements—Electronic Warfare division; Air Force Air Combat Command; Army Office of the Deputy Chief of Staff for Operations, Plans, and Training—Electronic Warfare division; Marine Air-Ground Task Force Electronic Warfare; U.S. Strategic Command; and Joint Staff.
We also held discussions with DOD officials responsible for sustaining current airborne electronic attack systems, including officials in (1) Navy program offices for Airborne Electronic Attack, Advanced Tactical Aircraft Protection Systems, Direct and Time Sensitive Strike, and Aerial Target and Decoy Systems and (2) Air Force offices, including the F-22A Raptor and F-16CM program offices and Warner Robins Air Logistics Center.

To assess progress made developing and fielding systems to meet airborne electronic attack mission requirements, we analyzed documents outlining acquisition plans, costs, and performance outcomes, including capabilities documents, program schedules, test reports, budget submissions, system acquisition reports, and program briefings. These same materials afforded information on key attributes of individual airborne electronic attack systems, which we used to assess potential overlap among systems in development. Further, we identified persisting airborne electronic attack capability gaps by reviewing the 2009 Electronic Warfare Initial Capabilities Document, along with earlier analyses related to airborne electronic attack requirements, and compared the capability needs identified in those documents with current DOD investments in airborne electronic attack capabilities.
To supplement our analyses and gain additional visibility into these issues, we conducted interviews with relevant DOD officials responsible for managing airborne electronic attack requirements, including officials in the Office of the Chief of Naval Operations—Information Dominance and Air Warfare directorates; Office of the Assistant Secretary of the Air Force for Acquisition; Air Force Office of the Deputy Chief of Staff for Operations, Plans, and Requirements—Electronic Warfare division; Air Force Air Combat Command; Army Office of the Deputy Chief of Staff for Operations, Plans, and Training—Electronic Warfare division; Marine Air-Ground Task Force Electronic Warfare; U.S. Strategic Command; and Joint Staff. We also held numerous interviews with DOD officials primarily responsible for developing, acquiring, and testing airborne electronic attack systems, including officials in the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics; Office of the Director, Operational Test and Evaluation; Office of the Deputy Assistant Secretary of Defense for Developmental Test and Evaluation; Office of the Assistant Secretary of the Navy for Research, Development and Acquisition; Office of the Assistant Secretary of the Air Force for Acquisition; Navy program offices for Airborne Electronic Attack, F/A-18 and EA-18G, Direct and Time Sensitive Strike, and Advanced Tactical Aircraft Protection Systems; Army Rapid Equipping Force; and Air Force program offices for MALD/MALD-J and MQ-9 Reaper Electronic Attack Pod.

To assess additional compensating actions taken by the department to address airborne electronic attack capability gaps, we reviewed service documents outlining recent improvements and refinements to tactics, techniques, and procedures for EA-18G and EC-130H aircraft.
We corroborated this information through interviews with officials from the Naval Strike and Air Warfare Center and Air Force Office of the Deputy Chief of Staff for Operations, Plans, and Requirements—Electronic Warfare division charged with refining tactics, techniques, and procedures for EA-18G and EC-130H aircraft. We also reviewed broad agency announcements to understand ongoing science and technology activities related to airborne electronic attack. We supplemented this documentation review with discussions with officials engaged in science and technology work tied to airborne electronic attack, including officials in the Office of the Assistant Secretary of Defense for Research and Engineering, Office of Naval Research, Air Force Research Laboratory, and Defense Advanced Research Projects Agency.

We conducted this performance audit from February 2011 to March 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

This appendix provides analyses of 10 selected airborne electronic attack systems. Figures 4 through 13 show images of each system; tables 7 through 16 provide budget data on each system.

Estimated end of service life: 2020

Mission description: The primary mission of the Prowler is the suppression of enemy air defenses in support of strike aircraft and ground troops by interrupting enemy electronic activity and obtaining tactical electronic intelligence within the combat area. The EA-6B uses the AN/ALQ-99 radar jamming pod for non-lethal protection by jamming air defense systems and its AGM-88 High Speed Anti-Radiation Missile for lethal physical attack of air defense systems.
Status: In 2010, we reported that the Navy had started replacing its EA-6B aircraft with EA-18G Growlers and expected all Prowlers to be out of its inventory by 2012. However, the Navy now projects Prowlers to remain in service until 2016 to further meet the joint expeditionary need. According to the Navy, this is subject to additional change contingent on the fiscal year 2013 budget. The Marine Corps plans to retire its Prowlers by 2020. In addition, the most recent upgrade program for the EA-6B, the third Improved Capability electronic suite modification (ICAP III), is nearing completion. ICAP III provides the Prowler with greater jamming capability, including the ability to perform selective reactive jamming.

Budget: See the following table for budget information.

Estimated end of service life: Mid-band: 2024; Low-band: 2026; High-band: 2028

Mission description: The AN/ALQ-99 Tactical Jamming System is an airborne electronic warfare system carried on the EA-6B and EA-18G to support the suppression of enemy air defenses. The system is capable of intercepting, automatically processing, and jamming received radio frequency signals.

Status: Obsolescence issues and advances in adversary technology have reduced the AN/ALQ-99’s ability to counter emerging threats. The Navy is developing its Next Generation Jammer program to replace the AN/ALQ-99 and plans to begin fielding the system in 2020. In the interim, the Navy is replacing three aging legacy low-band transmitters to resolve obsolescence and reliability problems.

Budget: See the following table for budget information.

Estimated end of service life: 2053

Mission description: The EC-130H Compass Call is an airborne, wide area, persistent stand-off electronic attack weapon system able to disrupt and deny adversary use of the electronic battlespace using offensive radio frequency countermeasures.
Its primary mission is to deny or disrupt command and control of enemy integrated air defenses, air defense surface-to-air missile systems, and anti-aircraft artillery threats. Its secondary mission is to support ground and special operations forces by denying enemy communications and defeating improvised explosive devices.
Status: The Air Force has evolved the Compass Call since it was first fielded in 1982 to meet modern and emerging threats, including commercial communications, early warning radars, and improvised explosive devices. Upgrades and modernization efforts are completed during regularly scheduled depot maintenance. In 2003, in response to Operation Enduring Freedom, these upgrades transitioned from "Block" upgrades to "Baseline" upgrades to allow for smaller and more focused modernization efforts. Currently, the Air Force is completing Baseline 1 upgrades, beginning Baseline 2 efforts, and developing Baseline 3 requirements. In addition, the Air Force is replacing the center wing box on all 14 Compass Call aircraft, which will extend the service life of the fleet. Compass Call has been on continuous deployment in support of operations in Iraq and Afghanistan since 2003, which has accelerated the need to replace the center wing boxes. Finally, to further alleviate stress on the fleet, the Air Force plans to procure an additional aircraft, increasing the size of the fleet to 15 aircraft by fiscal year 2016.
Budget: See the following table for budget information.

Estimated end of service life: Not available
Mission description: The F-22A is the Air Force's fifth-generation air superiority fighter that incorporates a stealthy and highly maneuverable airframe, advanced integrated avionics, and a supercruise engine.
Although originally developed as an air-to-air fighter, the F-22A is receiving additional capabilities that will allow it to perform multiple missions, including destruction of enemy air defenses, air-to-ground attack, electronic attack, and intelligence, surveillance, and reconnaissance.
Status: The F-22A, along with the F-35, is expected to fulfill the Air Force's requirement for penetrating escort jamming capability. The Air Force initiated a formal F-22A modernization and reliability improvement program in 2003 to incrementally develop and deliver increasing capabilities over time. These increasing capabilities would allow the F-22A to provide penetrating escort jamming, as envisioned in the airborne electronic attack family of systems strategy. However, fielding of these capabilities has been delayed because of reductions in program funding. In addition, we have previously reported on schedule delays within the modernization and reliability improvement program and their effect on fielding additional capabilities within expected time frames. Further delays in fielding these planned capabilities may affect the Air Force's ability to provide sufficient penetrating escort jamming, increasing mission risk.
Budget: See the following table for budget information.

Mission description: The EA-18G Growler replaces the EA-6B Prowler as DOD's tactical electronic attack aircraft. Like the Prowler, the EA-18G will provide full-spectrum electronic attack to counter enemy air defenses and communication networks. The EA-18G incorporates jamming capabilities, such as the AN/ALQ-99 Tactical Jamming System, and the use of onboard weapons, such as the High Speed Anti-Radiation Missile, for the suppression of enemy air defenses. The Growler is the Navy's platform to fulfill modified escort jamming capability needs.
Status: The Growler program entered full rate production in 2009, with a planned acquisition of 88 aircraft.
However, in 2009, the Office of the Secretary of Defense directed the Navy to buy an additional 26 aircraft, bringing the total units to be acquired to 114. Through fiscal year 2011, the Navy placed 90 of 114 planned EA-18G aircraft under contract for production. Production is slightly ahead of schedule and has incorporated the increase in total units with limited per-unit cost growth. In 2010, the Director, Operational Test and Evaluation, declared the Growler operationally effective but also found that the aircraft was unsuitable for operations based on maintainability concerns. Since then, the Navy has taken steps to improve the EA-18G's suitability through software fixes, and the system recently completed follow-on operational test and evaluation. In addition, initial deployment of the aircraft in support of operations in Iraq, Libya, and Afghanistan recently concluded, and the Navy is assessing the aircraft's performance, including the remaining challenges in mitigating electromagnetic interference with the AN/ALQ-99. Additional software improvements are planned through fiscal year 2018.
Budget: See the following table for budget information.

Estimated fielding date: 2012
Mission description: AARGM is an air-to-ground missile for carrier-based aircraft designed to destroy enemy radio-frequency-enabled surface-to-air defenses. AARGM is an upgrade to the AGM-88 High Speed Anti-Radiation Missile (HARM) and will use existing HARM propulsion and warhead sections with new guidance and modified control sections.
Status: The Navy authorized AARGM production in September 2008, with deliveries scheduled to begin in January 2010. A total of 1,919 units were planned, with initial operational capability scheduled for March 2011. The program began operational testing in June 2010 after a 9-month delay owing, in part, to concerns about the production representativeness of test missiles.
The Navy halted operational testing in September 2010 after hardware and software deficiencies caused a series of missile failures. These testing challenges prompted the Navy to delay AARGM's planned initial operational capability date and undertake corrective actions to the system. These actions included an evaluation of the AARGM system through laboratory, ground, and flight tests from November 2010 through June 2011. Following this testing, Navy officials concluded that previous testing anomalies had been successfully corrected but that the system was at high risk of not meeting suitability requirements during operational testing. The Navy found that insufficient system reliability and manufacturing quality controls remain open deficiencies that will likely result in an excessive number of system failures experienced by operational units, which could prevent the Navy from effectively executing planned missions. To address reliability concerns, the Navy instituted a "fly before you buy" program to screen out poor-performing weapons prior to government acceptance. As of July 2011, one-third of the missiles delivered for testing had been returned to the factory for repair. Recently, the AARGM system resumed operational testing. The Navy now plans to field the system beginning in April 2012 and to make a full rate production decision and contract award in June and July 2012, respectively.
Budget: See the following table for budget information.

Estimated fielding date: 2014 (Block 4)
Mission description: IDECM is a suite of self-protection countermeasure systems designed for the F/A-18E/F, including onboard jamming and off-board decoy jamming capabilities. The Navy has fielded IDECM in blocks dating back to 2002 (Block 1), 2004 (Block 2), and 2011 (Block 3). Each block improved the system's jamming capabilities, decoy capabilities, or both. Block 4, the phase currently in development, extends IDECM onboard jamming capabilities to F/A-18C/D aircraft.
Status: IDECM Block 4 entered development in 2009 and includes a redesign of the ALQ-214 onboard jammer from the component design used for earlier blocks. This redesign is driven by the need to reduce weight in order to accommodate the IDECM onboard system on F/A-18C/D aircraft. Essentially, the new ALQ-214 will perform the same onboard jammer function as found in IDECM Blocks 2 and 3 but with a different form and fit. The Navy expects to transition current IDECM Block 3 full rate production to Block 4 units by April 2012. This production transition will occur concurrently with ground and flight testing of the Block 4 system, a strategy that could drive costly design changes, retrofits, or both to units in production if the ALQ-214 redesign effort does not proceed on schedule. To mitigate this risk, Navy officials stated that Block 4 full rate production will initially be limited to 19 systems, with production rates increasing to as many as 40 per year following completion of testing. Further, DOD officials report that Block 4 production will be executed under a firm fixed-price contract, a strategy that DOD officials state will place the financial burden of any retrofits on the vendor.
Budget: See the following table for budget information.

Estimated fielding date: 2020 (mid-band on EA-18G)
Mission description: The Next Generation Jammer will be an electronic warfare system to support the suppression of enemy air defenses, replacing and improving the capability currently provided by the AN/ALQ-99 Tactical Jamming System. The Navy's EA-18G will employ the Next Generation Jammer as its electronic attack payload. In a separate increment of capability, the Navy plans to integrate the Next Generation Jammer onto the F-35B, which will eventually replace Marine Corps EA-6B Prowlers. Each increment of capability will be divided into developmental blocks: Block 1 for mid-band, Block 2 for low-band, and Block 3 for high-band frequencies.
Status: The Next Generation Jammer is nearing completion of technology maturation activities performed by four different contractors before the program's entry into the technology development phase. The Navy plans to enter the technology development phase in the third quarter of fiscal year 2013, with an engineering and manufacturing development contract planned for 2015. The Navy has adopted an evolutionary block approach to fielding the Next Generation Jammer. Initial operational capability for Block 1, on the EA-18G aircraft, is scheduled for 2020. The Navy expects to field Blocks 2 and 3 on the EA-18G in 2022 and 2024, respectively. Fielding dates for the F-35 increment's blocks are currently undetermined.
Budget: See the following table for budget information.

Estimated fielding date: 2012 (MALD, actual); 2012 (MALD-J, estimated)
Mission description: MALD is an expendable decoy able to represent small, medium, or large aircraft in order to saturate or degrade enemy air defense systems. MALD-J is a variant of MALD that adds jamming capability to the decoy and forms the stand-in jamming component of the airborne electronic attack family of systems. The Air Force plans to acquire a total of 596 MALD and 2,404 MALD-J units.
Status: The Air Force approved MALD for low rate initial production in 2008. The Air Force expected to procure 300 MALD units in low rate production before transitioning to full rate production. However, following flight testing failures in summer 2010, attributable in part to design issues with the fuel filter, and a later test failure in February 2011 caused by foreign object debris in the fuel line, the MALD system was decertified, and remaining initial operational testing and evaluation activities were suspended. After additional corrective actions by the program office to the MALD design, the system reentered operational testing in July 2011, with test shots fired in late August 2011.
According to Air Force testing officials, during the last test shot in the August series (OT-8), the engine for one decoy never started after it detached from the host aircraft, causing that MALD unit to crash. This operational testing event was the final one scheduled for MALD, and DOD officials report that, in January 2012, the Air Force Operational Test and Evaluation Center delivered the MALD initial operational test and evaluation report assessing system performance. As a result of MALD's testing shortfalls, the Air Force authorized additional low rate initial production purchases of MALD, to the extent that the Air Force will now purchase the entire 596-unit inventory of MALD under low rate initial production, without ever authorizing or achieving full rate production. Technical deficiencies and design changes during low rate initial production prevented demonstration of an efficient manufacturing capability, which in turn prevented MALD from meeting the department's criteria to enter full rate production. DOD policy states that in order for a system to receive full rate production approval, the system must (1) demonstrate control of the manufacturing process and acceptable reliability, (2) collect statistical process control data, and (3) demonstrate control and capability of other critical processes. Because the MALD and MALD-J designs are identical except for the addition of a jammer module to MALD-J, the absence of a proven manufacturing process for MALD introduces cost and schedule risk to production of MALD-J. Deficiencies affecting the MALD vehicle have already contributed to MALD-J program delays. The MALD-J low rate initial production decision review, previously planned for September 2009, was delayed until September 2011. Operational testing has subsequently been delayed and is now expected to begin in May 2012.
To mitigate this schedule delay, the Air Force has moved to compress MALD-J operational testing from 15 months to 7 months, which program officials report reflects an increase in test range priority and a decrease in data turnaround time. According to DOD officials, however, test range execution issues such as aircraft and test equipment availability could potentially extend MALD-J operational testing beyond the currently projected completion date. In addition, the Air Force delayed, and later canceled, plans to develop a second increment of capability for MALD-J, one intended to provide more advanced jamming capabilities. Prior to these decisions, the Air Force's fiscal year 2012 budget submission outlined plans to budget $54.8 million in research, development, testing, and evaluation funding for MALD-J Increment II in fiscal year 2013. According to DOD, the Air Force is to provide a new plan for developing and procuring an Increment II variant of MALD-J and report to the Deputy Secretary of Defense by March 30, 2012.
Budget: See the following table for budget information.

Estimated fielding date: To be determined
Mission description: The F-35 Joint Strike Fighter is a family of fifth-generation strike aircraft to replace and complement existing Navy, Air Force, and Marine Corps aircraft, such as the F-16 and the F/A-18. The F-35, along with the F-22A, is expected to fulfill DOD's requirement for penetrating escort jamming capability.
Status: The F-35 program entered low rate initial production in 2007, with a planned baseline acquisition of 2,886 aircraft. The program experienced development challenges, including delays in testing, leading to a program-wide review. Based on this review, DOD restructured the program in 2010, increasing the time and funding for development. This restructuring triggered a breach of the critical Nunn-McCurdy cost growth threshold.
Presently, the program plans to procure 2,457 aircraft, and the services are still reviewing planned schedules for operational capability and fielding.
Budget: See the following table for budget information.

In addition to the contact named above, key contributors to this report were Bruce Fairbairn, Assistant Director; Christopher R. Durbin; Laura Greifner; James Kim; Scott Purdy; Sylvia Schatz; Brian Smith; and Roxanna Sun.
Airborne electronic attack involves the use of aircraft to neutralize, destroy, or suppress enemy air defense and communications systems. Proliferation of sophisticated air defenses and advanced commercial electronic devices has contributed to the accelerated appearance of new weapons designed to counter U.S. airborne electronic attack capabilities. GAO was asked to assess (1) the Department of Defense's (DOD) strategy for acquiring airborne electronic attack capabilities, (2) progress made in developing and fielding systems to meet airborne electronic attack mission requirements, and (3) additional actions taken to address capability gaps. To do this, GAO analyzed documents related to mission requirements, acquisition and budget needs, development plans, and performance, and interviewed DOD officials.

DOD's evolving strategy for meeting airborne electronic attack requirements centers on acquiring a family of systems, including traditional fixed wing aircraft, low observable aircraft, unmanned aerial systems, and related mission systems and weapons. DOD analyses dating back a decade have identified capability gaps and provided a basis for service investments, but budget realities and lessons learned from operations in Iraq and Afghanistan have driven changes in strategic direction and program content. Most notably, DOD canceled some acquisitions, after which the services revised their operating concepts for airborne electronic attack. These decisions saved money, allowing DOD to fund other priorities, but reduced the planned level of synergy among systems during operations. As acquisition plans have evolved, capability limitations and sustainment challenges facing existing systems have grown, prompting the department to invest in system improvements to mitigate shortfalls.

DOD is investing in new airborne electronic attack systems to address its growing mission demands and to counter anticipated future threats.
However, progress acquiring these new capabilities has been impeded by developmental and production challenges that have slowed fielding of planned systems. Some programs, such as the Navy’s EA-18G Growler and the Air Force’s modernized EC-130H Compass Call, are in stable production and have completed significant amounts of testing. Other key programs, like the Navy’s Advanced Anti-Radiation Guided Missile, have required additional time and funding to address technical challenges, yet continue to face execution risks. In addition, certain systems in development may offer capabilities that overlap with one another—a situation brought on in part by DOD’s fragmented urgent operational needs processes. Although services have shared technical data among these programs, they continue to pursue unique systems intended to counter similar threats. As military operations in Iraq and Afghanistan decrease, opportunities exist to consolidate current acquisition programs across services. However, this consolidation may be hampered by DOD’s acknowledged leadership deficiencies within its electronic warfare enterprise, including the lack of a designated, joint entity to coordinate activities. Furthermore, current and planned acquisitions will not fully address materiel-related capability gaps identified by DOD—including some that date back 10 years. Acquisition program shortfalls will exacerbate these gaps. To supplement its acquisition of new systems, DOD is undertaking other efforts to bridge existing airborne electronic attack capability gaps. In the near term, services are evolving tactics, techniques, and procedures for existing systems to enable them to take on additional mission tasks. These activities maximize the utility of existing systems and better position operators to complete missions with equipment currently available. Longer-term solutions, however, depend on DOD successfully capitalizing on its investments in science and technology. 
DOD has recently taken actions that begin to address long-standing coordination shortfalls in this area, including designating electronic warfare as a priority investment area and creating a steering council to link capability gaps to research initiatives. These steps do not, however, preclude the services from funding their own research priorities ahead of departmentwide priorities. DOD's planned implementation roadmap for electronic warfare offers an opportunity to assess how closely component research investments are aligned with the departmentwide priority.

GAO recommends that DOD conduct program reviews for certain new, key systems to assess cost, schedule, and performance; determine the extent to which the most pressing capability gaps can be met and take steps to fill them; align service investments in science and technology with the departmentwide electronic warfare priority; and review capabilities provided by certain planned and existing systems to ensure investments do not overlap. DOD agreed with three recommendations and partially agreed with the two aimed at reducing potential overlap among systems. DOD plans to assess coordination among systems, whereas GAO sees opportunities for consolidation, as discussed in the report.
The Aviation and Transportation Security Act, enacted in November 2001, assigned TSA responsibility for security in all modes of transportation, which include aviation, maritime, mass transit, highway and motor carrier, freight rail, and pipeline. The act included requirements for deploying a federal screening workforce at airports and screening all passengers and property transported from or within the United States on commercial aircraft. While TSA has a more direct role in ensuring the security of the aviation mode through its management of a passenger and baggage screener workforce that inspects individuals and their property to deter and prevent an act of violence or air piracy, TSA has a less direct role in securing other modes—such as freight rail and highway and motor carrier—in that it generally establishes voluntary standards, conducts inspections, and provides recommendations and advice to owners and operators within those modes. Responsibility for securing these modes is shared with other federal agencies, state and local governments, and the private sector. However, TSA has responsibility for receiving, assessing, and distributing intelligence information related to transportation security in all modes and assessing threats to the transportation system. Within TSA, the Office of TSNM is responsible for setting policy for all modes of transportation. For example, the Mass Transit TSNM office develops strategies, policies, and programs to improve transportation security including operational security activities, training exercises, public awareness, and technology. TSA-OI receives intelligence information regarding threats to transportation and aims to disseminate it, as appropriate, to officials in TSA, the federal government, state and local officials, and to industry officials with transportation responsibilities. 
Although it is not an intelligence generator, the office receives and assesses intelligence from within and outside of the intelligence community to determine its relevance to transportation security. Sources of information outside the intelligence community include other DHS components, law enforcement agencies, and owners and operators of transportation systems. TSA-OI also reviews suspicious activity reporting by Transportation Security Officers, Behavior Detection Officers, and Federal Air Marshals. TSA-OI has deployed Field Intelligence Officers (FIO) throughout the United States to provide additional intelligence support to Federal Security Directors (FSD), who are responsible for providing day-to-day operational direction for federal security at airports, and to their staffs. In addition, the FIOs serve as liaisons with state, local, and tribal law enforcement officials and intelligence fusion centers. TSA-OI disseminates security information through security-related information products, including reports, assessments, and briefings. These products are also shared with intelligence community members and other DHS organizations. Table 1 describes TSA's primary security-related information-sharing products. TSA is one of several sources of security-related information for transportation stakeholders. These stakeholders may also receive information from other federal agencies such as the Federal Bureau of Investigation (FBI), Department of Defense, and Department of Transportation, as well as, among others, state and local fusion centers and industry associations. TSA uses multiple mechanisms to distribute these products. Table 2 describes some of the mechanisms that TSA uses.
Other mechanisms that transportation stakeholders may use to obtain security-related information include those operated by regional, state, and local entities such as law enforcement agencies and emergency operations centers, as well as industry-sponsored mechanisms such as the Association of American Railroads’ Railway Alert Network, among others. Because the private sector owns and operates the majority of infrastructure and resources that are critical to our nation’s physical and economic security, it is important to ensure that effective and efficient information-sharing partnerships are developed with these private sector entities. Both the TSISP and DHS’s Information Sharing Environment Implementation Plan emphasize the importance of two-way information sharing between government and industry through a framework that communicates actionable information on threats and incidents. In support of this endeavor, TSA is responsible for receiving, assessing, and distributing intelligence information related to transportation security and acting as the primary liaison for transportation security to the intelligence and law enforcement communities. TSA has developed security-related information products as part of its efforts to share security-related information with transportation stakeholders. Our 2011 survey results indicate general satisfaction among transportation stakeholders who received these products across each mode of transportation, but satisfaction varied by transportation sector. As highlighted in figure 1, 57 percent (155 of 275) of all stakeholders who responded to our survey question concerning overall satisfaction were satisfied with the security-related information they received, while approximately 10 percent (27 of 275) were dissatisfied. Survey results regarding satisfaction with security-related information products and briefings across transportation sectors indicate that respondents from five of the seven sectors we surveyed were satisfied. 
However, fewer than half of all respondents from the air cargo (20 of 53) and Class I rail (2 of 7) sectors, respectively, were satisfied with TSA's products, as shown in figure 2. We also asked survey respondents about their satisfaction with the transportation security-related information they received or obtained from a variety of other sources, including industry associations, the FBI, and security consultants, among others. As discussed earlier, other organizations also provide transportation security information to state and local transportation agencies. Stakeholders were generally satisfied with the information from these other sources. For example, stakeholder satisfaction among respondents who received information from industry associations, the FBI, and security consultants was 81 percent (165 of 203), 69 percent (96 of 139), and 51 percent (52 of 102), respectively. Stakeholder satisfaction with TSA products was measured both in terms of overall satisfaction with all products combined and across five separate dimensions of quality (accuracy, actionability, completeness, relevance, and timeliness) for each product type. Regarding these specific dimensions, more stakeholders were satisfied with the relevance and completeness of these products, whereas fewer stakeholders were satisfied with the actionability of TSA's products, as shown in figure 3. As shown in figure 3, an average of 72 and 69 percent of stakeholders we surveyed reported being satisfied with the relevance and completeness, respectively, of these products, compared to an average of 59 percent satisfaction with the actionability of this information. For the purposes of the survey, actionability was defined as the degree to which TSA's security-related information products enabled stakeholders to make adjustments to their security measures, if such a change was warranted.
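The satisfaction rates cited above are simple proportions of respondents, rounded to whole percentages. As a minimal illustration (not part of the GAO methodology), the following Python sketch reproduces the rounded figures for the non-TSA sources from the respondent counts reported in the text:

```python
# Satisfaction counts cited in the survey results above:
# (satisfied respondents, respondents receiving information from that source)
other_sources = {
    "industry associations": (165, 203),
    "FBI": (96, 139),
    "security consultants": (52, 102),
}

# Rounded satisfaction rate, in percent, for each source
rates = {
    source: round(100 * satisfied / total)
    for source, (satisfied, total) in other_sources.items()
}

for source, pct in rates.items():
    print(f"{source}: {pct}% satisfied")
# industry associations: 81% satisfied
# FBI: 69% satisfied
# security consultants: 51% satisfied
```

These match the 81, 69, and 51 percent figures reported in the survey discussion.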
In open-ended comments included in our survey, stakeholders from each of the sectors stated that actionable information also includes analysis of trends, practices, and probability that would allow them to adjust their security measures as appropriate. For example, of the 53 air cargo stakeholders who completed our survey, 6 commented that TSA provides very little security-related information to their industry concerning unscheduled air carriers such as on-demand cargo operations. These stakeholders stated that the information they receive is usually related to either large cargo companies like FedEx and UPS or passenger air carriers. While only one Class I rail survey respondent reported being dissatisfied with the security-related information their organization receives, five of the seven Class I respondents cited concerns with the lack of analysis associated with the information they receive from TSA. For example, one Class I respondent suggested TSA increase incident analysis and provide more detail on various terrorist approaches and how these methodologies may impact freight rail. According to this respondent, more rail-specific analysis would assist their industry with developing countermeasures that are as effective as possible in mitigating potential threats. TSA officials stated that the GRID will provide a better opportunity for TSA to provide an analytical summary of law enforcement and open source reporting emerging in the last 30 days, including information on threats, significant airport and aircraft incidents, terrorist groups, security trends and new technologies, and intelligence and law enforcement advisories. Freight rail stakeholders we interviewed stated that TSA's security-related information products lacked actionable analysis and did not contain information that would allow them to take any specific actions.
Also, 7 of the 18 stakeholders we interviewed across each of the three modes commented that opportunities exist for TSA to increase incident analysis and provide more detail on pre-attack planning as well as the trends identified in various terrorist attempts and how these may impact their industry. Our previous work on information sharing highlights continuing challenges that DHS faces in providing actionable information to its stakeholders. For example, we previously reported that most information-sharing and analysis centers established to share information with stakeholders from critical sectors have expressed concerns with the limited quantity of information and the need for more specific, timely, and actionable information from DHS and/or their sector-specific agencies. According to DHS, the federal government is uniquely positioned to help inform critical security investment decisions and operational planning as private sector operators generally look to the government as a source of security-related threat information. However, we found that a lack of actionable TSA data has led stakeholders to rely on other sources for relevant security-related information. Of the 275 stakeholders who completed our survey, 203 reported receiving security-related information from other sources. Additionally, Amtrak officials told us that they have contracted with intelligence analysts at Spectel to monitor open and sensitive data sources for rail-related security material. The analysts produce a weekly report called Railwatch that, according to these officials, helps them develop tactics to defend against terrorist activity. Amtrak officials told us that these analysts also work closely with government agencies, including fusion centers, to develop and share information that they described as much more rail-centric than the daily security information that DHS makes available to them.
TSA officials noted that aviation stakeholders may receive security directives that outline required steps for enhancing security. They stated that providing prescriptive actionable intelligence is challenging, as there is not always information available. However, they recognized the need to provide this information to stakeholders when available and to improve the analysis provided in their products. According to the TSISP, TSA's information-sharing products represent an important part of its efforts to establish a foundation for sharing security-related information with all appropriate public and private transportation stakeholders. We have previously reported that information is a crucial tool in fighting terrorism and that its timely dissemination is critical to maintaining the security of our nation. When stakeholders are provided with a comprehensive picture of threats or hazards and participate in ongoing multidirectional information flow, their ability to make prudent security investments and develop appropriate resiliency strategies is substantially enhanced. According to the TSISP, two-way information sharing between government and industry is one of the goals of maintaining the security of our nation's transportation system. However, some of TSA's stakeholders are not receiving these products. We surveyed stakeholders whom TSA had identified as points of contact who should receive TSA security-related information products. As shown in figure 4, approximately 18 percent (48 of 266 stakeholders who provided responses to this question) of the transportation stakeholders we surveyed reported that they did not receive TSA's transportation security-related information reports, 34 percent (91 of 271) reported that they did not receive a TSA briefing, and approximately 48 percent (128 of 264) reported that they did not receive TSA's assessments in 2010. 
Among the rail stakeholders we surveyed, approximately 11 percent (6 of 57) reported not receiving any security-related information reports, while 32 percent (18 of 56) reported they did not receive an assessment from TSA. Approximately 78 percent (207 of 266) of the survey respondents across all modes reported receiving TSA reports. However, the number of transportation security stakeholders who received TSA's assessments and briefings varied by mode. Survey responses also indicated that TSA is the primary, but not only, source for these products. For example, 36 percent (49 of 207) of survey respondents answered that they received TSA's reports from other sources, and 27 percent (18 of 97) of respondents answered that they received TSA's assessments from other sources. TSA uses different approaches to disseminate its security-related information products among the aviation, rail, and highway modes, which may help explain some of the variation in products received across modes. For example, TSA officials responsible for overseeing the freight rail sector said that they maintain contact information for each of their approximately 565 industry stakeholders and aim to provide TSA-OI products directly to the rail security coordinators designated by each railroad. In contrast, TSA officials responsible for overseeing the highway and motor carrier sector said that they share security-related information on a more selective basis because of the large number and broad nature of highway stakeholders. With tens of thousands of stakeholders—including bus, truck, and motor coach operators—across the country, it is not practical for TSA to reach every stakeholder. Therefore, TSA relies on communications with representatives from these industries rather than individual stakeholders. 
According to TSA officials, TSA works with industry associations to distribute security-related information because leveraging these partnerships allows TSA to broaden its ability to reach stakeholders. However, stakeholders who are not affiliated with industry associations may not receive these communications. For example, according to the United Motorcoach Association, as many as two-thirds of companies in their sector were not represented by an industry association. While we recognize that not all stakeholders can receive every product, stakeholders included in our survey were identified by TSA as those who should be receiving this information. Receiving a full range of TSA security-related information products could help stakeholders improve their situational awareness or change their operations to better protect their facilities and assets. For example, an official from a domestic passenger air carrier told us that improved information sharing could have prevented their airline from diverting a plane with a disruptive passenger on board to Detroit, Michigan, on the same day that a passenger attempted to detonate explosives aboard another Detroit-bound airplane on Christmas Day 2009. This official told us that they had not been informed of this attempted bombing and stated that they would have diverted their company's plane elsewhere to prevent panic. The mechanisms used by TSA to share information with transportation stakeholders include the Aviation Web Boards, the Homeland Security Information Network (HSIN), and e-mail alerts. TSA's Aviation Web Boards serve as the principal information-sharing mechanism used to share information with the aviation mode, according to TSA officials. Almost all (174 of 176) of the aviation stakeholders who responded to our survey had heard of one of the Web Boards. 
Our survey results indicate that aviation stakeholders were generally satisfied with the Web Boards, with more than 70 percent of aviation respondents satisfied with the ability to locate information, and the relevance, completeness, actionability, and accuracy of the information on the Web Boards. Compared to airports and passenger air carriers, air cargo stakeholders expressed lower levels of satisfaction with the Web Boards, as shown in figure 5. Specifically, less than 60 percent of air cargo stakeholders responding to the survey were satisfied with the accuracy, actionability, and completeness of information on the Web Boards: 54 percent (27 of 50) were very or somewhat satisfied with accuracy; 54 percent (27 of 50) were very or somewhat satisfied with actionability; and 57 percent (29 of 51) were very or somewhat satisfied with completeness. Comments provided by air cargo stakeholders did not explain why they reported less satisfaction than other aviation sectors that have the same access to the Web Boards. Additionally, air cargo stakeholders provided open-ended comments that were similar to those of passenger air carriers and airport stakeholders. However, we observed that TSA has established individual Web Boards for each of the sectors, and not all aviation stakeholders have access to the same Web Boards. TSA aims to provide the right information to the right people at the right time through collaboration within and across the transportation sector network, according to TSA's TSISP. 
In addition, GAO's Standards for Internal Control in the Federal Government states that agencies should ensure adequate means of communicating with external stakeholders who may have a significant impact on agency goals and that effective information technology management is critical to achieving useful, reliable, and continuous communication of information. HSIN is a national secure web-based portal—owned and maintained by DHS and other domestic and international users in a mission partnership with DHS—that was established for information sharing and collaboration between the federal, state, local, and private sectors engaged in the homeland security mission. DHS has stated that HSIN-CS is to be the primary information-sharing mechanism for critical infrastructure sectors, including the transportation sector. However, as shown in figure 6, almost 60 percent (158 of 266) of transportation stakeholders we surveyed had never heard of HSIN-CS. Awareness and usage of HSIN-CS varied by transportation mode. As figure 7 shows, 72 percent of aviation stakeholders (124 of 173) responding to the survey had not heard of HSIN-CS and 9 percent (15 of 173) were unsure, and several commented that they would be interested in accessing the system. Among aviation stakeholders, the Web Boards were the more commonly utilized information-sharing mechanism. Among the highway respondents, 28 percent (11 of 39) had not heard of HSIN-CS and 8 percent (3 of 39) were unsure. Of the highway stakeholders who had heard of HSIN-CS, 60 percent (15 of 25) had a user account for the system and had accessed it. Less than half (25 of 54) of the rail respondents had heard of HSIN-CS and 11 percent (6 of 54) were unsure. Of the rail stakeholders who had heard of HSIN-CS, 64 percent (16 of 25) had a user account for the system and had accessed it. Similarly, in September 2010 we reported on a lack of awareness of the public transit subportal on HSIN (HSIN-PT) among public transit agencies we surveyed. 
We recommended that TSA establish time frames for a working group of federal and industry officials to consider targeted outreach efforts to increase awareness of HSIN-PT among transit agencies that are not currently using or aware of this system. DHS officials concurred with this recommendation and in January 2011 provided an implementation plan with target dates for addressing it. However, the plan did not fully address the recommendation. For example, the plan stated that TSA officials created a consolidated "superlist" of current members of another information-sharing mechanism and invited them to join HSIN-PT. However, the plan did not indicate how TSA would target its outreach efforts to those entities not already on TSA's lists. In a September 2011 update, TSA indicated that its working group would conduct outreach to smaller transit agencies but did not provide an estimated date for completing these actions. The NIPP defines the organizational structures that provide the framework for coordination of critical infrastructure protection efforts at all levels of government, as well as within and across sectors. Sector-specific planning and coordination are addressed through coordinating councils that are established for each sector. These councils include the Sector Coordinating Councils (SCCs) and the Government Coordinating Councils (GCCs). SCCs comprise the representatives of owners and operators, generally from the private sector. GCCs comprise the representatives of the federal sector-specific agencies; other federal departments and agencies; and state, local, tribal, and territorial governments. These councils create a structure through which representative groups from all levels of government and the private sector can collaborate or share existing approaches to critical infrastructure protection and work together to advance capabilities. The Freight Rail TSNM office maintains direct contact with its more than 500 stakeholders in addition to reaching out to industry associations. 
The Highway and Motor Carrier TSNM office also uses industry associations to help communicate with various industries about HSIN-CS because its stakeholder group includes millions of people. However, these outreach efforts do not reach stakeholders who fall outside of certain regions and are not members of an association. Our prior work has shown that outcome-oriented performance measures can help an agency assess the effectiveness of its information-sharing efforts. (For example, see GAO, Managing for Results: Enhancing Agency Use of Performance Information for Management Decision Making, GAO-05-927 (Washington, D.C.: Sept. 9, 2005); Program Evaluation: Studies Helped Agencies Measure or Explain Program Performance, GAO/GGD-00-204 (Washington, D.C.: Sept. 29, 2000); Managing for Results: Strengthening Regulatory Agencies' Performance Management Practices, GAO/GGD-00-10 (Washington, D.C.: Oct. 28, 1999); and Agency Performance Plans: Examples of Practices That Can Improve Usefulness to Decisionmakers, GAO/GGD/AIMD-99-69 (Washington, D.C.: Feb. 26, 1999).) However, as of October 2011, TSA had not developed specific goals or outcome-oriented performance measures for TSA Intel on HSIN. TSA-OI officials stated that the only measure currently available to track dissemination is by counting "hits" on its intranet and internet portals, and told us that this method could be improved. The absence of measurable outcomes for targeted outreach to different transportation sectors hinders DHS efforts to ensure dissemination of security-related information to all appropriate stakeholders. DHS's outreach efforts have not resulted in widespread HSIN-CS awareness and use among the transportation stakeholders we surveyed; conducting targeted outreach to stakeholders, and measuring the effectiveness of this outreach, could therefore help to increase awareness and use of this mechanism. 
With respect to stakeholder satisfaction with HSIN-CS, 21 percent of respondents (55 of 266) had logged on to HSIN-CS and could report whether they were satisfied with the mechanism, as shown in figure 8. Survey results indicate that stakeholders who had logged on to HSIN-CS experienced difficulties in locating information on the system. Of those who logged on to HSIN-CS, 40 percent (6 of 15) of highway stakeholders and 53 percent (9 of 17) of rail stakeholders were satisfied with their ability to locate information on HSIN-CS, as shown in figure 9. A rail stakeholder who was less than satisfied noted in open-ended comments on the survey and in an interview that HSIN-CS was difficult to navigate with its many layers and that he could not find information for which he was searching. When we attempted in August 2011 to search for TSA security-related information products using the HSIN-CS search tool, we encountered similar difficulties. For example, knowing that a Freight Rail Modal Threat Assessment released in March 2011 mentioned Toxic Inhalation Hazards, we searched HSIN-CS for this information using the search tool, sorting results by date, but could find only the Freight Rail Modal Threat Assessment from September 2009. Furthermore, when we restricted the search to the "rail/pipeline" sector, no information products appeared. Such difficulties may hinder HSIN-CS from meeting the security information needs of transportation stakeholders, and therefore limit TSA in its goal of achieving useful, reliable, and continuous communication of information. A TSA official agreed that the search function on HSIN-CS has technical limitations that can affect the user's ability to locate information. Stakeholder satisfaction with the quality of the information on HSIN-CS varied by mode, as shown in figure 10. 
For most aspects of HSIN-CS on which we surveyed stakeholder satisfaction (five of six), aviation stakeholders responding to the survey were the most satisfied, and rail stakeholders were the least satisfied. In September 2010, we reported that certain aspects of HSIN-PT were not user-friendly. For example, 5 of 11 agencies that had access to HSIN-PT and used it to receive security-related information reported problems with using the system once they logged in. We recommended that DHS take steps to ensure that public transit agencies can access and readily utilize HSIN-PT and that HSIN-PT contain security-related information that is of value to public transit agencies. DHS concurred and in January 2011 provided an implementation plan with target dates for addressing it. However, a September 2011 update to the plan did not include estimated dates for completing the actions. Further, the plan did not provide enough details about the actions to determine whether the agency is taking the necessary steps to address the recommendation. Taking steps to ensure transportation stakeholders can access and readily use HSIN-CS—including improving the search function—could help DHS improve the capacity of HSIN-CS to meet those stakeholders' security-related information needs. Because many transportation stakeholders have not heard of HSIN-CS, do not access the system, or encounter difficulties once they log in, they may not be receiving timely information via the information-sharing mechanism that DHS has established. DHS officials stated that our previous work has prompted ongoing efforts to address these concerns. However, these efforts are primarily focused on working with public transit stakeholders to improve HSIN-CS for that mode. DHS officials stated that improvements to HSIN-CS and its portals for other modes are dependent on input and involvement from industry stakeholders. TSA also described its e-mail alerts as a key information-sharing mechanism. 
Fifty-seven percent of survey respondents (149 of 263 who answered the question) reported receiving a TSA e-mail alert. Sixty-nine percent (37 of 54) of rail stakeholders received e-mail alerts, compared with 58 percent (100 of 173) of aviation stakeholders and 33 percent (12 of 36) of highway stakeholders. Overall, more than half of stakeholders were satisfied with the five dimensions of quality, ranging from 74 percent (115 of 154) of respondents satisfied with relevance to 64 percent (96 of 151) of respondents satisfied with the accuracy of the e-mail alerts. In general, of those who received an e-mail alert, highway stakeholders were the most satisfied and rail stakeholders were the least satisfied. It is not clear why stakeholders from different modes reported different levels of satisfaction, and stakeholders did not offer open-ended comments explaining their satisfaction levels. The approach that TSA uses to communicate security-related information to stakeholders relies on partnerships established among offices within the agency. A good internal control environment requires that the agency's organizational structure clearly define key areas of authority and responsibility and establish appropriate lines of reporting. We have previously reported that collaborating agencies should work together to define and agree on their respective roles and responsibilities. In doing so, agencies can clarify who will do what, organize their joint and individual efforts, and facilitate decision making. TSA-OI officials told us that the TSNM offices for each transportation mode serve as the primary contact to stakeholders. However, the specific roles and responsibilities of each office in sharing security-related information with stakeholders are not clearly defined. 
While TSA-OI depends on the TSNM offices to provide security-related information directly to stakeholders in individual transportation modes, officials from TSA-OI also stated that the responsibility for disseminating transportation security information to intended targets is shared with TSA-OI. However, because of the different dynamics of each transportation mode, TSA-OI defers to the individual modal TSNM offices in deciding how to help industry stakeholders obtain TSA-OI information. TSA officials from five TSNM offices provided different interpretations of the Office of TSNM’s roles and responsibilities in disseminating TSA-OI products and other security-related information. Officials from three of these offices stated that the TSNM offices are the primary means for disseminating security-related information products, with two of the three stating that part of this responsibility is informing stakeholders of TSA’s Intel page on HSIN-CS. However, officials from two other TSNM offices stated that the role of the TSNM offices is limited to communicating policy and regulatory information rather than threat-related information. Additionally, stakeholders differed among and within modes in the extent to which they would contact the TSNM office to obtain security-related information. For example, one aviation stakeholder stated that it would call the TSNM office directly if it needed a product or information while another stated that they would contact their Federal Security Director at the local airport for the same information. Our survey results indicate that some stakeholders are not receiving TSA’s security-related information products and others are not aware of the mechanisms available to them. While officials from both TSA-OI and the Office of TSNM told us that the responsibility for ensuring that stakeholders are receiving security-related products lies within their offices, the roles and responsibilities are not documented and are open to interpretation. 
TSA officials told us that they do not currently have an information flow diagram or document describing or mandating information sharing between TSA-OI and the Office of TSNM because the two offices share information on a daily basis and discuss routing to internal and external stakeholders. Further, TSA officials stated that information flow regarding transportation security is dynamic and complex, with varying levels of classification, audiences, and topics. While it is recognized that information products and mechanisms are selected and utilized as appropriate to the circumstances, clearly documenting the basic roles and responsibilities of its partners—especially TSNM offices—in sharing security-related information with transportation stakeholders and increasing awareness of information-sharing mechanisms could improve the effectiveness of TSA-OI's information-sharing efforts and help ensure accountability. Additionally, key elements of TSA's information-sharing approach are not described in its December 2010 information-sharing plan. The 9/11 Commission Act requires DHS to annually submit an information-sharing plan to Congress that describes how intelligence analysts within the department will coordinate their activities within the department and with other federal, state, and local agencies, and tribal governments, among other things. TSA is the lead agency in developing the TSISP and describes the plan as an annual report that establishes a foundation for sharing transportation security information between all entities that have a stake in protecting the nation's transportation system. TSA is not required to share the plan with stakeholders but coordinates its updates with input from the mode-specific SCCs. TSA officials described the plan as overarching guidance for information-sharing activities within TSA. 
Additionally, the Transportation Systems Sector Specific Plan describes the TSISP as including the process for sharing critical intelligence and information throughout the sector. It states that the TSISP reflects a vertical and horizontal network of communications for timely distribution of accurate and pertinent information. The last update to the plan was December 2010. However, this plan does not describe key information-sharing functions and programs, as follows: The TSISP does not acknowledge that the Aviation Web Boards are the primary mechanism used for sharing security-related information with the aviation community. TSA officials stated that this is the primary tool used to share information with commercial aviation airports and passenger air carriers as well as air cargo carriers. Aviation stakeholders we interviewed confirmed that the Web Boards are their primary means of receiving information from TSA. TSA officials stated that a description of the Aviation Web Boards was intentionally removed from a draft of the plan at the request of the Commercial Aviation TSNM office. They did not offer an explanation for why the description was removed. The Field Intelligence Officer (FIO) program is expanding and is an integral part of TSA's information-sharing environment. However, roles and responsibilities of FIOs are not described in detail in the 2010 TSISP. According to TSA, the FIOs serve as the principal advisor to Federal Security Directors and their staffs on all intelligence matters. Other responsibilities include developing and maintaining a working relationship with local, federal, state, and private entities responsible for transportation security, regardless of mode. While officers are based at the airports, they interact with the security officials from local rail, mass transit, highway, and port and pipeline (where applicable) modes to facilitate the sharing and exchange of relevant threat information. 
As of August 2011, approximately 40 FIOs were deployed, with a goal of 66 FIOs by the end of 2012. TSA-OI stated that it has several planned changes to its information-sharing strategy but has not yet issued them in a documented plan that identifies the specific roles and responsibilities of its internal partners, specific goals for information sharing, and how progress in meeting those goals is measured. Securing the nation's vast and diverse transportation system is a challenging task that is complicated by the ever-changing and dynamic threat environment. As new threats emerge and vulnerabilities are identified, dissemination of timely and actionable information is critical to maintaining the security of our nation. While providing federal, state, local, tribal, and private sector partners with the information they need can be complicated, providing them with the right information at the right time can prevent catastrophic losses from terrorist activities targeted at the transportation modes. However, stakeholders cannot act on information that they do not receive or cannot access. At the same time, if the information stakeholders receive is not actionable, it is less valuable in helping them prioritize, manage, or adjust security operations. While specific actionable intelligence is not always available, providing these stakeholders with more actionable analysis would allow them to adjust security measures or take other necessary actions to improve their security postures and counter past and present threats. While TSA has taken steps to ensure that security-related information is available to stakeholders when they need it through various mechanisms, additional actions could help to ensure that stakeholders are aware of these resources and can access them when needed. 
Given that DHS's current outreach efforts have not resulted in widespread HSIN-CS awareness and use among transportation stakeholders, additional actions to improve system awareness and accessibility will help ensure that transportation security information users receive timely and useful security information. Additionally, developing outcome-oriented performance measures could help assess progress in improving the dissemination of key transportation security information to all appropriate stakeholders. Because TSA has not clearly defined and documented roles and responsibilities for disseminating security-related information and the full range of its information-sharing efforts, TSA may not be consistently providing security-related information products to external stakeholders and divisions within TSA may not be held fully accountable for performing their information-sharing activities. Clarifying the roles and responsibilities of TSA's various offices in sharing security-related information with transportation stakeholders could improve the effectiveness of TSA's information-sharing efforts and help ensure greater accountability. To help strengthen information sharing with transportation stakeholders and ensure that stakeholders receive security-related information in a timely manner, we recommend that the Secretary of Homeland Security direct the Assistant Secretary for the Transportation Security Administration to take the following five actions: To the extent possible, address the need expressed by stakeholders by providing more actionable analysis in TSA's transportation security-related information products. In coordination with other DHS components, conduct targeted outreach efforts to aviation, rail, and highway stakeholders to increase the number of transportation stakeholders who are receiving security-related information products and are made aware of security information available through the HSIN-CS portal. 
Coordinate with other DHS components to improve the ability to readily locate information in TSA security-related information products on HSIN-CS. Establish outcome-oriented performance measures to help assess the results of efforts to provide useful and timely transportation security information through the HSIN-CS portal. Clearly define and document the specific information-sharing programs, activities, roles, and responsibilities for each TSA division and provide this information to the appropriate stakeholder groups. We provided a draft of this report and a draft copy of the accompanying e-supplement (GAO-12-67SP) to Amtrak and the Departments of Homeland Security and Transportation for comment. Amtrak did not provide written comments to include in our report. However, in an e-mail received October 28, 2011, the Amtrak audit liaison stated that Amtrak concurred with our recommendation concerning the need for TSA to provide more actionable analysis in its transportation security-related information products. DHS provided written comments on the draft report, which are reproduced in full in appendix II. DHS concurred with the findings and recommendations in the report and described the efforts the department has underway or planned to address our recommendations, as summarized below. The Department of Transportation's Deputy Director of Audit Relations replied in an e-mail received on October 27, 2011, that the department had no comments on the report. Amtrak and the Departments of Homeland Security and Transportation did not provide comments on the e-supplement. In his e-mail, the Amtrak audit liaison noted that Amtrak recognizes the pressure that TSA is under to produce meaningful intelligence and information analysis for a diverse transportation industry where the information flow is dynamic and complex. 
However, Amtrak added that at the stakeholder level, the ability to quickly react and deploy to interdict a terrorist threat, planning cycle, or incident based upon information that is timely and actionable is crucial. According to Amtrak, improvements in this area could significantly improve private industry’s ability to plan, defend, deter, and detect terrorist activities. Amtrak views its relationship with TSA as a very important and critical one in addressing Amtrak’s security posture on a daily basis across the intercity rail system. Amtrak also noted that it maintains relationships with other federal, state, and international agencies to improve its intelligence and information-sharing capacity. According to Amtrak, the combination of all these resources allows Amtrak to stay abreast of intelligence trends and developing information and to sift quickly through data and look for rail-centric information. In its written comments, DHS stated that, since the conclusion of our review, many of TSA’s products now include analysis of threat levels, trends, tactics, techniques, and procedures. Since this new development occurred after our review, we did not evaluate the products referred to in the statement. We encourage TSA to continue these efforts and to work with stakeholder groups to ensure that the additional analysis and actionable information provided in these products meets their needs. DHS also stated that TSA will continue working with the DHS Office of Infrastructure Protection to help modal stakeholders understand the security information currently available on HSIN-CS and other systems. DHS provided several examples of other information sources it is using. While these may be appropriate systems for disseminating information to members of the intelligence or law enforcement communities, 272 of the 275 transportation stakeholders responding to our survey did not list any of these systems among their sources for security-related information. 
DHS stated that its strategy has evolved to consider stakeholders’ preferred methods of receiving security-related information. However, it notes that this change has taken place since the conclusion of our review. As such, we are not able to evaluate this statement. We encourage TSA to increase its outreach efforts to ensure that stakeholders are aware of these mechanisms and information and take further steps to ensure that stakeholders are receiving TSA’s information products through these sources. In addition, DHS stated that TSA plans to enhance the marketing of its information solutions, including HSIN-CS, and to align its partners with its information-sharing roles and responsibilities. While these are positive steps in encouraging information sharing with stakeholders, they do not address the concern stakeholders expressed regarding their ability to locate specific information on HSIN-CS. We continue to believe that improving the search function could enhance stakeholders’ use of HSIN-CS in locating TSA products. Further, DHS said that TSA has started to develop a system to measure and monitor how stakeholders receive information, frequency of use, and methods used for customer outreach and obtaining customer feedback. Finally, DHS said that TSA will commit to creating an internal document of the roles and responsibilities of TSNM and TSA-OI for information sharing and share this document with the appropriate stakeholder groups. Doing so could help clarify responsibilities and increase accountability. DHS also provided three technical clarifications in its written comments. First, DHS stated that TSA has already begun using multiple information systems to disseminate intelligence to stakeholders, and provided examples of these systems. However, as noted above, the examples provided were not identified as sources of information by 272 of the 275 transportation stakeholders who completed our survey. 
In addition, DHS stated that TSA’s 2011 update to the TSISP is undergoing internal review and will reflect its enhanced information-sharing strategy and changes made as a result of our review, such as describing the information-sharing roles and functions of its Field Intelligence Officers. Finally, TSA stated that the context concerning our discussion of the roles and responsibilities of TSA offices regarding the sharing of specific information such as intelligence was unclear. As stated in this report, we interviewed officials from TSA-OI and the Commercial Airline, Commercial Airport, Air Cargo, Freight Rail, and Highway and Motor Carrier units within TSNM on the functions they perform in information sharing. We also stated that TSA officials from five TSNM offices provided different interpretations of the Office of TSNM’s roles and responsibilities in disseminating TSA-OI products and other security-related information. TSA noted in its letter that there are branches of the TSNM that do not interact with stakeholders. The statements in our report were based on discussions with officials from the TSNM modal offices that interact with stakeholders. We are sending copies of this report to the Secretaries of Homeland Security and Transportation, and the President and Chief Executive Officer of Amtrak. The report is also available at no charge on GAO’s website at http://www.gao.gov. Please contact me at (202) 512-4379 or lords@gao.gov if you have any questions regarding this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are acknowledged in appendix III. This report addresses the following questions: (1) To what extent are transportation stakeholders satisfied with the quality of the Transportation Security Administration’s (TSA) transportation security-related information products?
(2) To what extent are stakeholders satisfied with the mechanisms used to disseminate these products? (3) To what extent has TSA defined its roles and responsibilities for sharing security-related information with stakeholders? To assess the extent to which stakeholders are satisfied with the security-related information products that they receive from TSA and the mechanisms used to obtain them, we conducted a web-based survey of transportation stakeholders from the aviation, freight and passenger rail, and highway modes. To develop the survey and to identify the primary security-related information-sharing products, mechanisms, and the stakeholders for whom TSA maintains contact information, we interviewed officials from TSA’s Office of Intelligence (TSA-OI) and officials from the Commercial Airline, Commercial Airport, Air Cargo, Freight Rail, and Highway and Motor Carrier Transportation Sector Network Management (TSNM) offices. We also interviewed officials from industry associations representing air carriers, airports, air cargo carriers, freight and passenger rail, short line and regional railroads, state highway transportation officials, bus, truck, and motor coach operators, and airport law enforcement. While the information provided by industry association officials is not generalizable to all industry stakeholders, these associations provided industry perspectives on broad security issues facing their respective stakeholder groups. We designed draft questionnaires in close collaboration with GAO survey specialists. We conducted pretests with seven security officials—at least one from each of the sectors we surveyed—in person and by telephone. We also obtained input on a draft questionnaire from industry associations.
In September 2011, TSA announced that, as part of a headquarters realignment, TSA-OI will become part of a new Office of Intelligence and Analysis and the Office of TSNM will transition to the Office of Security Policy and Industry Engagement. We identified organizations and security officials at each organization to receive the survey using TSA’s security information product distribution lists and through interviews with aviation, passenger and freight rail, and highway industry organizations. We sent the survey to one security official at each of the organizations that we identified in our preliminary steps, which included commercial passenger air carriers, Category X and I commercial airports, air cargo carriers, Amtrak, Class I freight rail carriers, short line and regional railroads that carry toxic inhalation hazards or operate in high-threat urban areas, and state departments of transportation or emergency management. We sent the survey to the entire known population of organizations; no sampling was conducted. Each official was asked to respond on behalf of the entire organization and to consult with other officials or records if necessary to do so. We notified 339 officials on March 28, 2011, by e-mail that the survey was about to begin and updated contact information as needed. (We also learned at that time that 4 organizations had gone out of business or been consolidated, leaving 335 organizations as the total known population.) We launched our web-based survey on April 4, 2011, and asked for responses to be submitted by April 8, 2011. Log-in information was e-mailed to all contacts. We contacted by telephone and e-mailed those who had not completed the questionnaire at multiple points during the data collection period, and we closed the survey on May 18, 2011. A total of 275 organizations submitted a completed questionnaire with usable responses for an overall response rate of 82 percent, as shown in table 3.
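The response-rate arithmetic above can be sketched in a few lines. This is an illustration only; the variable names are ours, and the figures come from the survey counts in the text:

```python
# Sketch of the survey response-rate arithmetic described above (illustrative).
total_notified = 339        # officials notified on March 28, 2011
out_of_scope = 4            # organizations out of business or consolidated
population = total_notified - out_of_scope   # 335 known organizations
completed = 275             # organizations submitting usable questionnaires

response_rate = completed / population
print(f"{response_rate:.0%}")   # overall response rate of 82 percent
```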
The final instrument, reproduced in an e-supplement we are issuing concurrent with this report—GAO-12-67SP—displays the counts of responses received for each question. The questionnaire asked those transportation stakeholders responsible for security operations to identify the modes of transportation they provide, the extent to which they receive and are satisfied or dissatisfied with TSA security-related products and briefings, the mechanisms they use to obtain security information, and their satisfaction with each of these mechanisms. For the purposes of this survey, we defined the five aspects of security-related information quality as follows:
- timeliness: the degree to which you received the information within the time it was needed;
- relevance: the degree to which the information was applicable to your organization;
- completeness: the degree to which the information contained all the necessary details;
- actionability: the degree to which the information enabled you to make adjustments to your security measures, if such a change was warranted; and
- accuracy: the degree to which the information was correct.
While all known organizations were selected for our survey, and therefore our data are not subject to sampling errors, the practical difficulties of conducting any survey may introduce nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond to a question can introduce errors into the survey results. We included steps in both the data collection and data analysis stages to minimize such nonsampling errors. As we previously indicated, we collaborated with our survey specialists to design draft questionnaires, and versions of the questionnaire were pretested with seven members of the surveyed population. In addition, we provided a draft of the questionnaire to industry organizations for their review.
From these pretests and reviews, we made revisions as necessary to reduce the likelihood of nonresponse and reporting errors on our questions. Our analysts answered respondent questions and resolved difficulties that respondents had in answering our questions. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error and addressed such issues, where possible. A second, independent analyst checked the accuracy of all computer analyses to minimize the likelihood of errors in data processing. To obtain additional narrative and supporting context from stakeholders, survey respondents were given multiple opportunities to provide additional open-ended comments throughout our survey. While the survey responses cannot be used to generalize the opinions and satisfaction of transportation stakeholders as a whole, the responses provide data for our defined population. We also conducted site visits, or held teleconferences, with security and management officials from a nonprobability sample of 18 aviation, rail, and highway transportation stakeholders across the nation to determine specific areas of satisfaction and dissatisfaction with TSA security-related information products and which mechanisms are most routinely used by these stakeholders to obtain security-related information. These stakeholders were selected to generally reflect the variety of public and private entities in terms of size, location, and transportation mode. Because we selected a nonprobability sample of transportation stakeholders to interview, the information obtained cannot be generalized to the overall population of stakeholders. However, the interviews provided illustrative examples of the perspectives of various stakeholders about TSA’s information-sharing products and mechanisms and corroborated information we gathered through other means.
To determine the extent to which TSA has defined and documented information-sharing roles and responsibilities, we reviewed documents, when available, that described TSA’s information-sharing functions. Primarily, we reviewed the 2009 and 2010 Transportation Security Information Sharing Plans (TSISP). We compared the TSISPs to national plans and documents that describe recommended practices for information sharing such as the Information Sharing Council’s Information Sharing Environment Implementation Plan and the National Infrastructure Protection Plan. We also reviewed our own standards for internal controls. Because TSA does not have an information flow diagram or document describing or mandating information sharing between TSA-OI and the TSNM offices, we interviewed senior TSA officials from TSA-OI and each of the modal TSNM offices to discuss their roles and responsibilities in sharing information with public and private stakeholders. We compared the officials’ interpretations of their roles and responsibilities to identify the extent to which they were consistent across modes and offices. We conducted this performance audit from May 2010 through November 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, individuals making key contributions to this report include Jessica Lucas-Judy, Assistant Director; Kevin Heinz, Analyst in Charge; Adam Anguiano; Katherine Davis; Tracey King; Stan Kostyla; Landis Lindsey; Ying Long; Lauren Membreno; Michael Silver; and Meg Ullengren. 
Department of Homeland Security: Progress Made and Work Remaining in Implementing Homeland Security Missions 10 Years after 9/11. GAO-11-881. Washington, D.C.: September 7, 2011. Information Sharing Environment: Better Road Map Needed to Guide Implementation and Investments. GAO-11-455. Washington, D.C.: July 21, 2011. Rail Security: TSA Improved Risk Assessment but Could Further Improve Training and Information Sharing. GAO-11-688T. Washington, D.C.: June 14, 2011. High Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011. Public Transit Security Information Sharing: DHS Could Improve Information Sharing through Streamlining and Increased Outreach. GAO-10-895. Washington, D.C.: September 22, 2010. Information Sharing: Federal Agencies Are Sharing Border and Terrorism Information with Local and Tribal Law Enforcement Agencies, but Additional Efforts Are Needed. GAO-10-41. Washington, D.C.: December 18, 2009. Information Sharing Environment: Definition of the Results to Be Achieved in Improving Terrorism-Related Information Sharing Is Needed to Guide Implementation and Assess Progress. GAO-08-492. Washington, D.C.: June 25, 2008. Information Sharing: The Federal Government Needs to Establish Policies and Processes for Sharing Terrorism-Related and Sensitive but Unclassified Information. GAO-06-385. Washington, D.C.: March 17, 2006. Critical Infrastructure Protection: Improving Information Sharing with Infrastructure Sectors. GAO-04-780. Washington, D.C.: July 9, 2004.
The U.S. transportation system, composed of aviation, freight rail, highway, maritime, mass transit and passenger rail, and pipelines, moves billions of passengers and millions of tons of goods each year. Disrupted terrorist attacks involving rail and air cargo in 2010 demonstrate the importance of effective information sharing with transportation security stakeholders. The Transportation Security Administration (TSA) is the lead agency responsible for communicating security-related information with all modes. In response to the Implementing Recommendations of the 9/11 Commission Act of 2007, GAO assessed (1) the satisfaction of transportation stakeholders with the quality of TSA's transportation security information products, (2) satisfaction with mechanisms used to disseminate them, and (3) the extent to which TSA's roles and responsibilities are clearly defined. GAO surveyed 335 aviation, rail, and highway stakeholders (with an 82 percent response rate); reviewed agency planning documents; and interviewed industry associations, transportation stakeholders, and Department of Homeland Security officials. An electronic supplement to this report--GAO-12-67SP--provides survey results. Transportation stakeholders who GAO surveyed were generally satisfied with TSA's security-related information products, but identified opportunities to improve the quality and availability of the disseminated information. TSA developed a series of products to share security-related information with transportation stakeholders such as annual modal threat assessments that provide an overview of threats to each transportation mode--including aviation, rail, and highway--and related infrastructure. Fifty-seven percent of the stakeholders (155 of 275 who answered this question) indicated that they were satisfied with the products they receive.
However, stakeholders who receive these products were least satisfied with the actionability of the information--the degree to which the products enabled stakeholders to adjust their security measures. They noted that they prefer products with more analysis, such as trend analysis of incidents or suggestions for improving security arrangements. Further, not all stakeholders received the products. For example, 48 percent (128 of 264) of the stakeholders reported that they did not receive a security assessment in 2010, such as TSA's annual modal threat assessment. Improving the analysis and availability of security-related information products would help enhance stakeholders' ability to position themselves to protect against threats. Stakeholders who obtained security-related information through TSA's Web-based mechanisms were generally satisfied, but almost 60 percent (158 of 266) of stakeholders GAO surveyed had never heard of the Homeland Security Information Sharing Network Critical Sectors portal (HSIN-CS). DHS views HSIN as the primary mechanism for sharing security-related information with critical sectors, including transportation stakeholders. Forty-three percent of rail stakeholders, 28 percent of highway stakeholders, and 72 percent of aviation stakeholders--who consider TSA's aviation Web Boards as their primary information-sharing mechanism--had not heard of HSIN-CS. Among the 55 stakeholders that had logged on to HSIN-CS, concerns were raised with the ability to locate information using the mechanism. Increasing awareness and functionality of HSIN-CS could help ensure that stakeholders receive security information, including TSA products. Defining and documenting the roles and responsibilities for information sharing among TSA offices could help strengthen information-sharing efforts. 
Officials from TSA's Office of Intelligence consider TSA's Transportation Sector Network Management offices to be key conduits for providing security-related information directly to stakeholders. However, officials from these offices differed in their understanding of their roles. For instance, officials told GAO that their role was to communicate policy and regulatory information, rather than threat-related information. While TSA officials look to the current Transportation Security Information Sharing Plan for guidance, it does not include key elements of the approach that TSA uses to communicate security-related information to stakeholders. For example, it does not describe the roles of TSA's Field Intelligence Officers, who facilitate the exchange of relevant threat information with local and private entities responsible for transportation security. Clearly documenting roles and responsibilities for sharing security-related information with transportation stakeholders could improve the effectiveness of TSA's efforts and help ensure accountability. GAO recommends that TSA, among other actions, (1) address stakeholder needs regarding the quality of analysis in and availability of its products, (2) increase awareness and functionality of its information sharing mechanisms, and (3) define and document TSA's information sharing roles and responsibilities. DHS concurred with GAO's recommendations.
Job Corps was established as a national employment and training program in 1964 to address employment barriers faced by severely disadvantaged youths. Job Corps enrolls youths aged 16 to 24 who are economically disadvantaged, in need of additional education or training, and living under disorienting conditions such as a disruptive home life. In program year 1996, nearly 80 percent of the participants were high school dropouts and almost two-thirds had never been employed full-time. Participating in Job Corps can lead to placement in a job or enrollment in further training or education. It can also lead to educational achievements such as attaining a high school diploma and improving reading or mathematics skills. Job Corps currently operates 113 centers throughout the United States, including Alaska, Hawaii, the District of Columbia, and Puerto Rico. Major corporations and nonprofit organizations manage and operate 85 Job Corps centers under contractual agreements with Labor. Contract center operators are selected through a competitive procurement process that takes into account proposed costs, an operator’s expertise, and prior program performance. In addition, the U.S. Department of the Interior and the U.S. Department of Agriculture operate 28 Job Corps centers, called civilian conservation centers, on public lands under interagency agreements with Labor. Each center provides participants with a wide range of services, including basic education, vocational skills training, social skills instruction, counseling, health care, room and board, and recreational activities. One feature that makes Job Corps unique is that, for the most part, it is a residential program. About 90 percent of the youths enrolled each year live at Job Corps centers and are provided services 24 hours a day, 7 days a week.
The premise for boarding participants is that most come from a disruptive environment and, therefore, can benefit from receiving education and training in a different setting where a variety of support services is available around the clock. The comprehensive services Job Corps provides make it a relatively expensive program. According to Labor’s program year 1996 figures, the average cost per Job Corps participant was more than $15,000. Cost varies according to how long Job Corps participants remain in the program. Participants stay in the program for an average of about 7 months but may stay as long as 2 years. Labor estimates the cost for a participant who remains in the program for a year to be about $25,000. Vocational training is a critical element of the Job Corps program. This training is designed to offer individualized, self-paced, and open entry-open exit instruction to allow participants to progress at their own pace. Vocational training can be provided in any combination of three ways. Most vocational training is offered by instructors who are Job Corps center staff. Other vocational courses are taught by private providers under contract to the center. These private providers typically include vocational schools and community colleges. About a third of the vocational training expenditure is provided by national labor unions and business organizations under sole source contracts with Labor. In program year 1996, Job Corps’ operating costs totaled about $986 million, of which $144 million, or about 15 percent, was for vocational training (see table 1). Overall, Job Corps offers training in 100 different vocations. Although the number of vocations offered at any one Job Corps center varies, most centers offer training in 7 to 10 different vocations. Some centers, however, offer training in as few as 5 vocations while others offer training in as many as 31 different vocations. 
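As a rough consistency check (the calculation is ours; the dollar figures and average stay come from the text), Labor's roughly $25,000 full-year estimate follows from prorating the roughly $15,000 average cost over the roughly 7-month average stay:

```python
# Prorating the program year 1996 average cost per participant (illustrative).
avg_cost_per_participant = 15_000   # "more than $15,000" per participant
avg_stay_months = 7                 # average length of stay in the program

implied_monthly_cost = avg_cost_per_participant / avg_stay_months
implied_annual_cost = implied_monthly_cost * 12
print(round(implied_annual_cost))   # roughly 25,700, near Labor's ~$25,000 estimate
```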
Some vocations are available at most centers, while others are available at only a single center. For example, more than 80 percent of the centers offer training in business clerical, culinary arts, building and apartment maintenance, and carpentry. Thirty-one vocations, including computer programmer, asphalt paving, barber, teacher aide, and cable TV installer, are offered only at a single center. Many centers also offer off-site advanced career training at such institutions as vocational schools, community colleges, and universities for participants who have been in the program for at least 6 months. Regardless of who provides the training, Job Corps policy requires that all vocational training programs use competency-based curricula that contain a series of skills, or competencies, that participants must attain. According to Labor officials, each vocational training program’s curriculum and set of required skills are regularly reviewed and updated by industry advisory groups consisting of business, industry, and training providers. Labor uses a series of nine measures to report on the performance of the program nationally and to assess the performance of individual Job Corps centers. The measures relate to placement—in a job, in education, or in military service—learning gains in mathematics and reading, earning a general equivalency diploma certificate, completing vocational training, placement in a job related to the training received, and placement wage. In program year 1996, Job Corps reported that 80 percent of the participants leaving the program were placed—70 percent in jobs or the military and 10 percent enrolled in education—and 62 percent of those who were placed in jobs or the military obtained a job related to their training. Job Corps also reported that 48 percent of those who left the program completed vocational training. 
Labor has several activities to improve Job Corps’ employer and community linkages to ensure that vocational training is appropriate for local labor markets and relevant to employers’ needs. These efforts include initiatives enacted by Job Corps’ national office and regional offices, as well as efforts by individual Job Corps centers. Since 1984, Labor has used industry advisory groups to review vocational course curricula to ensure that course content is relevant to the job market. Each year, Labor selects a number of vocational offerings for review by an Industry Advisory Group consisting of Job Corps instructors and academic program representatives as well as industry representatives from each vocational offering being reviewed. For example, recent industry representatives included computer operators and repair technicians, electronic assemblers, diesel and heavy equipment mechanics, health occupation workers, material handlers, tile setters, and clerical workers. The Industry Advisory Group recommends to Labor changes to Job Corps’ vocational training curricula, materials, and equipment. Vocational offerings are evaluated and updated on a 3-to-5-year cycle dictated by industry changes and the number of students participating in each vocational training program. In program year 1995, Labor introduced a school-to-work initiative at three Job Corps centers combining center-based training with actual worksite experience related to it. Labor expanded this initiative to an additional 30 centers in program year 1996 and to 30 more centers in program year 1997. Labor provided financial incentives and supportive services to encourage centers to participate in the school-to-work initiative. According to Labor officials, the school-to-work initiatives have resulted in extensive partnerships being established between the centers, area businesses, and local school systems. 
Through these partnerships, employers are providing worksite learning experiences, suggesting approaches for integrating curricula, developing assessment criteria for documenting skill mastery, and participating in career exposure activities. At one school-to-work Job Corps center that we visited, 35 participants from program year 1996 were involved in this initiative and all were placed—32 had jobs, 2 returned to school, and 1 joined the military. Furthermore, 70 percent of the jobs were directly related to the vocational training received in Job Corps. Labor also involves local business and community leaders in deciding which vocational training programs are to be offered at newly established Job Corps centers. For example, at the new center we visited, we found that 2 years prior to the awarding of the center’s contract, decisions on the vocations to be offered were made with input from local business and community leaders, including representatives of the mayor’s office, the private industry council, the school department, and local businesses. The result was that this center does not offer many of the traditional Job Corps vocational programs, such as clerical, culinary arts, landscaping, and building and apartment maintenance. Instead, it has nine vocational areas in such high-demand occupations as medical assistant, phlebotomy and EKG technician, and computer repair. At another new center, Labor officials stated that local labor market information along with input from local community and business leaders, including the local private industry council, union representatives, local school system, health groups, and chamber of commerce, ensured that the vocational training courses offered at that center would be appropriate and current given the local economy. 
Labor officials also informed us that changes to vocational training offerings at existing centers result from changes in labor market demand or poor performance of a particular vocational training program. Centers obtain approval for a change by completing the appropriate paperwork for a request for change and submitting it to either the regional office (if the change involves a center-operated or center-contracted vocational offering) or the national office (if the change involves a vocational course offered by a national labor union or business organization). Labor then assesses the request to change course offerings and reviews the placement analyses, wages reported, female participation rate in the course, local labor market information, and facility requirements. In addition, Labor requires the center to obtain statements from three employers stating that the vocational change is appropriate and relevant. All five of the centers we visited had recently made changes to their vocational course offerings. For example, one center added a physical therapy course after receiving numerous requests from clinics and hospitals within the community. The center was able to add this course by dropping a cosmetology course. Another center identified a local demand for qualified workers in retail sales and tourism. The center added training in these vocations while reducing the size of its clerical training program. In addition to national efforts, three of Labor’s regional offices have developed their own initiatives to improve linkages between Job Corps centers and employers. In one region, business leaders representing a variety of industries met with Labor and center staff to provide observations of the program and the participants they hire. The group—a business roundtable—set up a framework for obtaining employer input into the operation of the Job Corps program for the benefit of young people, employers, community leaders, and the Job Corps system nationwide. 
In an effort to bridge the gap between the needs of private industry and vocational training, the roundtable recommended actions and supported the implementation of new strategies to resolve employer issues that it identified and prioritized. As a direct result of this roundtable, concrete linkages were established. For example, a bank involved as a school-to-work program participant provided equipment and instructors to incorporate bank telling into the center’s clerical program. According to Labor officials, the initiative was successful, and the regional office is currently exploring the possibility of duplicating this effort in several other Job Corps centers. At another center within the region, an electronics firm reviewed the center’s electronics curriculum and suggested additional skills allowing program participants to qualify for higher-paying jobs. Another region has endorsed a major initiative between a Job Corps center and the Q-Lube Corporation whereby a building at the center was renovated to exactly meet the specifications of a Q-Lube facility. The renovation used student painters and carpenters from other vocational training courses and Job Corps provided additional funding for this course. Q-Lube donated the equipment to the center and also provided a trained instructor. The course offering is identical to the program curriculum Q-Lube teaches at non-Job Corps sites. According to Labor officials, since the implementation of this initiative, Q-Lube has become a major employer and training link within the region. The same regional office contacted a shipbuilding company advertising for 500 shipbuilders and worked with the company to develop a vocational training program in welding for Job Corps students that would be appropriate and relevant to the company’s needs. The company provided the two pieces of equipment needed for training purposes. 
Students were trained at the Job Corps center under conditions similar to those in the shipbuilding environment, tested by the company, and then provided additional training at the shipbuilding site. In addition, the company provided low-cost housing and full salary to students who passed the test before graduating from the center. The company was pleased with the students’ qualifications, attitudes, and work ethics and requested that the Job Corps program train another 100 students. The region is currently recruiting and training students for this vocation in an attempt to further meet the needs of the shipbuilding industry. A third regional office is involved in a project to increase the involvement of employers in all facets of Job Corps operations in their region, including curriculum development, customized training, work-based learning, mentoring, identifying workforce needs, and donating staff resources and equipment. The goal of this outreach campaign is to build substantial relationships between Job Corps and the employer community at several different but mutually supportive levels: center, state, regional, and national. Labor selected a contractor through a competitive process, assisted by several national groups, to research, test, and revise its proposed strategy for increasing employer involvement within the region. Initially, the project concentrated on three centers in different states within the region. The project will soon expand to include all states and Job Corps centers within the region. If successful, the project will be expanded throughout the Job Corps system. Job Corps centers have also independently established linkages with employers. These linkages include negotiating with employers to provide furniture and vocational training equipment and contracting with employers to train and hire program participants. 
For example, at one center a national employer has donated computers, copy machines, desks, chairs, and conference tables valued at approximately $50,000. At another center, an automobile maker has donated a four-wheel-drive sport utility vehicle for students in the auto repair vocational training course in an attempt to make the training more relevant to the vehicles that students would actually be working on. The center is currently working with the automobile maker to donate a car for the same purpose. Local automobile dealers are familiar with the center’s linkages to the national automobile maker and also have donated cars needing repair. In addition, local automobile dealers have trained students through the school-to-work program and have hired many of the Job Corps program participants. Another center holds monthly employer relations meetings in which approximately 200 local employers and community representatives attend a luncheon catered by the center’s culinary arts students. Speakers discuss local employment opportunities and donate funds to benefit Job Corps participants. The funds, which are managed by the center’s Community Relations Council, are used to provide tuition scholarships for program graduates continuing their education upon completion from the center. The scholarships range from $500 to $1,000 each and are awarded to program graduates who have pursued excellence and attained a higher measure of success than their fellow program participants. To date, about $10,000 has been raised for scholarships. A fourth center established an effective business relationship with a computer graphics firm in California. According to center officials, 31 Job Corps students enrolled in various vocational training programs, including building and apartment maintenance, clerical, electrical, and landscaping; participated in 12-week internships at the computer firm; and attended an anger management course that had been developed for the firm’s employees. 
These students earned $10 per hour within a work-based environment in which the firm’s staff provided on-the-job training and mentoring. The center placement official claimed that the success of the internship program is evidenced by the 28 students who obtained primarily training-related jobs after leaving the Job Corps program. Two performance indicators that Labor uses to evaluate Job Corps’ success are misleading, overstating the extent to which vocational training is completed and job placements are training-related. Labor reports that nationwide about 48 percent of all program participants complete their vocational training and that about 62 percent of the jobs obtained by program participants are related to the training they received. However, we found that nationally only about 14 percent of the program participants satisfied all their vocational training requirements and that about 41 percent of the reported training-related job placements at the five centers we visited were questionable. Having complete and accurate program performance information is important to evaluating program success and being able to identify areas needing improvement. Nationally, Job Corps reported that in program year 1996, 48 percent of its participants completed vocational training. This information is misleading. We found that only about 14 percent of the program year 1996 participants actually completed all the required tasks of their vocational training programs. Job Corps’ national data system uses three categories to identify a participant’s level of vocational training progress: trainee, completer, and advanced completer. A trainee is a participant who has not completed any vocational training component, a completer has accomplished at least one component of a vocational program, and an advanced completer has fully satisfied all required components of a vocational training program. 
Labor considers participants in the last two categories to be vocational training completers. Thus, Job Corps vocational completion statistics include participants who have only partially completed the required skills of a vocational training program. Each Job Corps vocational training program has a comprehensive list of duties and tasks that participants are expected to perform. For example, the clerical vocational training program has 140 duties and tasks that must be mastered to fully complete the program, food service has 109, building and apartment maintenance has 123, and carpentry has 75. Vocational training programs, however, can be divided into several components. For example, in food service, the first component entails making a sandwich and preparing a salad (covering 39 of the 109 tasks). The second component adds preparing breakfast dishes; heating convenience foods; preparing meats, poultry, fish, and pasta; and cooking vegetables. The final component adds preparing soups, sauces, and appetizers as well as food management skills, such as preparing a menu, setting a table, developing a food preparation schedule, and conducting safety inspections. Vocational training instructors assess participants’ performance for each duty and task, and Job Corps policy permits participants to be classified as vocational completers if they accomplish the duties and tasks associated with any one component of the vocational training program—regardless of whether they can perform all the duties and tasks required in the entire vocational training curriculum. Depending on the vocation, the percentage of tasks that a participant must accomplish to be considered a completer ranges from virtually all, as in the health occupations program, to about a quarter, as in the welding program (see table 2). Thus, Job Corps policy allows participants to be classified as vocational completers if they can perform some portion of a required curriculum. 
For example, in the food service vocational training program, accomplishing just the tasks associated with the salad and sandwich making component would qualify a participant as a vocational completer. At the centers that we visited that had a food service program, nearly half of the reported vocational completers had completed only this first component. Similarly, nearly 80 percent of the vocational completers in the carpentry program at five centers completed only the first of three components. In contrast, about 15 percent of the vocational completers of the centers’ health occupations program completed only the first of two components (see fig. 1). Overall at the five centers, 43 percent of the vocational completers completed only the first component of their vocational training programs. The reported percentage of vocational completers at the five centers we visited substantially overstated the percentage of participants who fully completed their vocational training programs. At these centers, about 51 percent of the 3,500 participants were considered to be vocational completers. However, only about 18 percent completed all their vocational training requirements. As shown in figure 2, the percentage of program year participants fully completing vocational training programs ranged from about 11 percent at one center to about 27 percent at another center. Nonetheless, these two centers had reported vocational completion rates of 65 percent and 73 percent, respectively. Closer examination of the participants who completed only the first component of their vocational training program showed that many spent a short period of time—less than 90 days—enrolled in vocational training. At the five centers that we visited, nearly 15 percent of the participants who had completed the first component of their vocational training spent fewer than 90 days in training. This ranged from about 9 percent at one center to about 20 percent at another center. 
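The size of the gap between reported and actual completion at the five centers can be checked with simple arithmetic. The sketch below is purely illustrative, using the report's figures of roughly 3,500 participants, a 51 percent reported completion rate, and an 18 percent full-completion rate; the variable names are ours, not Job Corps terminology.

```python
# Illustrative arithmetic only; figures are the five-center totals cited
# above, and the variable names are ours, not Job Corps terminology.

participants = 3500            # participants at the five centers visited
reported_rate = 0.51           # share counted as "completers"
full_completion_rate = 0.18    # share who finished every component

reported_completers = round(reported_rate * participants)     # ~1,785
full_completers = round(full_completion_rate * participants)  # ~630

# Reported completion overstates full completion by this factor:
overstatement = reported_rate / full_completion_rate

print(f"{reported_completers} reported vs. {full_completers} full "
      f"completers (overstated {overstatement:.1f}x)")
```

At these rates, the reported statistic counts nearly three times as many completers as the number of participants who finished an entire curriculum.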
Labor reported that in program year 1996, 62 percent of participants placed in employment found jobs that matched the training they received in Job Corps. Our review of this information at the five centers we visited, however, suggests that this report substantially overstates the program’s accomplishments. We found that the validity of about 41 percent of the reported job training matches at these centers was questionable. In a previous report, we expressed concern with Labor’s methodology for identifying training-related placements. We concluded that Labor gave its placement contractors wide latitude in deciding whether a job was a job training match and identified many jobs that appeared to bear little, if any, relationship to the training received. We also noted that placement contractors used some creativity when reporting job titles in order to obtain a job training match. We questioned the accuracy of claims made by placement contractors that job training matches could be obtained for participants trained as bank tellers, secretaries, and welders who obtained jobs in fast food restaurants. To assess the validity of reported job training matches, we reviewed all training-related job placements reported at the five centers we visited. We verified the results by contacting a representative sample of employers who had hired the Job Corps participants. In this process, we questioned a significant number of the claimed matches. We questioned job training matches because either a job title did not seem appropriate for the employer listed (such as bank teller at a fast food restaurant) or the job title did not seem to relate to the vocational training (such as a job as an information clerk at a car rental agency after training as a home health aide). 
We then interviewed a random sample of 183 employers who had hired Job Corps participants in placements that were reported as training-related but that we had questioned. Table 3 shows additional questionable examples of jobs reported as being training-related. At the five centers we visited, we questioned 598 of the 1,306 reported job training matches. The percentages of these questionable job training matches ranged from about 30 percent at one center to about 64 percent at another center (see fig. 3). Our discussions with employers yielded examples of jobs that, on the surface, were related to the training received, based on the reported job title, but were actually quite unrelated to this training. For example, one participant trained in welding was reported as obtaining a job as a welding machine operator at a temporary agency, but the employer informed us that this individual was actually hired to shuttle vehicles between airports. Another participant trained in auto repair was reportedly hired as a petroleum and gas laborer but was actually hired to clean residential homes. A third participant received clerical training and was reportedly hired as a sales correspondent but actually sorted bad tomatoes from good ones on a conveyor belt. All three of these Job Corps participants, therefore, were erroneously reported as having been placed in jobs related to their training. Labor’s monitoring of reported job training matches appears to be inadequate. Labor officials stated that Job Corps’ regional offices are responsible for monitoring all aspects of placement contractor performance but that there is no fixed schedule for such monitoring. They stated that regular desk reviews of all placement forms, for both accuracy and completeness, take place as part of the process for paying vouchers submitted by placement contractors. 
Our findings suggest that there is reason to question whether this procedure is adequate to ensure that reported information is accurate. Labor has contracted with national labor and business organizations under sole source arrangements for more than 30 years. About a third of Job Corps’ vocational training is provided by such organizations contracted under sole source arrangements. Although Labor has failed to provide adequate support to justify sole source procurement for vocational training, it has nine sole source contracts with national labor and business organizations, totaling about $46 million (see table 4). Federal procurement regulations require several conditions to be met for an agency to award a noncompetitive contract. These include (1) establishing the need for services that can be provided by only one source, (2) documenting through a market survey or on some other basis that no other known entity can provide the required services, and (3) stating a plan of action the agency may take for removing barriers to competition in the future. Labor has offered three broad considerations in justifying its sole source awards rather than using competitive procedures in contracting with the national training contractors. The first is the contractors’ past relationship with Job Corps—that is, experience with Labor’s Employment and Training Administration, in general, and with Job Corps specifically and thorough knowledge of Job Corps’ procedures and operations. The second is organizational structure—that is, a large nationwide membership related to a trade and their strong relationship with national and local apprenticeship programs. The third is instructional capability—that is, a sufficiency of qualified and experienced instructors, the ability to provide training specifically developed for the learning level of Job Corps students, and the ability to recognize training as credit toward meeting the requirements of becoming a journey-level worker. 
In addition, Labor officials stated that a main reason it contracts on a sole source basis is that the contractors maintain an extensive nationwide placement network. With regard to Labor’s long-standing practice of awarding sole source contracts for a portion of Job Corps’ vocational training, our review of Labor’s current and proposed justification for its sole source contracts and our previous work on this issue raise questions about their use. Labor’s sole source justification essentially lists the qualities Labor expects in a contractor. It does not establish that the services contracted for can be provided by only one source. Furthermore, Labor acknowledged that its national data system has no information to indicate the extent to which national training contractors are directly responsible for placing Job Corps participants in jobs. Labor’s proposed justification for upcoming contracts has many of the weaknesses of the current justification. Job Corps is an expensive job training program that provides comprehensive services to a severely disadvantaged population. For more than 30 years, Job Corps has been assisting young people who need and can benefit from an unusually intensive program, operated primarily in a residential setting. Labor and the Congress need meaningful and accurate information if they are to effectively manage and oversee the Job Corps program. However, our work raises serious questions regarding Labor’s claims about Job Corps’ achievements. Labor’s reporting on the percentage of participants who are vocational completers includes many who have not actually completed their training; many have completed only one component of a vocational training program. Similarly, Labor’s reported statistics on the percentage of jobs obtained by participants that were related to the training they received are inaccurate. 
Reported job training matches include a significant number of jobs that have no apparent relationship to the training received and whose job titles have no apparent relationship to the employers’ business. In addition, Labor has continued its long-standing practice of awarding sole source contracts for a substantial portion of Job Corps’ vocational training—a practice we suggested it re-evaluate in 1995. To date, Labor has not provided adequate support to justify sole source procurement for vocational training services provided by the nine national labor and business organizations. Labor’s justification for sole source procurement does not explain or demonstrate the basis for Labor’s determination of need. Improvements are needed to ensure that the information used to assess Job Corps program performance is accurate and meaningful. Specifically, two of the measures used to judge the success of the Job Corps program—vocational completion and job training match—provide misleading information that overstates program outcomes. Therefore, we recommend that the Secretary of Labor more accurately define and report information on the extent to which program participants complete vocational training and develop a more accurate system of reporting training-related jobs and effectively monitor its implementation. In addition, because Labor has not presented adequate justification for its long-standing practice of contracting on a sole source basis with nine national labor and business organizations for vocational training, we recommend that the Secretary of Labor properly justify its use of noncompetitive procedures if it is to continue to award contracts for vocational training services. In so doing, the agency should assess whether vocational training could be served as well through contracts competed for locally or regionally. 
In comments on a draft of this report, Labor expressed concern about our conclusion that two performance measures—vocational training completion and job training matches—overstated Job Corps’ success and misrepresented its accomplishments. Nevertheless, Labor agreed to implement our recommendations for improving the information provided by these two measures. Labor emphasized that it did not intend to overstate Job Corps program performance in any area. Labor further noted that it places strong emphasis on performance results and data integrity and is therefore concerned about the findings contained in the report. With regard to vocational training completion, Labor stated that it was never its intention that all students master all competencies on an occupation’s training achievement record. Instead, a set of competencies for each occupational area was developed by Labor, together with industry groups, to identify appropriate competency levels needed to qualify for particular occupations. For example, Labor noted that to qualify as a full mechanic would require completion of all competencies in the automotive area, but a participant could qualify as a mechanic’s helper or brake repair mechanic by completing a subset of the full automotive training achievement record. Labor also noted that even though vocational completion may be an imperfect measure, it is a good predictor of placement, job training match, and wages. However, Labor stated that it understood and shared our concern that the terminology used to report this information may be subject to misinterpretation. Therefore, Labor said that it would take immediate action to clarify the definition of vocational completion in all subsequent Job Corps publications. 
In addition, Labor noted that because of the perspective gained through the recent oversight hearings and our report, it would review the extent to which the current definition may provide insufficient incentive to some students to obtain the maximum amount of training within the vocational training program. Labor noted that in direct response to these issues, it has initiated a comprehensive and detailed analysis of vocational completion and stated that it will develop a more precise and comprehensive description of student completion levels. We believe that the actions Labor is taking to more clearly identify what it means by a vocational training completer will avoid future confusion about what is being reported. The actions will also clarify that it is not Labor’s intent to have all Job Corps participants complete all aspects of a vocational curriculum but, rather, to complete to a level that is appropriate for each individual. Such levels, as Labor noted, would correspond to industry-agreed competencies that would qualify a participant for a specific job. In addition, as Labor clarifies and refines its measures, it is likely that more will be learned about the relationship between completing various levels of a vocational program and the degree of success a participant achieves. This is an important aspect of monitoring performance and could lead to program improvements. Regarding job training matches, Labor stated that it shares our concern about the validity of some of the matches identified in the report. Labor noted that it is currently changing to a different system for determining job training matches that will make the determination more manageable and easier to oversee. This new system is expected to be fully implemented by the close of this calendar year. In addition, Labor stated that it is developing more stringent quality control and oversight procedures to preclude questionable matches. 
We believe Labor’s proposed improvements to its assessment of whether job placements are related to the participants’ training and the monitoring of the reporting of these data will improve the validity and utility of this information. Regarding Labor’s use of sole source contracting with nine national labor unions and business organizations, Labor disagreed that it needed to do more to properly justify its use of noncompetitive procedures and expressed its belief that Job Corps’ training programs could not be served as well through locally or regionally competed procurements. Labor asserts that participants leaving national training contractor programs consistently achieve better outcomes, such as higher wages, than other participants. Labor also points out that it has received negligible responses to the last two invitations for interested organizations to submit capability statements for the administration and operation of vocational training programs and placement activities currently operated by national organizations. Labor contends that the continued strong performance of its sole source contracts and the lack of response to its attempts to solicit other qualified providers properly justify its decision to use noncompetitive procedures. Labor identified some changes, however, including that it will require the national contractors to report monthly on the number of participants who are placed directly into jobs and apprenticeships, and it has established higher performance standards for national training contractors. We continue to believe that Labor has not adequately justified its use of sole source contracts. Labor has been unable to determine the extent to which national training contractors are responsible for placing participants and thus for their reported better performance. However, Labor’s new requirement for these contractors to report on their placements should improve Labor’s ability to assess their performance. 
From our review of Labor’s last two invitations for organizations to submit capability statements for the administration and operation of vocational training programs and placement activities, we conclude that the agency did not clearly state the goods and services required and was overly restrictive with respect to contractor qualifications. Thus, we believe that the two published invitations Labor cites were inadequate to inform potentially capable entities of an opportunity to compete or to afford them a reasonable opportunity to provide credible responses. As a result, Labor has not determined the availability of other potential sources and, therefore, has not properly justified its use of noncompetitive procedures. In addition, Labor suggested two points of technical clarification regarding the approval process that Job Corps centers use to change vocational offerings and the involvement of the national office in an employment initiative in one Job Corps region. We modified the report where appropriate. (Labor’s entire comments are printed in app. III.) We are sending copies of this report to the Secretary of Labor, the Director of the Office of Management and Budget, relevant congressional committees, and others who are interested. Copies will be made available to others on request. If you or your staff have any questions concerning this report, please call me at (202) 512-7014 or Sigurd R. Nilsen at (202) 512-7003. Major contributors to this report are listed in appendix IV.
Pursuant to a congressional request, GAO reviewed Job Corps' vocational training component to describe the program's contracting policies and to assess contractor performance, focusing on: (1) how Job Corps ensures that vocational training is appropriate and relevant to employers' needs and the extent to which participants are completing vocational training and obtaining training-related jobs; and (2) Job Corps' process for contracting with vocational training providers. GAO noted that: (1) the Department of Labor has several activities to foster Job Corps' employer and community linkages to ensure the appropriateness of its vocational training to local labor markets and its relevance to employers' needs; (2) Labor has industry advisory groups that regularly review vocational course curricula to ensure their relevance to today's job market; (3) Labor has also introduced a school-to-work initiative designed to link Job Corps with local employers combining center-based training with actual worksite experience at more than half the Job Corps centers; (4) complementing these national efforts, three of Labor's regional offices have developed their own initiatives to improve linkages between Job Corps and local labor markets; (5) despite Labor's efforts to increase the effectiveness of its vocational training through employer and community linkages, Job Corps data on the extent to which participants complete vocational training and obtain training-related jobs are misleading and overstate the program's results; (6) although Job Corps reported that 48 percent of its program year 1996 participants completed their vocational training, GAO found that only 14 percent of the program participants actually completed all the requirements of their vocational training curricula; (7) the rest of the participants whom Job Corps considered to be vocational completers had performed only some of the duties and tasks of a specific vocational training program; (8) Labor also reported that 
62 percent of the participants nationwide who obtained employment found jobs that matched the vocational training received in Job Corps; (9) at the five centers GAO visited, however, the validity of about 41 percent of the job placements reported by Labor to be training-related was questionable; (10) in looking at how training providers are selected, GAO found that about a third of Job Corps' vocational training has been provided under sole source contracts awarded to national labor and business organizations for more than 30 years, but in GAO's opinion, Labor has not adequately justified procuring these training services noncompetitively; (11) a principal reason Labor has cited for awarding these contracts on a sole source basis is that these organizations maintain an extensive nationwide placement network and are better able than nonnational organizations to place Job Corps participants who complete their training; and (12) Labor has provided no data, however, to show the extent to which these sole source contractors actually place Job Corps participants nationwide.
The International Space Station program has three key goals: (1) maintain a permanent human presence in space, (2) conduct world-class research in space, and (3) enhance international cooperation and U.S. leadership through international development and operations of the space station. Each of the partners is to provide hardware and crew, and each is expected to share operating costs and use of the station. On-orbit assembly of the space station began in November 1998 and, since October 2000, two to three crew members, who maintain and operate the station and conduct hands-on scientific research, have permanently occupied the space station. The space station is composed of numerous modules, including solar arrays for generating electricity, remote manipulator systems, and research facilities. The station is being designed as a laboratory in space for conducting experiments in near-zero gravity. Life sciences research on how humans adapt to long durations in space, biomedical research, and materials-processing research on new materials or processes are under way or planned. In addition, the station will be used for various earth observation activities. Figure 1 shows the International Space Station on-orbit. Since its inception, the station program has been plagued with cost and schedule overruns. When the space station’s current design was approved in 1993, NASA estimated that its cost would be $17.4 billion. By 1998, that estimate had increased to $26.4 billion. In January 2001, NASA announced that an additional $4 billion in funding over a 5-year period would be required to complete the station’s assembly and sustain its operations. By May 2001, that estimated cost growth increased to $4.8 billion. Since fiscal year 1985, the Congress has appropriated about $32 billion for the program. 
In an effort to control space station costs, the administration announced in its February 2001 Budget Blueprint that it would cancel or defer some hardware and limit construction of the space station at a stage the administration calls “core complete.” The administration said that enhancements to the station might be possible if NASA demonstrates improved cost-estimating and program management, but the administration is committed only to completion of the core complete configuration. In July 2001, the NASA Administrator appointed an independent International Space Station Management and Cost Evaluation Task Force to assess the financial management of the station program and make recommendations to get costs under control. The task force published its report in November 2001 and recommended that the program (1) extend crew rotations from 4 to 6 months and reduce the number of shuttle flights to 4 per year; (2) consolidate the number of contracts and reduce government staff in station operations and sustaining engineering; (3) establish an Associate Administrator for space station at NASA Headquarters, with total responsibility for engineering and research; and (4) prioritize research to maximize limited resources. NASA implemented most of the recommendations, and the task force reported in December 2002 that significant progress had been made in nearly all aspects of the program, including establishing a new management structure and strategy, program planning and performance monitoring processes, and metrics. NASA was positioned to see results of this progress and to verify the sufficiency of its fiscal year 2003 budget to provide for the core complete version of the station when the Columbia accident occurred. In response to the task force’s recommendations, the Office of Management and Budget (OMB) imposed a 2-year “probation” period on NASA to provide time to reestablish the space station program’s credibility. 
Activities that are to take place during this period include establishing a technical baseline and a life-cycle cost estimate for the remainder of the program, prioritizing the core complete science program, and reaching agreement with the international partners on the station’s final configuration and capabilities. OMB, with input from NASA, is developing criteria that are to be used for measuring progress toward achieving a credible program. NASA provided its input to OMB in June 2003, but as of August 2003, OMB and NASA had not reached agreement on the success criteria. The grounding of the U.S. shuttle fleet has presented a number of operational challenges for the space station program. With the fleet grounded, NASA is heavily dependent on its international partners—especially Russia—for operations and logistics support for the space station. However, due to the limited payload capacity of the Russian space vehicles, on-orbit assembly has been halted. The program’s priority has shifted from station construction to maintenance and safety, but these areas have also presented significant challenges and could further delay assembly of the core complete configuration. While some on-board research is planned, it will be curtailed by the limited payload capacity of the Russian vehicles. The space shuttle fleet has been the primary means to launch key hardware to the station because of the shuttle’s greater payload capacity. At about 36,000 pounds, the shuttle’s payload capacity is roughly 7 times that of Russia’s Progress vehicle and almost 35 times the payload capacity of its Soyuz vehicle. With the shuttle fleet grounded, current space station operations are solely dependent on the Soyuz and Progress vehicles. Because the Soyuz and Progress vehicles’ payloads are significantly less than that of the U.S. 
shuttle fleet, operations are generally limited to transporting crew, food, potable water, and other items, as well as providing propellant resupply and reboosting the station to higher orbits. On-orbit assembly of the station has effectively ceased. Maintaining the readiness of ready-to-launch space station components has also presented a number of operational challenges, as in the following examples:

A logistics module that carries research facilities and life support items to the station and was scheduled and ready for launch in March 2003 had to be opened and unpacked (see fig. 2). Several racks were removed to provide proper preventive maintenance of the contents until they can be rescheduled on a future flight. In addition, crew-specific items had to be removed in anticipation of crew changes for the next shuttle flight. This module requires more than 2 months to be repacked and tested prior to launch.

One of the solar array wings scheduled for launch in May 2003 was approaching its 45-month prelaunch storage limit. Due to the launch delay, the wing had to be removed from the truss section and replaced with a new wing (see fig. 3). The removed wing was shipped to the contractor for deployment testing, which NASA hoped would result in a lengthening of the prelaunch storage limit to at least 60 months. According to NASA officials, preliminary results have been very positive, and the storage life certification could be extended to 8 years or more.

The performance of the batteries on the truss sections that were ready for launch has also raised concerns. Prolonged storage at ambient temperatures could shorten the overall life of the batteries. According to NASA officials, a process has been developed to charge the batteries periodically without removing them from the trusses during storage and then to provide a charge capability on the launch pad just prior to launch.
This process, however, will require developing a new device and expending resources not previously planned for this function. Station program managers are resolved to meet these challenges and have station components ready for flight when the next shuttle is ready for launch. In addition, NASA is using this longer storage time to determine the feasibility of adding new testing procedures. For example, NASA is developing tests to apply power to some elements and may also perform additional leak tests. The grounding of the shuttle fleet has also hampered NASA’s ability to correct known safety concerns on-board the station. For example, NASA has had to delay plans to fly additional shielding to the space station to adequately protect the on-orbit Russian Service Module from space debris. NASA’s analysis of the problem shows the probability of orbital space debris penetrating the module increases by 1.6 percent each year the shielding is not installed. NASA accepted this risk by issuing a waiver for the noncompliance with a safety requirement, but planned to have the shielding installed within 37 months of the module’s launch in July 2000. Six of the required 23 panels have been installed on the module, and NASA is negotiating with the Russian Aviation and Space Agency to manufacture the 17 remaining panels. NASA officials told us that they are studying alternatives for launching and installing the debris protection panels earlier than originally planned. In addition, there will be delays in analyzing the failure of an on-orbit gyro—one of four that maintain the station’s orbital stability and control. According to NASA, a shuttle flight planned for March of this year was to carry a replacement gyro to the station and return the failed unit for detailed analysis. Because the shuttle flight was canceled, the failed unit was not returned.
Consequently, NASA is unable at this time to provide a definitive analysis of the reasons for the failure of the unit or to know if the problem applies to the remaining units. NASA had planned to assemble the core complete configuration of the station by February 2004. NASA officials have maintained that assembly delays will be at least a “month for month” slip from the previous schedule, depending on the frequency of flights when the shuttles resume operations. At best, then, the core complete configuration would not be assembled before sometime in fiscal year 2005. While the space station crew’s current responsibility is primarily to perform routine maintenance, the two crew members will conduct some research on-board the station. An interim space station research plan developed by NASA details the amount and type of research that will be conducted. Further, NASA states that although the crew has been reduced from three to two members, more crew time will be available to carry out research tasks because no assembly or space walks are planned. Regardless, the limited payload capability of the Russian vehicles directly affects the extent of research that can be conducted, as illustrated in the following examples:

Outfitting of U.S. research facilities halted: Lacking the shuttle fleet’s greater lift capability, the amount of research hardware transported to and from the station has been significantly limited. With the fleet grounded, three major research facilities—which, according to NASA, complete the outfitting of the U.S. laboratory—could not be launched in March of this year, as planned. As of August 2003, 7 of the 20 planned research facilities are on orbit. NASA had planned to add 7 more facilities by January 2008. At this time, it is unknown when the full configuration of the 20 research facilities will be on-board the station.
Existing hardware failures: Because new and additional hardware cannot be transported, NASA has to rely more heavily on existing on-orbit science facilities—facilities that have already experienced some failures. For example, in November 2002, the Microgravity Science Glovebox—which provides an enclosed and sealed workspace for conducting experiments—failed and did not become operational until late March 2003. NASA officials state that there have also been failures of the existing refrigerator-freezers on-board the station, which serve as the main cold storage units until a larger space station cold temperature facility becomes available. The larger cold temperature facility was one of three facilities that had been planned for launch in March 2003.

Limited science material: Currently, there are no allocations for science materials to be transported to or from the space station by the Russian Soyuz and Progress vehicles. Based on the payload planning for these flights, however, there will be limited opportunities to launch small research projects. NASA officials state that the next two Progress flights could carry up to 40 kilograms and 100 kilograms, respectively, based on continuing payload planning. This would be much greater than the April 2003 Soyuz flight, which was able to carry 2.5 kilograms (about 5.5 pounds) of science material to the station for experiments in the current increment. As a result, research experiments for the current flight increment have been reduced. Specifically, only about two-thirds of new investigations and about three-quarters of ongoing investigations from previous increments will be accomplished on the current increment. Further, returning samples from these investigations will be delayed until the U.S. shuttle fleet returns to flight because of the Soyuz’s limited storage capacity. The investigations on the next increment are also in jeopardy, as there is no planned up-mass allocation for science material.
The transport of hardware and materials needed for research on the space station could be further constrained, depending on any safety modifications to the shuttle fleet based on recommendations of the Columbia Accident Investigation Board. If safety modifications to the shuttle increase the vehicle’s weight, the payload carrying capability for research could be adversely affected. For example, if NASA determines that the shuttle’s robotic arm is needed on future flights to address safety concerns, approximately 1,000 pounds of weight would be added, which would reduce the shuttle’s payload capacity for research equipment and other hardware. Since the program’s inception, we have repeatedly reported on the challenges NASA has faced in maintaining goals and objectives for the space station program. While NASA has conducted reassessments and independent reviews of the program in efforts to institute corrective actions that would ensure proper cost controls, difficulties in controlling costs have persisted. NASA budgets and funds the space station program at essentially a fixed annual average level of about $1.7 billion a year based on full cost accounting. NASA officials stated that, to date, they have not completely estimated the potential cost increases and future budget impacts incurred due to the grounding of the space shuttle fleet. However, they have identified a number of factors that will likely result in increased costs, including the continued maintenance and storage of ready-to-launch station components, the testing and recertification of some components, and the need to extend contracts to complete development and assembly of the station. NASA officials told us that the agency is assessing these potential cost and schedule impacts and how to mitigate the impacts within existing resources.
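The payload figures cited in this report lend themselves to a quick arithmetic cross-check. The short Python sketch below is illustrative only: the Progress and Soyuz capacities are implied by the report's stated ratios (roughly 7 and almost 35 times smaller than the shuttle's approximately 36,000-pound capacity), not official vehicle specifications.

```python
# Back-of-the-envelope check of the payload figures cited in this report.
# The Progress and Soyuz capacities below are implied by the report's
# ratios, not stated directly, so treat them as rough estimates.

SHUTTLE_PAYLOAD_LB = 36_000   # approximate shuttle payload capacity
PROGRESS_RATIO = 7            # shuttle is "roughly 7 times" Progress
SOYUZ_RATIO = 35              # and "almost 35 times" Soyuz
ROBOTIC_ARM_LB = 1_000        # weight of the shuttle robotic arm, if flown

implied_progress_lb = SHUTTLE_PAYLOAD_LB / PROGRESS_RATIO  # roughly 5,100 lb
implied_soyuz_lb = SHUTTLE_PAYLOAD_LB / SOYUZ_RATIO        # roughly 1,000 lb

# Payload available for research hardware if the robotic arm must be carried
shuttle_net_lb = SHUTTLE_PAYLOAD_LB - ROBOTIC_ARM_LB       # 35,000 lb

print(f"Implied Progress capacity: {implied_progress_lb:,.0f} lb")
print(f"Implied Soyuz capacity:    {implied_soyuz_lb:,.0f} lb")
print(f"Shuttle net of arm:        {shuttle_net_lb:,} lb")
```

The implied figures, on the order of 5,000 pounds for Progress and 1,000 pounds for Soyuz, help explain why on-orbit assembly halted once the shuttle fleet was grounded.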
In fiscal year 2003, NASA received $1.85 billion in appropriated funds for the space station and has requested $1.71 billion for fiscal year 2004 (see table 1). The funding reduction in fiscal year 2004 was based on near completion of the hardware development for the U.S. core configuration and the transition to on-orbit operations. NASA estimates that after the last year of development, the annual cost to operate the station will average $1.5 billion over a 10-year useful life. This estimate does not include all funding requirements, such as costs associated with necessary upgrades to preclude on-orbit hardware obsolescence, launch costs, and other support costs that are captured in other portions of NASA’s budget. NASA officials told us that soon after the Columbia accident, they published ground rules and assumptions that stated there would be no significant changes to the station’s budget execution and that budget requests would be maintained at current levels until the shuttle returns to flight. At that point, NASA program officials stated, they will begin to evaluate the impact that new developments, enhancements, inventories, and staffing needed to sustain and operate the space station will have on future budget submissions, including requests for supplemental appropriations, and on the execution of the station funding, including program reserves. NASA’s strategy for the station program following the Columbia accident has been to continue developing hardware as planned, to deliver these components to Kennedy Space Center as scheduled, and to prepare them for launch when the shuttle fleet returns to flight. Through contingency planning efforts, NASA has identified additional costs to be incurred by the space station program office as a result of these continuing developmental operations.
However, these additional costs are based on an assumption that the shuttle will return to flight within 12 months of the Columbia accident, an assumption that is subject to change based on more definitive information concerning the status of the shuttle fleet’s operations. NASA officials state they have not finalized plans or risk assessments for continued assembly and operation of the space station if the shuttle fleet is grounded for a longer period of time. NASA has also completed a management decision analysis that anticipates additional costs to be incurred in keeping a crew on-board the station while the shuttle fleet is grounded. The analysis is based primarily on management decisions regarding crew rotation and payload issues that involve shifting cargo and the use of consumables, such as potable water. According to NASA officials, the station program office identified other factors that could also result in cost increases, but it has not fully quantified these costs:

disassembly and reassembly of component parts;

unpacking and repacking equipment from the logistics module that was opened;

storage of station components that are ready for launch;

maintaining battery life;

unfurling and testing solar array wings;

additional travel to Russia to facilitate discussions on Soyuz and Progress vehicles’ schedules and payloads and on export control issues;

additional resupply flights; and

retention of some critical skills necessary to complete development and assembly of the station.

In addition to the operational challenges facing NASA, funding and partner agreements present significant challenges. While long-term plans are not well defined at this time, alternative funding may be needed to sustain the station, let alone achieve the station’s intended goals. At the same time, NASA and its partners must develop a plan for assembling the partners’ modules and reaching agreement on the final station configuration.
In addition, since the final on-orbit configuration is likely to be different from the configuration when the Intergovernmental Agreements were signed in 1998, NASA officials state the partners may have to adjust agreements that cover the partners’ responsibility for shared common operations costs. Depending on the duration of the shuttle fleet’s grounding, the space station program may need to consider funding alternatives to sustain the station. International agreements governing the space station partnership specify that the United States, Canada, Europe, and Japan are responsible for funding the operations and maintenance of the elements each contributes, the research activities it conducts, and a share of common operating costs. Under current planning, NASA will fund the entire cost of common supplies and ground operations and then be reimbursed by the other partners for their shares. Depending on contributions made by the partners while the shuttle fleet is grounded, the share that each partner contributes to the common operations costs may have to be adjusted and could result in NASA’s paying a larger share of those costs. For example, the European Automated Transfer Vehicle is scheduled to begin flying in September 2004. If that vehicle takes on a larger role in supporting the station than currently planned, the European partners’ share of common operations costs could be reduced, with the other partners paying more. Station requirements dictate that some Progress launches be accelerated and, depending on how long the shuttle fleet is grounded, may require additional flights. Russia maintains that it can provide additional launches, and the Russian Aviation and Space Agency is negotiating with its government in an effort to obtain the necessary funding. If those negotiations are unsuccessful, the other partners may have to provide the needed funding. However, the U.S. may be prohibited from making certain payments due to a statutory restriction.
NASA is engaged in discussions with the other partners on how to sustain operations if additional flights are required. Further, following the release of the Columbia Accident Investigation Board’s report and recommendations, NASA and the partnership must agree on a final configuration of the on-orbit station that will be acceptable to all parties. Prior to the Columbia accident, options for the final on-orbit configuration were being studied, and a decision was planned for December 2003. NASA officials told us the process has been delayed, and NASA now expects the partners to agree on a program action plan in October 2003 that will lead to an agreement on the final on-orbit configuration. During a July 2003 meeting, international partner space agency leaders from the U.S., Europe, Canada, Japan, and Russia expressed support for the space station program. The leaders recognized the Russian Aviation and Space Agency for its support of station operations, logistics, crew transportation, and crew rescue while the shuttle fleet is grounded. The partners also expressed their support for NASA’s return-to-flight strategy, the resumption of station assembly, and the opportunity to enhance the use of the station for conducting world-class research. This is one of the most challenging periods in the history of the international space station program. NASA officials acknowledge that the loss of the space shuttle Columbia poses cost and schedule risks that have direct implications for completing the development and assembly of the station and the research that is to be conducted on-board, as well as for NASA’s budgets for fiscal year 2004 and beyond. However, NASA officials told us that it is too soon to determine the magnitude and costs of delayed assembly and the implications of any recommendations from the Columbia Accident Investigation Board for the space station.
Until the shuttle return-to-flight date is known, it is difficult to determine how and when potential cost and schedule increases will impact the station program or the agency as a whole. In written comments on a draft of this report, NASA’s Deputy Administrator said that the agency agrees with the content and conclusions in the report. He said that the space station program is taking the steps necessary to be ready to resume assembly immediately upon the space shuttle’s return to flight and to eliminate or offset cost impacts. He also pointed out that the international partners continue to collaborate on how to best support near-term space station on-orbit operations until the space shuttle returns to flight. NASA offered some technical comments on the report, which have been incorporated as appropriate. To describe the current status of the space station program in terms of on-orbit assembly and research, we reviewed NASA’s plans for completing station assembly prior to the Columbia accident and compared those plans to the agency’s actions following the accident to continue on-board operations while the shuttle fleet is grounded. To assess the planned research program, we reviewed NASA’s efforts to prioritize research on-board the station as well as plans to continue research while the shuttle fleet is grounded. We also interviewed NASA officials regarding the agency’s efforts to maintain the station and continue research following the Columbia accident. To determine the cost implications for the program with the grounding of the shuttle fleet, we reviewed NASA’s fiscal year 2003 budget amendment and appropriations as well as the agency’s fiscal year 2004 budget request. We also reviewed NASA’s assessments of potential cost impacts to the program and plans for mitigating those potential impacts.
In addition, we reviewed NASA’s plans and interactions with its international partners to secure support for the station while the shuttle fleet is grounded and to reach agreement on a final station configuration that will be acceptable to all partners. We interviewed NASA officials with responsibility for estimating and controlling space station costs, managing space station research, and dealing with the international partners. To identify program challenges facing the space station program, we reviewed actions being taken by NASA to ensure continued safe operations of the station, toured the Space Station Processing Facility to view flight-ready hardware in storage, and reviewed NASA’s actions in response to the International Space Station Management and Cost Evaluation Task Force report. We interviewed space station program officials to obtain their views on the challenges facing the program. To accomplish our work, we visited NASA headquarters, Washington, D.C.; Johnson Space Center, Texas; and Kennedy Space Center, Florida. We also attended two meetings of the NASA Advisory Council. We conducted our work from November 2002 through August 2003 in accordance with generally accepted government auditing standards. Unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the NASA Administrator; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. In addition, the report will be available on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix III. NASA: Major Management Challenges and Program Risks. GAO-03-114. Washington, D.C.: January 2003. Space Station: Actions Under Way to Manage Cost, but Significant Challenges Remain. GAO-02-735.
Washington, D.C.: July 17, 2002. NASA: Compliance With Cost Limits Cannot Be Verified. GAO-02-504R. Washington, D.C.: April 10, 2002. NASA: Leadership and Systems Needed to Effect Financial Management Improvements. GAO-02-551T. Washington, D.C.: March 20, 2002. NASA: International Space Station and Shuttle Support Cost Limits. GAO-01-100R. Washington, D.C.: August 31, 2001. Space Station: Inadequate Planning and Design Led to Propulsion Module Project Failure. GAO-01-633. Washington, D.C.: June 20, 2001. Space Station: Russian-Built Zarya and Service Module Compliance With Safety Requirements. GAO/NSIAD-00-96R. Washington, D.C.: April 28, 2000. Space Station: Russian Compliance with Safety Requirements. GAO/T-NSIAD-00-128. Washington, D.C.: March 16, 2000. Space Station: Russian Commitment and Cost Control Problems. GAO/NSIAD-99-175. Washington, D.C.: August 17, 1999. Space Station: Cost to Operate After Assembly Is Uncertain. GAO/NSIAD-99-177. Washington, D.C.: August 6, 1999. Space Station: Status of Russian Involvement and Cost Control Efforts. GAO/T-NSIAD-99-117. Washington, D.C.: April 29, 1999. Space Station: U.S. Life-Cycle Funding Requirements. GAO/T-NSIAD-98-212. Washington, D.C.: June 24, 1998. International Space Station: U.S. Life-Cycle Funding Requirements. GAO/NSIAD-98-147. Washington, D.C.: May 22, 1998. Space Station: Cost Control Problems. GAO/T-NSIAD-98-54. Washington, D.C.: November 5, 1997. Space Station: Deteriorating Cost and Schedule Performance Under the Prime Contract. GAO/T-NSIAD-97-262. Washington, D.C.: September 18, 1997. Space Station: Cost Control Problems Are Worsening. GAO/NSIAD-97-213. Washington, D.C.: September 16, 1997. NASA: Major Management Challenges. GAO/T-NSIAD-97-178. Washington, D.C.: July 24, 1997. Space Station: Cost Control Problems Continue to Worsen. GAO/T-NSIAD-97-177. Washington, D.C.: June 18, 1997. Space Station: Cost Control Difficulties Continue. GAO/T-NSIAD-96-210. Washington, D.C.: July 24, 1996.
Space Station: Cost Control Difficulties Continue. GAO/NSIAD-96-135. Washington, D.C.: July 17, 1996. Space Station: Estimated Total U.S. Funding Requirements. GAO/NSIAD-95-163. Washington, D.C.: June 12, 1995. Space Station: Update on the Impact of the Expanded Russian Role. GAO/NSIAD-94-248. Washington, D.C.: July 29, 1994. Space Station: Impact of the Expanded Russian Role on Funding and Research. GAO/NSIAD-94-220. Washington, D.C.: June 21, 1994. Individuals making key contributions to this report included Jerry Herley, James Beard, Fred Felder, Lynn LaValle, Rick Cederholm, Josh Margraf, and Karen Sloan.
In 1998, the National Aeronautics and Space Administration (NASA) and its international partners--Canada, Europe, Japan, and Russia--began on-orbit assembly of the International Space Station, envisioned as a permanently orbiting laboratory for conducting scientific research under nearly weightless conditions. Since its inception, the program has experienced numerous problems, resulting in significant cost growth and assembly schedule slippages. Following the loss of Columbia in February 2003, NASA grounded the U.S. shuttle fleet, putting the immediate future of the space station in doubt, as the fleet, with its payload capacity, has been key to the station's development. If recent discoveries about the cause of the Columbia's disintegration require that the remaining shuttles be redesigned or modified, delays in the fleet's return to flight could be lengthy. In light of these uncertainties, concerns about the space station's cost and progress have grown. This report highlights the current status of the program in terms of on-orbit assembly and research; the cost implications for the program with the grounding of the shuttle fleet; and significant program management challenges, especially as they relate to reaching agreements with the international partners. Although the effects of the Columbia accident on the space station are still being explored, it is clear that the station will cost more, take longer to complete, and face further delays in the achievement of key research objectives. Due to the limited payload capacity of Russia's Soyuz and Progress vehicles--which the program must now rely on to rotate crew and provide logistics support--the station is currently in a survival mode. On-orbit assembly is at a standstill, and the on-board crew has been reduced from three to two members. NASA officials maintain that delays in on-orbit assembly will be at least a "month for month" slip from the previous schedule.
However, these delays have presented a number of operational challenges. For example, several key components that were ready for launch when the Columbia accident occurred have been idle at Kennedy Space Center and now require additional maintenance or recertification before they can be launched. Moreover, certain safety concerns on-board the station cannot be addressed until the shuttle fleet's return to flight. The grounding of the shuttle fleet has also further impeded the advancement of the program's science investigations. Specifically, the limited availability of research facilities and new science materials has constrained on-board research. NASA has yet to estimate the potential costs and future budget impacts that will result from the grounding of the shuttle fleet. Throughout the life of the program, however, maintaining goals and objectives for the space station has been a challenge for NASA. NASA has analyzed anticipated costs that the program will incur to keep a limited crew on board the station until the U.S. shuttles resume flight, and officials have stated that there would not be significant changes to the execution of the current budget and that the fiscal year 2004 budget request would remain at current levels. NASA plans to continue to develop hardware and deliver station elements to Kennedy Space Center to be prepared for launch as previously scheduled. However, a number of factors will likely result in increased costs, including costs to maintain and store station components and costs for extending contracts. Important decisions regarding funding and partner agreements still need to be made. For example, agreements that cover the partners' responsibility for shared common operations costs may need to be adjusted, an adjustment that could result in NASA's paying a larger share of these costs. In addition, logistics flights using Russian vehicles may need to be accelerated to ensure continued operations on-board the station. 
Russia has stated that additional flights are possible, but it could need additional funding from the other partners. However, the United States may be prohibited from providing certain payments due to a statutory restriction. NASA and its partners must also develop a plan for assembling the partners' modules and reaching agreement on the final station configuration. The partners were on a path to agree on final configuration by December 2003, but this process has been delayed by the Columbia accident.
Mortgage insurance, a commonly used credit enhancement, protects lenders against losses in the event of default. Lenders usually require mortgage insurance when a homebuyer has a down payment of less than 20 percent of the value of the home. FHA, VA, the USDA’s Rural Housing Service (RHS), and private mortgage insurers provide this insurance. In 2003, lenders originated $3.8 trillion of single-family mortgage loans, of which more than 60 percent were for refinancing. Of all insured loans (including refinancings) originated in 2003, private companies insured about 64 percent, FHA insured about 26 percent, VA insured about 10 percent, and RHS insured a very small number. Private mortgage insurers generally offer first-loss coverage; that is, they will pay all the losses from a foreclosure up to a stated percentage of the claim amount. Generally, these insurers limit the coverage that they offer to between 25 percent and 35 percent of the claim amount. The insurance offered by the government varies in the amount of lender-incurred losses it will cover. For example, VA guarantees losses up to 25 percent to 50 percent of the loan, while FHA’s principal single-family insurance program insures almost 100 percent. FHA plays a particularly large role in certain market segments, including low-income and first-time homebuyers. During fiscal years 2001 to 2003, FHA insured a total of about 3.7 million mortgages with a total value of about $425 billion. FHA insures most of its mortgages for single-family housing under its Mutual Mortgage Insurance Fund. To cover lenders’ losses, FHA collects insurance premiums from borrowers. These premiums, along with proceeds from the sale of foreclosed properties, pay for claims that FHA pays lenders as a result of foreclosures. Fannie Mae and Freddie Mac are government-sponsored private corporations with stated public missions, chartered by Congress to provide a continuous flow of funds to mortgage lenders and borrowers.
Fannie Mae and Freddie Mac purchase mortgages from lenders across the country and finance their mortgage purchases through borrowing or issuing mortgage-backed securities that are sold to investors. They purchase single-family mortgages up to the “conforming loan limit,” which for 2005 was set at $359,650. Their purchase guidelines and underwriting standards have a dominant role in determining the types of loans that primary lenders will originate in the conventional conforming market. Members of the conventional mortgage market (such as private mortgage insurers, Fannie Mae, Freddie Mac, and large private lenders) have been increasingly active in supporting low and no down payment mortgage products. Many private mortgage insurers will now insure a mortgage up to 100 percent of the value of the housing being purchased. Fannie Mae and Freddie Mac, working together with the private mortgage insurers, have become more aggressive in developing high LTV products that target low- and moderate-income or first-time homebuyers while also developing high LTV products designed for use by borrowers across the income spectrum. Figure 1 shows the history of the introduction of low and no down payment mortgage products at three LTV levels. FHA and VA have been backing low and no down payment mortgages for many years, and Fannie Mae and Freddie Mac permitted conventional lenders to sell them mortgages with an LTV of 97 percent in 1994 and 1998, respectively. Freddie Mac’s and Fannie Mae’s no down payment mortgage products were introduced in 2000. As shown in figure 2, a greater proportion of the FHA-insured and VA-guaranteed mortgage loans had low down payments than was the case for loans purchased by Fannie Mae and Freddie Mac. Further, the number of loans FHA insured in 2000 that had LTVs greater than 95 percent exceeded the total number of loans with such LTVs that were guaranteed by VA and purchased by Fannie Mae and Freddie Mac combined.
While relatively few loans purchased by Fannie Mae or Freddie Mac had low or no down payments, in recent years the GSEs have purchased relatively more of these loans than in the past. As shown in figure 3, both Fannie Mae and Freddie Mac, during the years 1997-2000, acquired a higher proportion of mortgages with high LTVs than in previous years. To do this, they increased the number of product options available to borrowers with limited down payment funds. The mortgage industry is increasingly using mortgage scoring and automated underwriting. During the 1990s, private mortgage insurers, the GSEs, and larger financial institutions developed automated underwriting systems. Mortgage scoring is a technology-based tool that relies on the statistical analysis of millions of previously originated mortgage loans to determine how key attributes such as the borrower’s credit history, the property characteristics, and the terms of the mortgage note affect future loan performance. Automated underwriting refers to the process of collecting and processing the data used in the underwriting process. FHA has developed and recently implemented a mortgage scoring tool, called the FHA TOTAL Scorecard, to be used in conjunction with existing automated underwriting systems. As of 2002, more than 60 percent of all mortgages were being underwritten by automated underwriting systems, and this percentage continues to rise. The mortgage industry also uses credit scoring models for estimating the credit risk of individuals; these models are based on information such as payment patterns. Statistical analyses identifying the characteristics of borrowers who were most likely to make loan payments have been used to create a weight or score associated with each of the characteristics. According to Fair, Isaac and Company sources, credit scores are often called “FICO scores” because most credit scores are produced from software developed by Fair, Isaac and Company.
FICO scores generally range from 300 to 850, with higher scores indicating better credit history. The lower the credit score, the more compensating factors lenders might require to approve a loan. These factors can include a higher down payment and greater borrower reserves. The characteristics and standards for low and no down payment mortgage products vary among mortgage institutions. Standards to determine a borrower’s eligibility differ from lender to lender. For example, one mortgage institution might have a limit on household income, whereas another might not. Each of these mortgage products requires some form of borrower investment. Most mortgage institutions use automated systems to underwrite loans but differ on how they consider factors such as the borrower’s credit score and credit history. Finally, mortgage institutions also try to mitigate the increased risk associated with these products by employing tools like prepurchase counseling and greater insurance coverage. Each mortgage institution we studied limits in some way the mortgages or the borrowers that may be eligible for their low and no down payment products, but the specific limits and criteria differ among institutions. Fannie Mae and Freddie Mac are constrained in the size of the mortgages they may purchase. Specifically, the Housing and Community Development Act of 1980 requires a limit (conforming loan limit) on the size of mortgages that can be purchased by either Fannie Mae or Freddie Mac. For 2005, the conforming loan limit for Fannie Mae and Freddie Mac is $359,650 for most of the nation. FHA is also limited in the size of mortgages it may insure. The FHA loan limit varies by location and property type, depending on the cost of homes in an area and the number of units in a property. Thus, FHA’s loan limit may be as high as 87 percent of the conforming loan limit, or $312,895 in 2005; or as low as 48 percent of the conforming loan limit, or $172,632 in 2005. 
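The 2005 loan-limit relationships described here reduce to simple arithmetic. The sketch below uses only the figures cited in this report; the published FHA and VA dollar figures reflect the agencies' rounding conventions.

```python
# 2005 loan-limit arithmetic using only the figures cited in this report.
CONFORMING_LIMIT = 359_650  # Fannie Mae / Freddie Mac conforming loan limit, 2005

# FHA limits are 87 percent (high-cost areas) and 48 percent (low-cost areas)
# of the conforming limit; published figures reflect agency rounding.
fha_high_cost_limit = CONFORMING_LIMIT * 0.87  # ~312,895.50 (published as $312,895)
fha_low_cost_limit = CONFORMING_LIMIT * 0.48   # ~172,632 (published as $172,632)

# The VA guaranty is 25 percent of the conforming limit; lenders generally lend
# up to four times the guaranty, so the practical VA ceiling equals the
# conforming limit itself.
va_guaranty = CONFORMING_LIMIT * 0.25          # 89,912.50 (published as $89,913)
va_practical_ceiling = 4 * va_guaranty         # 359,650

print(fha_high_cost_limit, fha_low_cost_limit, va_practical_ceiling)
```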
In addition, FHA has higher limits in Alaska, Hawaii, Guam, and the U.S. Virgin Islands because these are considered high-cost areas. Although VA does not have a mortgage limit, lenders generally limit VA mortgages to four times the VA guaranty amount, which is now set at 25 percent of the conforming loan limit. Since the maximum guaranty currently is legislatively set at $89,913, VA-guaranteed mortgages will rarely exceed $359,650. Moreover, while FHA does not restrict eligibility by borrower income, other mortgage institutions may limit eligibility by income and other measures. Most state housing finance agencies target their low and no down payment products to first-time homebuyers. Some mortgage institutions providing affordable low and no down payment products also limit the loans to households with income at or below area median levels. For example, USDA’s RHS, in its section 502 Guaranteed Loan program, does not guarantee loans to individuals with incomes exceeding 115 percent of the area median income or 115 percent of the median family income of the United States. We also found that the Web sites of many state housing finance agencies show that their mortgage products include income limits as well as sales price limits and, in some cases, designated “targeted areas” within a state. Table 1 illustrates some of the major similarities and differences in the eligibility criteria of FHA and other mortgage institutions. Fannie Mae and Freddie Mac affordable mortgage products primarily target low-to-moderate-income and first-time homebuyers. Freddie Mac and RHS allow a borrower to purchase a home containing one unit, while FHA, VA, and Fannie Mae allow a borrower to purchase properties that have up to four units with one mortgage. 
VA stipulates that if the veteran must depend on rental income from the property to qualify for the mortgage, the borrower must show proof that he or she has the background or qualifications to be successful as a landlord and have enough cash reserves to make the mortgage payments for at least 6 months without help from the rental income. With regard to mortgage type, many mortgage institutions permit 30-year fixed-rate mortgages. Some also permit adjustable rate mortgages (ARMs). Most low and no down payment mortgage products require some form of borrower investment, either a borrower contribution or cash reserve, as a way of reducing risk and assuring that the borrower has a stake in the property. Low down payment products offered by FHA, Fannie Mae, Freddie Mac, and private insurers require a cash investment of at least 3 percent from the borrower. No down payment mortgage products offered by VA, RHS, Fannie Mae, Freddie Mac, and some private insurers require either no down payment or a minimum amount (such as $500 in Fannie Mae’s MyCommunityMortgage program). Many institutions permit down payment assistance. FHA stipulates that the gift donor may not be a person or entity with an interest in the sale of the property, such as the seller, real estate agent or broker, builder, or entity associated with them. FHA mortgagee letters state that “gifts from these sources (seller, builder, etc.) are considered inducements to purchase and must be subtracted from the sales price.” However, FHA allows nonprofit agencies that may receive contributions from the seller to provide down payment assistance to the borrower. In contrast, Fannie Mae, Freddie Mac, and some of the private insurers generally do not allow down payment funds, either directly or indirectly, from an interested or seller-related party to the transaction. Fannie Mae and Freddie Mac officials told us that such seller-related contributions could contribute to an overvaluation of the price of the property. 
Even where borrowers pay no down payment, they often must pay a minimum percentage of closing costs from their own funds. FHA requires that borrowers pay 3 percent of the total loan amount toward the purchase of the home. This contribution may be used for the down payment or closing costs. Thus, FHA borrowers may finance closing costs, within limits. FHA borrowers may also finance their insurance premium. Unlike FHA, some mortgage institutions do not allow financing of the closing costs and the insurance premiums in the first mortgage. VA generally allows payment of all closing costs to be negotiated while restricting those that may be charged to the borrower. VA allows borrowers to finance their insurance premium, called the funding fee. In RHS’s section 502 Guaranteed Loan program, borrowers may pay closing costs, but they are not required to do so and may be allowed to finance the closing costs and their insurance premium, called the Guarantee Fee. Freddie Mac, in its no down payment product, requires a 3 percent borrower contribution to be used for closing costs, financing costs, or prepaids and escrows, all of which can come from gifts or property seller contributions. FHA, RHS, VA, Fannie Mae, and Freddie Mac differ somewhat in terms of their maximum allowable LTV ratios and how they calculate this ratio. LTV ratios are important because of the direct relationship that exists between the amount of equity borrowers have in their homes and the risk of default. The higher the LTV ratio, the less cash borrowers will have invested in their homes and the more likely it is that they may default on their mortgage obligations, especially during times of economic hardship. The Omnibus Budget Reconciliation Act of 1990 (Pub. L. No. 101-508) established LTV limits for FHA-insured mortgages of 98.75 percent if the home value is $50,000 or less, or 97.75 percent if the home value is in excess of that. 
However, because FHA allows financing of the up-front insurance premium, borrowers can receive a mortgage with an effective LTV ratio of close to 100 percent. In table 2, we calculate the effective LTV ratio for selected low and no down payment products. The example assumes a $100,000 purchase price (appraisal value) and a 30-year fixed-rate mortgage. It also assumes average closing costs of about 2.1 percent of the sales price. FHA has a formula to calculate the maximum loan amount based on a percentage of the purchase price of the home. FHA does not have a down payment requirement but instead has what it calls a minimum cash investment requirement. This investment requirement can be used to pay the down payment and, in some cases, the closing costs. Not shown are the actual out-of-pocket expenses to the borrower, which could vary based on the individual transaction, whether the investment requirement was split between the closing costs and the down payment, and whether the borrower opted to finance the up-front premium. In addition, some of the affordable conventional mortgage products allow for subordinate financing in the form of secondary mortgages to pay for a down payment and/or closing costs. These secondary mortgages allow for a total effective LTV of up to 105 percent. When underwriting mortgages, FHA and other mortgage institutions require that lenders examine a borrower’s ability and willingness to repay the mortgage debt. Lenders for low and no down payment mortgages may use automated underwriting systems examining the borrower’s credit score or creditworthiness, qualifying ratios, and cash reserves. In some cases, they use manual underwriting to accommodate nontraditional credit histories. By screening the majority of applications with automated systems, underwriters have more time to review special cases with manual underwriting. 
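The effective-LTV arithmetic described earlier, in which a financed up-front premium pushes a statutory 97.75 percent LTV toward 100 percent, can be sketched as follows. This is a simplified illustration, not FHA's actual maximum-mortgage formula, and the 1.5 percent up-front premium rate is an assumption for illustration only.

```python
# Simplified effective LTV: base loan plus a financed up-front premium,
# divided by the property value. Not FHA's actual maximum-mortgage formula.
def effective_ltv(value, base_ltv, upfront_premium_rate, finance_premium=True):
    loan = value * base_ltv
    if finance_premium:
        loan += loan * upfront_premium_rate  # premium rolled into the balance
    return loan / value

# $100,000 home at the statutory 97.75 percent LTV cap, with an assumed
# 1.5 percent up-front premium financed into the loan:
print(f"{effective_ltv(100_000, 0.9775, 0.015):.2%}")  # 99.22%
```

As the report notes, financing the premium brings the effective LTV close to 100 percent even though the statutory cap is below it.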
Many mortgage institutions use credit scores in assessing mortgage applicants through their automated underwriting systems. For standard products, institutions tend to rely on automated underwriting, which develops a mortgage score based on various factors, including credit score, and make a loan decision based on that score. However, in some instances, mortgage institutions set credit score minimums for some low and no down payment products. In some instances, these credit score minimums exist within the automated underwriting system; in others, they exist only in products that are underwritten manually. FHA does not require a credit score minimum, nor do VA and RHS. These three governmental agencies examine the overall pattern of credit behavior rather than rely on one particular credit score. All three agencies allow a good deal of judgment and interpretation on the part of the underwriters in determining the creditworthiness of the prospective borrower. Fannie Mae does not use externally derived credit scores for its loan products that use automated underwriting but instead relies on the credit history of the borrower. Based on our review of private mortgage insurers’ Web sites, no down payment products insured by these companies have minimum credit score requirements ranging from 660 to 700. Low and no down payment products that impose credit score minimums use a variety of cutoff scores. Many mortgage industry sources consider borrowers with credit scores of 720 or higher as having excellent credit. One study that focused on issues related to homeownership and cited extensive interviews with leading experts in government and industry found that mortgage applicants with scores above 660 are likely to have acceptable credit. 
On the other hand, for applicants with FICO scores between 620 and 660, mortgage institutions typically perform more careful underwriting, scrutinizing many factors. FICO scores under 620 indicate higher risk, and such applicants are unlikely to be approved by conventional lenders unless compensating factors are present. Some of these mortgage institutions may, under some circumstances, accept a lower credit score if the borrower provides additional compensating factors (such as 2 months of cash reserves) that would indicate a lower risk on the part of the borrower. Mortgage institutions might also accept a lower credit score if they were receiving additional compensation for the risk, such as a mortgage originator receiving a higher interest rate or a mortgage insurer getting a higher insurance premium. Some mortgage institutions state in their underwriting guidance that FICO scores, together with the LTV ratio, determine in part the borrower’s minimum contribution. For example, one private mortgage insurer allows borrowers with credit scores equal to or greater than 700 to have a minimum borrower contribution of 0 percent on a 100 percent LTV loan. For this same insurer, a borrower with a credit score between 660 and 699 would have a minimum borrower contribution of 3 percent on a 100 percent LTV loan. Many mortgage institutions use two qualifying ratios as factors in determining whether a borrower will be able to meet the expenses involved in homeownership. The “housing-expense-to-income ratio” examines a borrower’s expected monthly housing expenses as a percentage of the borrower’s monthly income, and the “total-debt-to-income ratio” looks at a borrower’s expected monthly housing expenses plus long-term debt as a percentage of the borrower’s monthly income. Lenders who do business with Fannie Mae or Freddie Mac place more emphasis on the total-debt-to-income ratio. 
Total debt includes monthly housing expenses and the total of other monthly obligations, such as auto loans, credit cards, alimony, or child support. The guidelines for manual underwriting are discussed below; automated underwriting systems weight the qualifying ratios, as well as numerous other factors, in assessing the borrower’s ability to meet the expenses involved in homeownership. Unless there are compensating factors, FHA’s monthly housing-expense-to-income ratio is set at a maximum of 29 percent, while the monthly total-debt-to-income ratio is, at most, 41 percent of the borrower’s stable monthly income. The requirements set by Fannie Mae, Freddie Mac, and the private insurers on the monthly housing-expense-to-income ratio vary greatly. Some have set lower thresholds, such as Freddie Mac, which uses as a guideline that the monthly housing-expense-to-income ratio should not be greater than 25 percent to 28 percent, with exceptions for some products. Others, such as some private insurers, have set higher thresholds than FHA has set, such as 33 percent. Some mortgage institutions set thresholds on the total-debt-to-income ratio that are lower than FHA’s threshold. Conventional mortgages that are manually underwritten to Fannie Mae or Freddie Mac standards are set at a benchmark total-debt-to-income ratio of 36 percent of the borrower’s stable monthly income, compared with FHA’s 41 percent. However, Fannie Mae and Freddie Mac state that they occasionally specify a higher allowable debt-to-income ratio for a particular mortgage loan if compensating factors are present. Cash reserves represent the amount of funds a borrower has after closing on the loan. Generally, the reserves required of borrowers are expressed in terms of the number of monthly mortgage payments they may comprise. Conceptually, they represent the ability of the borrower to repay the mortgage out of accumulated funds. 
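The two qualifying ratios discussed above can be computed directly. The borrower figures below are invented for illustration; the 29 and 41 percent thresholds are FHA's manual-underwriting benchmarks absent compensating factors, as described in this report.

```python
# Front-end (housing-expense-to-income) and back-end (total-debt-to-income)
# qualifying ratios, checked against FHA's 29/41 percent benchmarks.
def qualifying_ratios(monthly_income, housing_expense, other_monthly_debt):
    front = housing_expense / monthly_income
    back = (housing_expense + other_monthly_debt) / monthly_income
    return front, back

# Hypothetical borrower: $5,000/month income, $1,400 housing expense,
# $500 in other monthly obligations (auto loan, credit cards, etc.).
front, back = qualifying_ratios(5_000, 1_400, 500)
meets_fha = front <= 0.29 and back <= 0.41
print(f"front={front:.0%} back={back:.0%} within FHA benchmarks: {meets_fha}")
```

The same borrower would fall just under the 36 percent benchmark that Fannie Mae and Freddie Mac use for manually underwritten conventional mortgages.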
Many mortgage institutions, including FHA, consider cash reserves a compensating factor that reduces the risk of delinquency. FHA, unlike conventional lenders who do business with the GSEs and the private insurers, does not require cash reserves for its low down payment product. VA and RHS also do not require cash reserves. Generally, the GSEs and the private insurers with whom we spoke required cash reserves equal to 1 or 2 monthly mortgage payments for low and no down payment products. Some of the mortgage institutions we spoke with used various tools to mitigate risk. For example, most mortgage institutions offering affordable low and no down payment mortgages to first-time homebuyers require prepurchase counseling, and some require postpurchase counseling. These include lenders working with Fannie Mae, Freddie Mac, private insurers, and state housing finance agencies. Homeownership counseling for first-time homebuyers takes a variety of forms. There are counseling programs administered by government agencies, lenders, nonprofit organizations, and the private insurers, among others. These programs are delivered through many different avenues, including classroom, home study, individual counseling, and telephone. The content of the counseling programs also varies significantly across each of these administrative and delivery mechanisms, as does the timing of the counseling—which can be either prior to closing or postpurchase (when the borrower becomes delinquent on a payment). More specifically, Freddie Mac, in each of its Affordable Gold products (intended for first-time homebuyers who generally earn 100 percent or less of area median income), requires that at least one qualifying borrower in the transaction receive prepurchase counseling. Lenders must document the organization that administered the counseling and how the counseling was delivered. 
Freddie Mac exempts from the counseling requirement those borrowers who have cash reserves after closing equal to at least two monthly mortgage payments. Similarly, Fannie Mae, in its MyCommunityMortgage product, requires prepurchase counseling for first-time homebuyers when they are purchasing a one-unit property. If they are purchasing a two- to four-unit property, landlord counseling is required. Fannie Mae also requires postpurchase counseling for borrowers under certain low down payment programs who become delinquent on their payments early in the mortgage. Some private insurers require pre- and postpurchase counseling, but some only recommend it. For example, two private insurers require pre- and postpurchase counseling with all of their affordable low and no down payment products, and they provide most of this counseling themselves. On the other hand, another private insurer recommends, but does not require, prepurchase counseling for first-time homebuyers in its low and no down payment products. However, this insurer’s underwriting guidance states that prepurchase counseling is considered a positive underwriting factor. It also recommends postpurchase counseling, particularly for borrowers who are experiencing financial difficulties but have a good chance of overcoming their financial problems and maintaining homeownership. FHA, unlike most low and no down payment mortgage institutions serving affordable first-time homebuyers, does not require prepurchase counseling. VA also does not require prepurchase counseling but considers it to be a compensating factor in improving creditworthiness. RHS encourages lenders to offer or provide for homeownership counseling, and lenders may require first-time homebuyers to undergo such counseling if it is reasonably available in the local area. FHA, VA, RHS, and the private insurers also differ in the amount of insurance or guaranty they provide to protect lenders against the losses associated with mortgages that go to foreclosure. 
While FHA essentially protects against almost 100 percent of the losses associated with a foreclosed mortgage, VA, RHS, and the private insurers protect against a portion of the loss. Private insurers generally provide protection to lenders for only a portion of losses. This protection is usually expressed as a percentage of the claim amount. For example, an insurer may provide insurance coverage of 30 percent, meaning that the insurer will cover losses up to 30 percent of the claim amount. In exchange for offering this insurance, the insurer charges borrowers a premium. Some of the insurers with whom we spoke, as well as the GSEs, noted that they require higher insurance coverage for mortgages with lower down payments. For example, one insurer said that the amount of insurance coverage tends to be 35 percent for no down payment mortgages, in contrast to 30 percent for low down payment mortgages. Private insurers noted that they charge higher premiums or require more stringent underwriting when they provide higher insurance coverage. For example, one private insurer stated that its monthly premium rates to a borrower increase about 15 percent for every 5 percentage point increase in insurance coverage between 20 and 35 percent. Economic research we reviewed indicated that LTV ratios and credit scores are among the most important factors when estimating the risk level associated with individual mortgages. We identified and reviewed 45 papers that examined factors that could be informative in estimating mortgage risk. Of these, 37 examined whether the LTV ratio was important, and almost all of these papers (35) found the LTV ratio of a mortgage important and useful. Nineteen research papers evaluated how effective a borrower’s credit score was in predicting loan performance, and all but one reported that the credit score was important and useful. In addition, a number of the papers reported that other factors were useful when estimating the risk level. 
For example, characteristics of the borrower—such as qualifying ratios—were cited in several of the papers we reviewed. Finally, other research evaluated additional factors; however, we identified very few papers that investigated the same variables or corroborated these findings. Collectively, the research we reviewed appeared to concur that considering multiple factors was important and useful in estimating the risk level of individual mortgages. For example, some of the papers (7) reported that considering LTV ratio and credit score concurrently was important and useful when estimating the risk level of individual mortgages. Many studies found that a mortgage’s LTV ratio was an important factor when estimating the risk level associated with individual mortgages. In theory, LTV ratios are important because of the direct relationship that exists between the amount of equity borrowers have in their homes and the risk of default. The higher the LTV ratio, the less cash borrowers will have invested in their homes and the more likely it is that they may default on mortgage obligations, especially during times of economic hardship (e.g., unemployment, divorce, home price depreciation). According to one study, “most models of mortgage loan performance emphasize the role of the borrower’s equity in the home in the decision to default.” We identified 45 papers that examined the relationship between default and one or more predictive variables; of these, 37 examined whether the LTV ratio was important and useful. Almost all of these papers (35) determined that the LTV ratio was effective in predicting loan performance—specifically, when predicting delinquency, default, and foreclosure. Several papers reported that there was a strong positive relationship between LTV ratio and default. 
Specifically, one paper reported that the default rates for mortgages with an LTV ratio above 95 percent were three to four times higher than default rates for mortgages with an LTV ratio between 90 and 95 percent. Another paper found that, at the end of 5 years, the cumulative probability of default for mortgages with an LTV ratio less than 95 percent was 2.48 percent, while the cumulative probability of default for mortgages with an LTV ratio greater than or equal to 95 percent was 3.53 percent. While the majority of the empirical research found that LTV ratio mattered, 4 of the research efforts did not find that the LTV ratio was important when estimating the risk level associated with individual mortgages. For example, one paper found that, for subprime loans, delinquency rates were relatively unaffected by the LTV ratio. Generally, subprime loans are loans made to borrowers with past credit problems at a higher cost than conventional mortgage loans. Additionally, some (7) research efforts examined the relationship between the LTV ratio and severity (losses), and all found that there was a positive relationship between the LTV ratio and severity. For a detailed list of the economic research that addresses the relationship between LTV ratio and mortgage performance, see appendix II. Despite the relatively recent use of credit score information in the mortgage industry, several studies found that credit score was an important and useful factor when estimating the risk level associated with individual mortgages. In general, credit scores represent a borrower’s credit history. Credit histories consist of many items, including the number and age of credit accounts of different types, the incidence and severity of payment problems, and the length of time since any payment problems occurred. The credit score reflects a borrower’s historic performance and is an indication of the borrower’s ability and willingness to manage debt payments. 
Of the 45 papers we reviewed, 19 evaluated how effective a borrower’s credit score was in predicting loan performance. Eighteen research efforts evaluated how effective a borrower’s credit score was in predicting delinquency, default, and foreclosure; all of these efforts found that a borrower’s credit score was important. Generally, the papers reported that higher credit scores were associated with lower levels of defaults. Specifically, one study found that a mortgage with a credit score of 728 (indicating an applicant with excellent credit) had a default probability of 1.26 percent, while a mortgage with a credit score of 642 had a default probability of 3.41 percent—or more than two times higher. Additionally, four research efforts examined the relationship between credit score and severity (losses), and three reported that there was a negative relationship between credit score and severity. For example, one study found that credit scores were also helpful in predicting the amount of losses resulting from foreclosed mortgages. In particular, the paper reported that the loss rate for defaulted mortgages with high credit scores was lower than that for foreclosed mortgages with low credit scores. For a list of the economic research we reviewed that addresses the relationship between credit score and mortgage performance, see appendix II. Many of the papers we reviewed identified factors that, in addition to LTV ratios and credit scores, were important and useful determinants of credit risk for home mortgages. Of these, the most widely analyzed factor—accumulation of equity in the home—was the subject of 26 studies we reviewed. Some factors were the subject of far fewer papers. Yet other factors were the subject of a single paper only. The most widely assessed factors included borrower characteristics such as accumulation of equity in the home, qualifying ratios, and income. 
Additionally, characteristics of the area in which the property was located included variables such as unemployment rates and income levels. Finally, characteristics of the mortgage included variables such as mortgage age and term of the mortgage (e.g., 15 year vs. 30 year). The extent to which the authors agreed on the importance of the other factors varied. For example, nearly all of the papers that looked at equity accumulation (a factor that is not known at the time of loan origination), the unemployment rate of the area in which the property is located, and mortgage age found that these factors were important. However, the research was less certain as to the importance of qualifying ratios and income. That is, several of the papers found that qualifying ratios and income were important in estimating risk; however, some found that they were not important factors. The economic research we reviewed also indicated that considering factors in combination was helpful in estimating the risk level of individual mortgages. Of the 45 papers we reviewed, more than half conducted multivariate analyses. For example, seven studies found that using credit score information in combination with the LTV ratio was helpful in estimating the risk level of individual mortgages. Specifically, one study found that the “foreclosure rate is particularly high for borrowers with both low credit scores and high LTV ratios—almost 50 times higher than that for borrowers with both high credit scores and low LTV ratios.” Other studies examined several aspects of a mortgage concurrently. For example, in one study, the authors controlled for certain loan characteristics, such as credit history and LTV, and they found that borrower income is useful in estimating risk levels of mortgages. 
In another study, the authors controlled for house price appreciation (10 percent) and unemployment rates (8 percent) and examined loan performance—after 15 years—in terms of LTV ratio and a borrower’s relative income. Regardless of income, default was higher for zero down payment mortgages. Specifically, under these conditions, the authors reported that zero down payment mortgages of borrowers with incomes below 60 percent of the metropolitan statistical area’s (MSA) median level would have cumulative default rates about twice as high as mortgages that required a 10 percent down payment made to borrowers with similar incomes. Similarly, the zero down payment mortgages of borrowers with incomes greater than one-and-a-half times the MSA’s median level would have cumulative default rates about 50 percent greater than mortgages that required a 10 percent down payment made to borrowers with similar incomes. Consistent with studies we reviewed, our analysis of FHA and conventional mortgage data indicated that mortgages with high LTV ratios (smaller down payments) and low credit scores generally are riskier than mortgages with low LTV ratios and high credit scores. For example, FHA-insured mortgages with LTV ratios greater than 80 percent and low credit scores (below 660) had a default rate above the FHA average default rate. Similarly, conventional mortgages with LTV ratios greater than 80 percent and somewhat low credit scores (below 700) had a default rate above the conventional average default rate. While this analysis is useful in determining the extent to which LTV ratios and credit scores can help predict the risk level associated with individual mortgages, care should be taken when comparing the FHA and conventional relative default rates. 
In particular, the relative default rates are derived from different calendar years (that is, a sample of FHA mortgages insured in 1992, 1994, and 1996 and conventional mortgages originated in 1997, 1998, and 1999 and purchased by Fannie Mae or Freddie Mac). Also, the average default rate for FHA-insured mortgages is higher than the average default rate for conventional mortgages. When considering the LTV ratio alone, FHA-insured mortgages with higher LTV ratios (smaller down payments) generally perform worse than FHA-insured mortgages with lower LTV ratios. As figure 4 illustrates, our analysis indicates that the incidence of default increases as LTV ratios increase. The default rate for sampled FHA-insured mortgages with an LTV ratio of 70 percent or less is no more than half the average FHA default rate. In contrast, the default rate for mortgages with LTV ratios greater than 90 percent, as a group, surpasses the average FHA default rate. For the highest LTV ratio group—greater than 97 to 100 percent—the default rate is about 1.75 times the average FHA default rate. FHA-insured mortgages with lower credit scores generally perform worse than FHA-insured mortgages with higher credit scores, regardless of LTV ratio. As figure 5 illustrates, our analysis indicated that the incidence of default increases as credit scores decrease. When considering the credit score alone, the default rate for sampled FHA-insured mortgages with credit scores of 700 and above is no more than half the average FHA default rate for all sampled mortgages. The default rate for mortgages with a credit score below 660, as a group, surpasses the average FHA default rate. For the lowest credit score group—less than 620—the default rate is almost twice the average FHA default rate. 
As expected, FHA-insured mortgages with both high LTV ratios (smaller down payments) and low credit scores generally perform worse than mortgages with both low LTV ratios and high credit scores. Our analysis indicates that the incidence of default increases as LTV ratios increase and credit scores decrease. As figure 6 illustrates, mortgages with lower LTV ratios and higher credit scores (those at the bottom of the figure) have lower default rates than mortgages with higher LTV ratios and lower credit scores (at the top of the figure). FHA-insured mortgages with LTV ratios greater than 80 percent and low credit scores (below 660) had a default rate above the FHA average default rate. FHA-insured mortgages with LTV ratios greater than 90 percent and credit scores below 620 had a default rate more than double the FHA average. Generally, the performance relationships that exist for FHA-insured mortgages also exist for conventional mortgages originated in the late 1990s. As figure 7 illustrates, our analysis indicates that conventional mortgages with higher LTV ratios (smaller down payments) generally perform worse than conventional mortgages with lower LTV ratios. When considering LTV ratio alone, the default rate for the group of conventional mortgages with LTV ratios below 80 percent was no more than half the average conventional default rate. Generally, the default rates then increase with higher categories of the LTV ratio. In fact, the default rate for conventional mortgages with an LTV ratio greater than 90 percent but less than 97 percent, as a group, is more than twice the average conventional default rate. One notable exception to this general pattern is that conventional mortgages in the highest LTV ratio category (that is, greater than 97 to 100 percent) appear to have a lower risk of default than do conventional mortgages in some of the lower LTV ratio categories. According to GSE officials, this may be explained by a number of possible factors. 
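The bucketed comparison underlying figures 4 through 6 (default rates by LTV-ratio and credit-score category, expressed relative to the portfolio average) can be sketched in a few lines. The loan records below are hypothetical illustrations, and the bucket boundaries simply mirror the categories discussed in the text; this is not GAO's actual data or analysis code.

```python
# Illustrative sketch (hypothetical data): default rates by LTV-ratio and
# credit-score bucket, expressed relative to the portfolio-wide average.

def ltv_bucket(ltv):
    # Bucket boundaries mirror the LTV categories discussed in the text.
    if ltv <= 80:
        return "<=80"
    if ltv <= 90:
        return ">80-90"
    if ltv <= 97:
        return ">90-97"
    return ">97-100"

def score_bucket(score):
    # Bucket boundaries mirror the credit-score categories in the text.
    if score < 620:
        return "<620"
    if score < 660:
        return "620-659"
    if score < 700:
        return "660-699"
    return ">=700"

def relative_default_rates(loans):
    """loans: list of (ltv, credit_score, defaulted) tuples, defaulted 0/1.
    Returns {(ltv_bucket, score_bucket): bucket_rate / overall_rate}."""
    overall = sum(d for _, _, d in loans) / len(loans)
    groups = {}
    for ltv, score, d in loans:
        groups.setdefault((ltv_bucket(ltv), score_bucket(score)), []).append(d)
    return {k: (sum(v) / len(v)) / overall for k, v in groups.items()}

# Hypothetical mini-portfolio: high-LTV, low-score loans default more often.
sample = [(75, 720, 0), (75, 720, 0), (85, 680, 0), (85, 680, 1),
          (95, 600, 1), (95, 600, 1), (95, 600, 0), (75, 720, 0)]
rates = relative_default_rates(sample)
```

A ratio above 1.0 for a bucket means that bucket defaults more often than the portfolio average, which is how the figures' comparisons are framed.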
The GSEs had just begun to purchase an increasing number of mortgages with very high LTV ratios during the late 1990s, and they took steps to limit the risks associated with these mortgages. For example, some of these loans were part of negotiated deals with individual lenders. These negotiated transactions may have required the use of manual underwriting and minimum credit scores, and the GSEs may have used specific servicers for these loans. GSE officials told us that lenders and servicers operating as part of negotiated deals with them tend to be more conservative in their approach to these loans. GSE officials also told us that the borrowers during this time period would have been the very best segment of the applicant pool. Agency officials indicate that, for more recent loans where volumes are higher and lenders are reaching deeper into the applicant pool, default rates on loans in these categories are higher than they were in the 1997–1999 period and are now consistent with the relationship we would expect between LTV and default rates. We discuss these practices in greater depth later in the report. When considering credit score alone, conventional mortgages with lower credit scores generally perform worse than conventional mortgages with higher credit scores. As figure 8 illustrates, our analysis indicates that the incidence of default generally increases as credit scores decrease. The average default rate for mortgages with credit scores of 740 and higher is no more than 20 percent of the average default rate for conventional loans, and loan performance declines for each lower category of credit score. In fact, the default rate for mortgages with a credit score below 700, as a group, surpasses the average default rate. Ultimately, the average default rate for the lowest credit score category (below 620) is more than four times the average conventional default rate. 
As expected, conventional mortgages with both high LTV ratios (smaller down payments) and low credit scores generally perform worse than mortgages with both lower LTV ratios (larger down payments) and higher credit scores. Our analysis indicates that the incidence of default generally increases as LTV ratios increase and credit scores decrease. As figure 9 illustrates, mortgages with lower LTV ratios and higher credit scores (those at the bottom of the figure) have much smaller default rates than mortgages with higher LTV ratios and lower credit scores (at the top of the figure). Specifically, as a group, mortgages with LTV ratios greater than 80 percent and credit scores below 700 have default rates greater than the average conventional default rate. Further, conventional mortgages with LTV ratios greater than 80 percent and credit scores below 660 had a default rate more than twice the conventional average. One notable exception to this general pattern is that the group of conventional mortgages with the highest LTV ratios (that is, greater than 97 to 100 percent) appears to have a lower risk of default than do groups of conventional mortgages with lower LTV ratios for loans originated during these years. For example, of conventional loans with credit scores of 740 and higher, those that had LTV ratios greater than 97 percent, as a group, performed better than those with LTV ratios greater than 90 to 97 percent. Similarly, of conventional loans with credit scores below 620, those with the highest LTV ratios performed better than those with LTV ratios greater than 90 to 97 percent. This anomaly, in which the highest LTV mortgages appear to perform better than the lower LTV loans, may reflect that the GSEs had just begun to purchase an increasing number of mortgages with very high LTV ratios in the years we analyzed and that the GSEs took steps to limit the risks associated with these mortgages. 
Likewise, lenders may perform more rigorous underwriting when first originating a new loan product. While this analysis is useful in determining the extent to which LTV ratios and credit scores are helpful in predicting the risk level associated with individual mortgages insured by FHA and for mortgages purchased by the GSEs during specific years, there are several reasons why care should be taken when comparing the FHA with the conventional relative default rates. The relative default rates are derived from different years (that is, FHA mortgages insured in 1992, 1994, and 1996; and conventional mortgages originated in 1997, 1998, and 1999 and purchased by Fannie Mae or Freddie Mac). Also, the actual average default rate for FHA-insured mortgages is higher than the actual average default rate for conventional mortgages. Finally, the distribution among LTV categories for FHA-insured loans and conventional loans differs. Generally, over half of the loans that the GSEs purchase have LTV ratios at or below 80 percent. In comparison, loans insured by FHA generally have LTV ratios greater than 95 percent. Mortgage institutions we spoke with used a number of similar practices in designing and implementing new products, including low and no down payment products. Some of these practices could be helpful to FHA in its design and implementation of new products. When considering new products, mortgage institutions focused their initial efforts on identifying other products with similar enough characteristics to their new product so that data on these products could be used to understand the potential issues and performance for the proposed product. Some mortgage institutions, including FHA, said they may acquire external loan performance data and other data when designing new products. Moreover, mortgage institutions often establish additional requirements for new products such as additional credit enhancements or underwriting requirements. 
FHA has less flexibility in imposing additional credit enhancements, but it does have the authority to seek co-insurance, which it is not currently using. FHA makes adjustments to underwriting criteria and to its premiums but is not currently using any credit score thresholds. Mortgage institutions also use different means to limit how widely they make available a new product, particularly during its early years. FHA does sometimes use practices for limiting a new product but usually does not pilot products on its own initiative; FHA officials question the circumstances under which they can limit the availability of a program and told us they do not have the resources to manage programs with limited availability. According to officials of mortgage institutions, including FHA, they also often put in place more substantial monitoring and oversight mechanisms for their new products, including lender oversight, but we have previously reported that FHA could improve oversight of its lenders. Mortgage institutions such as Fannie Mae, Freddie Mac, the private mortgage insurers, and FHA first identify what information, including data, they already have that would allow them to understand the performance of a potential product. When these institutions do not have sufficient data, they may purchase external data that allow them to conduct their own analysis of loans related to the type of loan product they are considering. For example, Freddie Mac purchased structured transactions of Alt-A and subprime loans in order to learn more about the underwriting characteristics and performance of high LTV and low credit score loans. Freddie Mac officials reported that these data were very helpful to them in considering how to best structure some of their high LTV products. 
Moreover, the accounting standards related to the Federal Credit Reform Act of 1990, which requires federal agencies to estimate the budget cost of federal credit programs, suggest that federal agencies making changes to programs should consider external sources of data. FHA officials told us that FHA has purchased such loan performance data. According to FHA officials, FHA relies more heavily on data that it has collected internally from the approximately 1 million loans it endorses each year and its single-family data warehouse, which contains data on approximately 30 million loans. FHA officials stated that, when possible, they use these internal data to create a proxy for how a loan product with certain characteristics might perform. FHA officials said they used these data to create a “virtual zero down loan” when FHA was considering how it might implement a proposed no down payment product. The mortgage institutions with whom we spoke noted that any loan performance data they develop or produce when implementing new products are also used to enhance their automated underwriting systems. The data improve the statistical models used in these automated underwriting systems. In May 2004, FHA implemented a statistical model for evaluating mortgage risk that may be used in lenders’ automated underwriting systems, called the FHA TOTAL Scorecard. In developing the TOTAL Scorecard, FHA purchased external data (credit score data), which it merged with its existing data to try to better understand the loan performance of FHA-insured loans. Some mortgage institutions require additional credit enhancements (mechanisms for transferring risk from one party to another) on low and no down payment products and set stricter underwriting requirements for these products. Mortgage institutions such as Fannie Mae and Freddie Mac mitigate the risk of low and no down payment products by requiring additional credit enhancements such as higher mortgage insurance coverage. 
Fannie Mae and Freddie Mac require credit enhancements on all loans they purchase that have LTVs above 80 percent. Typically, this takes the form of private mortgage insurance. Fannie Mae and Freddie Mac also require higher levels of private mortgage insurance coverage for loans that have higher LTV ratios. For example, Fannie Mae and Freddie Mac require insurance coverage of 35 percent for loans that have an LTV greater than 95 percent. This means that, for any individual loan that forecloses, the mortgage insurer will pay the losses on the loan up to 35 percent of the claim amount. Fannie Mae and Freddie Mac require lower insurance coverage for loans with LTVs below 95 percent. Fannie Mae and Freddie Mac believe that the higher-LTV loans represent a greater risk to them and they seek to partially mitigate this risk by requiring higher mortgage insurance coverage. Although FHA is required to provide up to 100 percent coverage of the loans it insures, FHA may engage in co-insurance of its single-family loans. Under co-insurance, FHA could require lenders to share in the risks of insuring mortgages by assuming some percentage of the losses on the loans that they originated (lenders may use private mortgage insurance). FHA has used co-insurance before, primarily in its multifamily programs, but does not currently use co-insurance at all. FHA officials told us they tried to put together a co-insurance agreement with Fannie Mae and Freddie Mac and, while they were able to come to agreement on the sharing of premiums, they could not reach agreement on the sharing of losses and it was never implemented. FHA could also benefit from other means of mitigating risk such as stricter underwriting or increasing fees. Fannie Mae officials also stated that they would charge higher guarantee fees on low and no down payment loans if they were not able to require the higher insurance coverage. 
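The coverage mechanics described above are simple to state precisely: on a foreclosed loan, the mortgage insurer absorbs the loss up to the coverage percentage of the claim amount, and the investor absorbs the remainder. A minimal sketch, with hypothetical dollar figures:

```python
def split_loss(claim_amount, loss, coverage=0.35):
    """Split a foreclosure loss between the mortgage insurer and the
    investor, given a coverage percentage (e.g., 35 percent for loans
    with LTV ratios greater than 95 percent, per the text)."""
    insurer_pays = min(loss, coverage * claim_amount)   # insurer's cap
    investor_absorbs = loss - insurer_pays              # remainder
    return insurer_pays, investor_absorbs

# Hypothetical example: a $200,000 claim with an $80,000 loss at 35%
# coverage. The insurer caps out at 0.35 * 200,000 = $70,000.
ins, inv = split_loss(200_000, 80_000)
```

When the loss is below the cap (say, $40,000 on the same claim), the insurer covers it in full and the investor absorbs nothing, which is why higher coverage percentages shift more of the high-LTV risk to the insurer.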
Fannie Mae and Freddie Mac charge guarantee fees to lenders in exchange for converting whole loans into mortgage-backed securities, which transfers the credit risk from the lender to Fannie Mae or Freddie Mac. Within statutory limits, the HUD Secretary has the authority to set up-front and annual premiums that are charged to borrowers who have FHA-insured loans. In fact, the administration’s 2005 budget proposal for a zero down payment product included higher premiums for these loans. The Secretary has the authority to establish an up-front premium, which may be up to 2.25 percent of the amount of the original insured principal obligation of the mortgage. Within statutory limits, the Secretary may also require payment of an annual premium. Under the Administrative Procedure Act, the Secretary would generally follow a process in which a change to premiums would include issuing a proposed rule, receiving public comments, and then issuing a final rule. Additionally, mortgage institutions such as Fannie Mae and Freddie Mac sometimes introduce stricter underwriting standards as part of the development of new low and no down payment products (or products whose risks they do not fully understand). Institutions can do this in a number of ways, including requiring a higher credit score threshold for certain products, requiring greater borrower reserves, or requiring more documentation of income or assets from the borrower. Freddie Mac officials stated that they believed limits on allowing ARMs or multiple-unit properties were also reasonable, at least initially. Once the mortgage institution has learned enough about the risks that were previously not understood, it can change the underwriting requirements for these new products to align with its standard products. Although FHA sometimes has certain standards set for it through legislation, it has some flexibility in how it implements a newly authorized product or changes to an existing product. 
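As a worked example of the statutory ceiling cited above, an up-front premium of up to 2.25 percent of the original insured principal is a one-line calculation. The loan amount below is hypothetical, and the actual premium is whatever the Secretary sets within the limit:

```python
def upfront_premium(principal, rate=0.0225):
    """Up-front premium as a share of the original insured principal.
    The 2.25% default is the statutory ceiling cited in the text;
    actual premiums are set by the Secretary within that limit."""
    assert rate <= 0.0225, "rate exceeds the statutory ceiling"
    return principal * rate

# Hypothetical $150,000 mortgage at the 2.25% ceiling: about $3,375.
premium = upfront_premium(150_000)
```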
The HUD Secretary has latitude within statutory limitations in changing underwriting requirements for new and existing products and has done this many times. Examples include a decrease in what is counted as a borrower’s debts and an expansion of the definition of what can be counted as a borrower’s effective income when lenders calculate qualifying ratios. In the context of the new zero down product, the Federal Housing Commissioner at HUD has stated that all loans being considered for a zero down loan would go through FHA’s TOTAL Scorecard and that borrowers would be required to receive prepurchase counseling. Fannie Mae and Freddie Mac sometimes use pilots, or limited offerings of new products, to build experience with a new product type or to learn about particular variables that can help them better understand the factors that contribute to risk for these products. Freddie Mac and Fannie Mae also sometimes set volume limits on the percentage of their business that could be low and no down payment lending. Fannie Mae and Freddie Mac officials provided numerous examples of products that they now offer as standard products but that began as part of underwriting experiments. These include the Fannie Mae Flexible 97® product, as well as the Freddie Mac 100 product. FHA has used pilots or demonstrations as well when making changes to its single-family mortgage insurance but generally does this in response to legislation that requires a pilot and not on its own initiative. One example in which FHA might have opted to do a pilot, or otherwise limited volumes, is its allowance of nonprofit down payment assistance. Concerns have been raised about the performance of FHA loans that have down payment assistance. FHA might have benefited from setting some limits on this type of assistance so that it could study the implications before allowing broader use. 
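Because the qualifying ratios mentioned above are simple quotients, any change in what counts as debt or as effective income moves them directly. A hedged sketch of how lenders compute them, with hypothetical borrower figures and no program-specific thresholds assumed:

```python
def qualifying_ratios(effective_income, housing_expense, other_debts):
    """Front-end and back-end qualifying ratios as lenders compute them:
    monthly housing expense / effective income, and (housing expense +
    other monthly debt payments) / effective income. Thresholds vary by
    program and era, so none are hard-coded here."""
    front = housing_expense / effective_income
    back = (housing_expense + other_debts) / effective_income
    return front, back

# Hypothetical borrower: $4,000/month effective income, $1,100 monthly
# housing expense, $500 in other monthly debt payments.
front, back = qualifying_ratios(4_000, 1_100, 500)  # 0.275 and 0.40
```

Note how expanding the definition of effective income (a larger denominator) or narrowing what counts as debt (a smaller numerator) lowers the ratios and qualifies more borrowers, which is the effect of the underwriting changes the text describes.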
FHA’s Home Equity Conversion Mortgage (HECM) insurance program is an example of an FHA program that started out as a pilot. HECM was initiated by Congress in 1987 and is designed to provide elderly homeowners a financial vehicle to tap the equity in their homes without selling or moving from their homes. Homeowners borrow against equity in their home and receive payments from their lenders (sometimes called a “reverse mortgage”). Through statute, HECM started out as a demonstration program that authorized FHA to insure 2,500 reverse mortgages. Through subsequent legislation, FHA was authorized to insure 25,000 reverse mortgages, then 50,000, and then finally 150,000 when Congress made the program permanent in 1998. Under the National Housing Act, the HECM program was required to undergo a series of evaluations, and it has been evaluated four times since its inception. FHA officials told us that administering this demonstration for only 2,500 loans was difficult because of the challenges of selecting only a limited number of lenders and borrowers. FHA ultimately had to limit loans to lenders drawn through a lottery. The appropriate size for a pilot program depends on several factors. For example, the precise number of loans needed to detect a difference in performance between standard loans and loans of a new product type depends in part on how great the differences are in loan performance. If delinquencies early in the life of a mortgage were about 10 percent for FHA’s standard high LTV loans, and FHA wished to determine whether loans in the pilot had delinquency rates no more than 20 percent greater than those of the standard loans (delinquency no more than 12 percent), a sample size of about 1,000 loans would be sufficient to detect this difference with 95 percent confidence. If delinquency rates or FHA’s desired degree of precision were different, a different sample size would be appropriate. 
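The report does not spell out how the roughly 1,000-loan figure was derived. One simple reading that lands in the same range, which is our assumption rather than the report's stated method, treats the 10 percent benchmark rate as known and asks how many pilot loans are needed for a 95 percent confidence interval whose half-width is no larger than the 2-percentage-point difference to be detected:

```python
import math

def sample_size_for_proportion(p, margin, z=1.96):
    """Loans needed so that a confidence interval around an observed
    rate near p has half-width <= margin; z = 1.96 gives 95% confidence.
    Standard margin-of-error formula: n = z^2 * p * (1 - p) / margin^2."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Baseline delinquency of 10%; detecting 12% (a 20% relative increase)
# requires a margin of error of no more than 2 percentage points.
n = sample_size_for_proportion(0.10, 0.02)  # 865 loans, on the order of 1,000
```

A two-sample design (pilot loans compared against a concurrently sampled standard-loan group) would require a larger pilot, which is consistent with the report's caveat that a different precision target implies a different sample size.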
FHA officials with whom we spoke told us they could use pilots or otherwise limit availability when implementing a new product or making changes to an existing product, but they also questioned their authority and the circumstances under which they would do so. FHA officials also said that they lacked sufficient resources to be able to appropriately manage a pilot. Some mortgage institutions may also limit the initial implementation of a new product by limiting the origination and servicing of the product to their better lenders and servicers, respectively. Mortgage institutions may also limit servicing on the loans to servicers with particular product expertise, regardless of who originates the loans. Fannie Mae and Freddie Mac both reported that these were important steps in introducing a new product and noted that lenders tend to take a more conservative approach when first implementing a new product. FHA officials agreed that they could, under certain circumstances, envision piloting or limiting the ways in which a new or changed product would be available but pointed to the practical limitations in doing so. FHA approves the sellers and servicers that are authorized to support FHA’s single-family product. FHA officials told us they face challenges in offering any of their programs only in certain regions of the country or in limiting programs to certain approved lenders or servicers. They generally offer their products on a national basis and, when they do not, specific regions of the country or lenders may question why they are not able to receive the same benefit (even on a demonstration or pilot basis). These officials did, though, provide examples in which their products had been initially limited to particular regions of the country or to particular lenders, including the rollout of the HECMs and the TOTAL Scorecard. 
Mortgage institutions, including FHA, may take several steps related to increased monitoring of new products and then make changes based on what they learn. Fannie Mae and Freddie Mac officials described processes in which they monitor actual versus expected loan performance for new products, sometimes including enhanced monitoring of early loan performance. FHA officials told us they also monitor more closely loans underwritten under revised guidelines. Specifically, FHA officials told us that FHA routinely conducts a review of underwriting for approximately 6 to 7 percent of loans it insures. FHA officials told us that, as part of the review, it may place greater emphasis on reviewing those aspects of the insurance product that are the subject of a recent change. Some mortgage institutions, such as Fannie Mae, told us that they may conduct rigorous quality control sampling of new acquisitions, early payment defaults, and nonperforming loans. Depending on the scale of a new initiative, and its perceived risk, these quality control reviews could include a review of up to 100 percent of the loans that are part of the new product. Fannie Mae and Freddie Mac also reported that they conduct more regular reviews at seller/servicer sites for new products. In some cases, Fannie Mae and Freddie Mac have staff who conduct on-site audits at the sellers and servicers to provide this extra layer of oversight. FHA officials also reported that they have staff that conduct reviews of lenders that they have identified as representing higher risk to FHA programs. However, we recently reported that HUD’s oversight of lenders could be improved and identified a number of recommendations for improving this oversight. Mortgage institutions may issue a lender bulletin, announcement, or seller/servicer guidelines to clarify instructions for new products or changes to existing products. FHA does this through the mortgagee letters it issues to all of its approved lenders. 
Mortgage institutions may also issue a lender bulletin, announcement, or seller/servicer guidelines to communicate required additional controls, practices, procedures, reporting, and remitting. Importantly, changes can be made to the structure of a product, including the automated underwriting systems used to approve individual loans, based on information learned from monitoring of new products or from other sources. FHA officials told us that they routinely analyze the changing performance of loans they insure as part of the annual process for estimating and re-estimating subsidy costs. The Federal Credit Reform Act of 1990 requires that federal government programs that make direct loans or loan guarantees (including insuring loans) account for the full cost of their programs on an annual budgetary basis. Specifically, federal agencies must develop subsidy estimates of the net cost of their programs that include estimates of the net costs and revenues over the projected lives of the loans made in each fiscal year. FHA’s Mutual Mortgage Insurance Fund has historically been self-sufficient (not requiring subsidy). When preparing cost estimates for loan guarantee programs, agencies are expected to develop a plan to establish the appropriate information, models, and documentation to better understand the new product and to be able to make changes based on what they learn. FHA officials state that they have a process in which changes to their model are made to reflect the incorporation of new programs and policies and that they review the performance of a new program in the context of their annual development of subsidy estimates, as well as their annual actuarial study. While credit score is an effective predictor of default, LTV ratio remains an effective predictor as well: loans with low or no down payments carry greater risk. 
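At its core, the credit reform subsidy estimate described above is a present-value calculation: the expected lifetime cash flows of a cohort of guarantees, discounted back to the year of origination. The sketch below uses hypothetical cash flows and a hypothetical discount rate; a negative result corresponds to a program that takes in more than it pays out, consistent with the text's note that the Mutual Mortgage Insurance Fund has historically not required subsidy.

```python
def subsidy_cost(cash_flows, discount_rate):
    """Net present value of a loan guarantee cohort's expected cash flows
    to the government (negative values are receipts such as premiums,
    positive values are outlays such as claim payments), following the
    credit-reform convention of estimating lifetime costs up front.
    cash_flows[t] is the expected net outlay in year t (t = 0, 1, ...)."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))

# Hypothetical cohort, per $10,000 of insured principal: premium receipts
# up front, expected claim payments concentrated in later years.
flows = [-300, -100, 50, 150, 100]
cost = subsidy_cost(flows, 0.05)   # negative: a net receipt to the government
```

Re-estimates, as described in the text, simply rerun this calculation each year with updated expectations for the remaining cash flows.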
Without any compensating measures such as offsetting credit enhancements and increased risk monitoring and oversight of lenders, introducing a new FHA no down payment product would expose FHA to greater credit risk. The administration’s proposal for a zero down product included increased premiums to help compensate for an increase in the cost of the FHA program, and the Federal Housing Commissioner stated that borrowers would be required to go through prepurchase counseling. The extent to which increased cost for one program could affect the overall performance of FHA’s Mutual Mortgage Insurance (MMI) fund depends, in part, on the scale of any new product, its relative cost, and how the new product affects demand for FHA’s existing products. Although FHA appears to follow many key practices used by mortgage institutions in designing and implementing new products, several practices not currently or consistently followed by FHA stand out as appropriate means to manage the risks associated with introducing new products or significantly changing existing products. Moreover, these practices can be viewed as part of a framework used by some mortgage institutions for managing the risks associated with new or changed products. The framework includes techniques such as limiting the availability of a new product until it is better understood and establishing stricter underwriting standards, all of which would help FHA to manage the risk associated with any new product it may introduce. For example, FHA could set volume limits or limit the initial number of lenders participating in the product. Further, adjusting its premiums within statutory limits, an important practice FHA already uses, would permit FHA to offset additional costs stemming from a new product that entails greater risk or risk that is not well understood. FHA officials believe that the agency does not have sufficient resources to implement products with limited volumes, such as through a pilot program. 
However, when FHA introduces new products or makes significant changes to existing products whose risks are not well understood, implementing such changes broadly could expose the agency to significant risks, and products that introduce significant risks can impose significant costs. We believe that FHA could mitigate these costs by using techniques such as piloting. If Congress authorizes FHA to insure no down payment products or any other new single-family insurance products, Congress may want to consider a number of means to mitigate the additional risks that these loans may pose. Such means may include limiting the initial availability of such a new product, requiring higher premiums, requiring stricter underwriting standards, or requiring enhanced monitoring. Such risk mitigation techniques would help protect the Mutual Mortgage Insurance Fund while allowing FHA the time to learn more about the performance of loans using this new product. Limits on the initial availability of the new product would be consistent with the approach Congress took in implementing the HECM program. The limits could also come in the form of an FHA requirement to limit the new product to better performing lenders and servicers as part of a demonstration program or to limit the time period during which the product is first offered. 
If Congress provides the authority for FHA to implement a no down payment mortgage product or other products about which the risks are not well understood, we recommend that the Secretary of HUD direct the Assistant Secretary for HUD-Federal Housing Commissioner to consider the following three actions: incorporating stricter underwriting criteria, such as appropriate credit score thresholds or borrower reserve requirements; piloting the initial product or limiting its initial availability, and asking Congress for the authority if HUD officials determine they currently do not have this authority; and using other techniques for mitigating risks, including credit enhancements and prepurchase counseling. Regardless of any new products Congress may authorize, when making significant changes to its existing products or establishing new products, we recommend that the Secretary of HUD direct the Assistant Secretary for HUD-Federal Housing Commissioner to consider the following two actions: limiting the initial availability of the product and, when doing so, establishing the conditions under which piloting should be used, the techniques for limiting the initial availability of a product, and the methods of enhanced monitoring that would be connected to predetermined measures of success or failure for the product; and asking Congress for the authority to offer its new products or significant changes to existing products on a limited basis, such as through pilots, if HUD officials determine they currently lack sufficient authority. We provided a draft of this report to HUD, Fannie Mae, Freddie Mac, USDA, and VA. We received written comments from HUD, which are reprinted in appendix III. We also received technical comments from HUD, Fannie Mae, Freddie Mac, and USDA, which have been incorporated where appropriate. VA did not have comments on the draft. 
HUD stated that it is in basic agreement with GAO that all policy options, implications, and implementation methods should be evaluated when considering or proposing a new FHA product. HUD also stated that in designing its zero down payment program it considered the items that we recommended it consider, including piloting. HUD stated that it adopted the prepurchase counseling requirement as a component of a proposed zero down program and that it determined that structuring the mortgage insurance premium in such a way as to minimize risk represents the most appropriate tool for managing the risk of this proposed program. However, it is not clear under what circumstances HUD believes that piloting or limiting the availability of a changed or new product would be appropriate or possible. As we noted in our draft report, HUD officials told us that they face challenges in administering a pilot program because of the difficulty of selecting only a limited number of lenders and borrowers. HUD officials also held that they may not have the authority to limit products and that they lacked sufficient resources to adequately manage products as part of a pilot or with limited volumes. We believe that HUD needs to further consider piloting or limiting volume of new or changed products because, as we state in the report, it is a practice followed by others in the mortgage industry and could assist HUD in mitigating the risks and costs associated with new or changed products, while still allowing HUD to meet its goal of providing homeownership opportunities. Difficulties in selecting a limited number of lenders and questions about a lack of authority could both be addressed by seeking clear authority from Congress on these matters, if HUD officials determine they currently lack sufficient authority. 
As we note in our report, when considering the resources necessary to implement products with limited volumes, FHA should also weigh the costs it may face if it does not use pilots or limit the availability of certain new or changed products, given the significant risks that can be associated with products that are implemented broadly and about which the risks are not well understood. We do not believe that implementing products with initial limits is appropriate or necessary in all cases. To ensure that piloting or limiting the initial availability is given sufficient consideration, we continue to recommend that HUD consider establishing the conditions under which piloting should be used and the techniques for limiting the initial availability of a product, as well as the methods of enhanced monitoring that would be connected to predetermined measures of success or failure for the product. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and the Secretaries of Housing and Urban Development, Agriculture, and Veterans Affairs. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or shearw@gao.gov or Mathew Scirè, Assistant Director, at (202) 512-6794 or sciremj@gao.gov. Key contributors to this report are listed in appendix IV. To describe key characteristics and standards of mortgage products, we interviewed officials at the Federal Housing Administration (FHA), U.S. Department of Agriculture (USDA), and U.S. 
Department of Veterans Affairs (VA); as well as staff at a conventional mortgage provider (Bank of America); private mortgage insurers (for example, The PMI Group, Inc.; Mortgage Guarantee Insurance Corporation); the government-sponsored enterprises (GSE), Fannie Mae and Freddie Mac; the Office of Federal Housing Enterprise Oversight (OFHEO); various state housing finance agencies; and nonprofit down payment assistance providers (for example, Nehemiah Corporation of America and Ameridream, Inc.). We reviewed descriptions of various mortgage products and compared the standards used across entities, including FHA, USDA, and VA regulations and program guidance and the GSEs’ seller/servicer guides. We reviewed Web sites of state housing finance agencies, and if we identified zero down payment programs, we corroborated some of the Web site information through interviews of agency officials. To report on the volume of mortgage products, we reviewed relevant reports, including reports from the U.S. Department of Housing and Urban Development (HUD). To determine what economic research indicates about the variables that are most important when estimating the risk level associated with individual mortgages, we conducted a literature search. To identify recent and relevant papers, we used various Internet search engines (such as Online Computer Library Center, FirstSearch: EconLit; HUD USER) and inquired with various mortgage industry participants (for instance, FHA, Fannie Mae, Freddie Mac, and Nehemiah). Research we reviewed includes articles, reports, and papers obtained from economic journals, the Internet, and libraries, or provided to us by various entities (e.g., HUD, Fannie Mae, Freddie Mac). For the purposes of this report, we refer to these documents as “papers.” To facilitate the search we developed several criteria. 
For example, we used the following search terms: mortgage, performance, default, LTV ratio, credit score, and down payment assistance. We excluded the following terms from our search: multifamily and commercial. We limited our search to papers published or issued from 1999 to 2004; however, we did include some papers relevant to our inquiry, published or issued prior to 1999, that we determined were significant to our research objectives. We identified 151 papers. There may be some relevant research that our search did not identify. For the papers we identified, we conducted a multistep review. Initially, we determined which papers to include in our analysis. Papers included in the analysis were those that (1) were relevant to our inquiry, (2) included empirical analysis, and (3) utilized satisfactory methodologies. Papers that were not relevant were excluded from our analysis (for example, the paper’s subject was off-point, such as car loans, or it analyzed loans in a foreign country). Additionally, we determined whether the paper included empirical analysis; if it did not, we excluded it but still reviewed it to determine whether it cited papers, not yet identified, that appeared to include empirical analysis. If we identified such an additional paper that appeared to be relevant to our inquiry, we attempted to obtain it. Finally, we excluded papers with weak methodologies. GAO economists conducted the evaluations of economic models. During this review, we excluded 106 papers, leaving 45 for the second-stage review. Many of the papers we excluded lacked relevance or did not include empirical analysis. The second review consisted of documenting the findings of the papers that were relevant, had empirical analysis, and used satisfactory methodologies. 
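As a minimal illustration (not GAO's actual review tooling), the three-criterion screen described above can be sketched in Python; the `Paper` fields and sample titles are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    relevant: bool           # on-point for the inquiry (e.g., not car loans or foreign markets)
    empirical: bool          # contains empirical analysis
    sound_methodology: bool  # passed the economists' review of the economic model

def passes_screen(p: Paper) -> bool:
    # A paper survives the first-stage review only if it meets all three criteria.
    return p.relevant and p.empirical and p.sound_methodology

# Hypothetical candidates; in the report's actual review, 106 of 151 papers
# were excluded, leaving 45 for the second-stage review.
candidates = [
    Paper("Credit scores and mortgage default", True, True, True),
    Paper("Auto loan performance", False, True, True),          # excluded: off-point
    Paper("A theory of default (no data)", True, False, True),  # excluded: not empirical
]
kept = [p for p in candidates if passes_screen(p)]
print(len(kept))  # prints 1
```

The second-stage review then documents each surviving paper's findings, which the report catalogs in an Access database.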
To facilitate this analysis, we developed and maintained an Access database to document our analysis—cataloging the specific factors these papers identified as being important to estimating the risk level associated with individual mortgages. Finally, for these papers, we synthesized the literature by determining how many papers found each variable to be important. For a bibliography of the 45 papers included in our analysis, see appendix II. To examine the relationship between mortgage performance and two key underwriting variables, loan-to-value (LTV) ratio and credit score, we calculated 4-year default rates for several categories of mortgages with various LTV ratios and credit scores. We selected 4-year default rates because this period best balanced the competing goals of having recent loans and the greatest number of years of default experience. To perform this analysis, we first obtained mortgage volume and performance data from three mortgage institutions: FHA (government mortgages) and Fannie Mae and Freddie Mac (conventional mortgages). The FHA mortgage data consist of a stratified random sample of over 400,000 FHA-insured mortgages originated in calendar years 1992, 1994, and 1996. We used these data because they are the only significant data set of FHA loans that includes credit scores and that had at least 4 years of loan performance activity. The data come from a sample built by FHA for research purposes. The Fannie Mae and Freddie Mac data consist of all purchase-money mortgages originated in calendar years 1997, 1998, and 1999 and purchased by Fannie Mae or Freddie Mac. The data provided by Fannie Mae and Freddie Mac exclude government-insured mortgages. We selected these loan years because they include loans that aged at least 4 years and because, during these years, the GSEs began to purchase an increasing number of loans with higher LTVs. The GSEs provided us data that they considered to be proprietary. 
Although we limited the reporting of our analysis to that which was considered nonproprietary, this did not limit our overall findings for this objective. A comparison of results from the FHA and the conventional mortgage performance analysis should be done with care for a number of reasons: the data are from different years, FHA and the GSEs calculated LTVs differently, and FHA’s average 4-year default rate is higher than that of the GSEs. For this analysis, we used the LTV ratio contained in the data system for each mortgage institution. FHA defines the LTV ratio as the original mortgage balance, excluding the financed mortgage insurance premium, divided by the appraised value of the house. For the GSEs, LTV ratio is defined as the original mortgage balance divided by the lesser of the sale price of the house or the appraised value of the house. For this analysis, the credit score is the Fair Isaac score contained in each institution’s data system. The mortgage institutions obtain credit scores in various ways. FHA has only recently begun to collect credit scores in its single-family data warehouse. However, for research purposes, FHA purchased historic credit score information for the sample of mortgages originated in 1992, 1994, and 1996. Fannie Mae, on the other hand, obtains credit score information in two ways. For some mortgages, the lender obtained the borrower’s credit score information when it originated the mortgage and, upon Fannie Mae’s purchase of the mortgage, provides this credit score information to Fannie Mae. In other cases, lenders do not obtain borrowers’ credit scores; when Fannie Mae purchases the mortgage, it obtains a credit score for the borrower. For some mortgages, the institutions indicated that a credit score for a particular mortgage was unknown. Within the FHA data, about 8 percent of the mortgages had unknown scores; within the GSE data, about 3 percent of the mortgages had unknown scores. 
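The two LTV-ratio definitions described above can be sketched as follows; this is a hedged illustration with hypothetical dollar figures, and the function names are ours rather than FHA's or the GSEs':

```python
def fha_ltv(original_balance, financed_premium, appraised_value):
    # FHA definition: original mortgage balance, excluding the financed
    # mortgage insurance premium, divided by the appraised value of the house.
    return (original_balance - financed_premium) / appraised_value

def gse_ltv(original_balance, sale_price, appraised_value):
    # GSE definition: original mortgage balance divided by the lesser of
    # the sale price or the appraised value of the house.
    return original_balance / min(sale_price, appraised_value)

# Hypothetical loan: $98,000 balance that includes a $2,000 financed premium;
# $100,000 appraisal; $99,000 sale price.
print(fha_ltv(98_000, 2_000, 100_000))            # prints 0.96
print(round(gse_ltv(96_000, 99_000, 100_000), 4))
```

The same loan can thus carry a different LTV ratio under each definition, which is one of the reasons the report cautions against directly comparing FHA and GSE results.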
We included mortgages with unknown credit scores in our analysis and presented the loan performance results. We carried out several actions to ensure that data provided by FHA, Fannie Mae, and Freddie Mac were sufficiently reliable for use in our analysis. For the FHA sample data, we met with FHA staff involved in generating the sample data set. We also discussed data quality procedures with appropriate FHA staff. Based on these discussions, in which FHA officials described their policies and procedures, and the results of external audits of their data systems, we determined that the FHA data were sufficiently reliable to use in our analysis. FHA officials indicated that their data systems contain data entry edit checks and that data submitted by lenders were reviewed by FHA. FHA’s data system was audited by external auditors, and no major issues concerning data quality were raised. We also discussed data quality procedures with appropriate Fannie Mae and Freddie Mac staff. These procedures included data entry edit checks, exception reports, and checks for reasonableness. Additionally, we reviewed reports from audits of Fannie Mae and Freddie Mac. These audits included assessments of the Fannie Mae and Freddie Mac information systems that generated the data used in this report. We also compared the data with similar publicly available data. Based on these discussions and reviews of audit reports, we determined that the data Fannie Mae and Freddie Mac provided were sufficiently reliable to use in our analysis. With these data, we generated FHA and conventional 4-year default rates for several combinations of LTV ratios and credit scores. 
To do this, we defined default as a credit event that includes foreclosed mortgages as well as mortgages that did not experience foreclosure but that would typically lead to a credit loss, such as a “short sale” or a “deed-in-lieu of foreclosure” termination of the mortgage; selected six LTV ratio categories; selected six credit score categories; combined Fannie Mae and Freddie Mac mortgage volume and performance data, and combined mortgage volume and performance data across origination years (for the sample of mortgages insured by FHA in 1992, 1994, and 1996, and for conventional mortgages originated in 1997, 1998, and 1999 and purchased by Fannie Mae or Freddie Mac); calculated the average 4-year default rate for FHA (weighted average) and for all conventional loans separately by dividing the total dollar amount of mortgages experiencing a credit event by the total dollar amount of mortgages originated (for FHA) or purchased (for conventional); calculated the average 4-year default rates for sampled FHA loans and for conventional loans that fell within each LTV ratio and credit score category; and calculated the relative 4-year default rates for each LTV ratio and credit score category for FHA loans and for conventional loans by dividing the average 4-year default rate for each specific LTV and credit score category by the average 4-year default rate for sampled FHA loans and all conventional loans, respectively. For example, if the average 4-year default rate for FHA loans within a particular LTV ratio and credit score category was 3 percent, and the average 4-year default rate for all FHA loans was 2 percent, the relative 4-year default rate for FHA loans within this particular category would be 3 divided by 2, or 1.5 times the average FHA default rate. We do not present relative default rates for categories with small numbers of mortgages because the performance information may not be reliable when there are too few observations. 
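The dollar-weighted and relative default-rate arithmetic above can be sketched as follows; this is a minimal illustration built on the report's worked example of 3 percent and 2 percent, with function names and figures of our own choosing rather than GAO's analysis code:

```python
def default_rate(defaulted_dollars, originated_dollars):
    # Dollar-weighted 4-year default rate: total dollar amount of mortgages
    # experiencing a credit event divided by the total dollar amount of
    # mortgages originated (for FHA) or purchased (for conventional).
    return defaulted_dollars / originated_dollars

def relative_default_rate(category_rate, overall_rate):
    # A category's rate expressed as a multiple of the overall average rate.
    return category_rate / overall_rate

overall_pct = 2.0   # average 4-year default rate for all FHA sample loans, percent
category_pct = 3.0  # rate for one LTV ratio / credit score cell, percent
print(relative_default_rate(category_pct, overall_pct))  # prints 1.5
```

A relative rate above 1 thus marks an LTV/credit-score cell that defaulted more often than the portfolio average, and a rate below 1 marks a safer cell.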
In the figures, these instances are noted as “N/A.” For the FHA analysis, we used a cutoff of about 1,000 mortgages to determine whether there were sufficient observations to reliably measure the relative default rate. For the conventional analysis, we used a cutoff of about 3,000 mortgages to determine whether the relative default rate was reliable. We chose a higher cutoff for the GSE analysis because the GSEs have a lower default rate, and analysis of less frequent events requires a larger sample size. To determine what lessons FHA might learn from others that support low and no down payment lending, we obtained testimonial information from the mortgage industry (for example, FHA, GSEs, private mortgage insurers, and a private lender) about the steps they take to design and implement low and no down payment lending. We selected these entities based on their parallels to FHA, as well as their significance in the mortgage industry. Where available, we reviewed industry and academic information relevant to these steps in carrying out low and no down payment lending. We performed our audit work from January 2004 to December 2004 in accordance with generally accepted government auditing standards. Brent W. Ambrose and Charles A. Capone. “The Hazard Rates of First and Second Defaults,” Journal of Real Estate Finance and Economics, vol. 20 no. 3 (May 2000). Brent W. Ambrose and Charles A. Capone. “Modeling the Conditional Probability of Foreclosure in the Context of Single-Family Mortgage Default Resolutions,” Real Estate Economics, vol. 26 no. 3 (1998). Richard Anderson and James VanderHoff. “Mortgage Default Rates and Borrower Race,” The Journal of Real Estate Research, vol. 18 no. 2 (Sep/Oct 1999). Robert B. Avery, Raphael W. Bostic, Paul S. Calem, and Glenn B. Canner. “Credit Risk, Credit Scoring, and the Performance of Home Mortgages,” Federal Reserve Bulletin (July 1996). James A. Berkovec, Glenn B. Canner, Stuart A. Gabriel and Timothy H. Hannan. 
“Race, Redlining, and Residential Mortgage Loan Performance,” Journal of Real Estate Finance and Economics, vol. 9 no. 1 (July 1994). Paul S. Calem and Susan M. Wachter. “Community Reinvestment and Credit Risk: Evidence from an Affordable-Home-Loan Program,” Real Estate Economics, vol. 27 no. 1 (1999). Paul S. Calem and James Follain. The Asset-Correlation Parameter in Basel II for Mortgages on Single-Family Residences, a report prepared as background for public comment on the Advance Notice of Proposed Rulemaking on the Proposed New Basel Capital Accord, November 6, 2003. Paul S. Calem and Michael LaCour-Little. Risk-based Capital Requirements for Mortgage Loans, November 2001. Charles A. Calhoun and Yongheng Deng. “A Dynamic Analysis of Fixed- and Adjustable-Rate Mortgage Terminations,” Journal of Real Estate Finance and Economics, vol. 24 no. 1/2 (Jan 2002). Dennis R. Capozza, Dick Kazarian, and Thomas A. Thomson. “Mortgage Default in Local Markets,” Real Estate Economics, vol. 25 no. 4 (Winter 1997). Richard L. Cooperstein, F. Stevens Redburn, and Harry G. Meyers. “Modelling Mortgage Terminations in Turbulent Times,” AREUEA Journal, vol. 19 no. 4 (1991). Robert F. Cotterman. Analysis of FHA Single-Family Default and Loss Rates, a report prepared for the Office of Policy Development and Research, U.S. Department of Housing and Urban Development, March 25, 2004. Robert F. Cotterman. New Evidence on the Relationship Between Race and Mortgage Default: The Importance of Credit History Data, a report prepared for the Office of Policy Development and Research, U.S. Department of Housing and Urban Development, May 23, 2002. Robert F. Cotterman. Neighborhood Effects in Mortgage Default Risk, a report prepared for the Office of Policy Development and Research, U.S. Department of Housing and Urban Development, March 2001. Robert F. Cotterman. Assessing Problems of Default in Local Mortgage Markets, a report prepared for the Office of Policy Development and Research, U.S. 
Department of Housing and Urban Development, March 2001. Amy Crews Cutts and Robert Van Order. “On the Economics of Subprime Lending.” Freddie Mac Working Paper Series # 04-01 (January 2004) (http://freddiemac.com/corporate/reports/). Ralph DeFranco. “Modeling Residential Mortgage Termination and Severity Using Loan Level Data.” (Ph.D. diss., University of California, Berkeley, 2002). Yongheng Deng and John M. Quigley. Woodhead Behavior and the Pricing of Residential Mortgages, (December 2002). Yongheng Deng and Stuart Gabriel. Enhancing Mortgage Credit Availability Among Underserved and Higher Credit-Risk Populations: An Assessment of Default and Prepayment Option Exercise Among FHA-Insured Borrowers, a report prepared for the U.S. Department of Housing and Urban Development, August 2002. Yongheng Deng and Stuart Gabriel. Modeling the Performance of FHA-Insured Loans: Borrowers Heterogeneity and the Exercise of Mortgage Default and Prepayment Options, a report submitted to the Office of Policy Development and Research, U.S. Department of Housing and Urban Development, May 2002. Yongheng Deng, John M. Quigley, and Robert Van Order. “Mortgage Terminations, Heterogeneity and the Exercise of Mortgage Options,” Econometrica, vol. 68 no. 2 (March 2000). Yongheng Deng, John M. Quigley, and Robert Van Order. Mortgage Default and Low Downpayment Loans: The Costs of Public Subsidy, National Bureau of Economic Research: Working Paper No. 5184 (Cambridge, Mass.: July 1995). Peter J. Elmer and Steven A. Seelig. “Insolvency, Trigger Events, and Consumer Risk Posture in the Theory of Single-Family Mortgage Default,” Journal of Housing Research, vol. 10 no. 1 (1999). Robert M. Feinberg and David Nickerson. “Crime and Residential Mortgage Default: An Empirical Analysis.” Applied Economics Letters, vol. 9 (2002). Dan Feshbach and Michael Simpson. “Tools for Boosting Portfolio Performance,” Mortgage Banking, October 1999. Dan Feshbach and Pat Schwinn. 
“A Tactical Approach to Credit Scores,” Mortgage Banking, February 1999. Gerson M. Goldberg and John P. Harding. “Investment Characteristics of Low- and Moderate-Income Mortgage Loans,” Journal of Housing Economics, vol. 12 (2003). Government Accountability Office. Mortgage Financing: Changes in the Performance of FHA-Insured Loans, GAO-02-773. Washington, D.C.: July 10, 2002. Government Accountability Office. Mortgage Financing: FHA’s Fund Has Grown, but Options for Drawing on the Fund Have Uncertain Outcomes, GAO-01-460. Washington, D.C.: February 28, 2001. Valentina Hartarska, Claudio Gonzalez-Vega, and David Dobos. Credit Counseling and the Incidence of Default on Housing Loans by Low-Income Households, a paper prepared as part of a collaborative research program between Ohio State University and Paul Taylor and Associates, of Columbus, Ohio (February 2002). Abdighani Hirad and Peter M. Zorn (corresponding author). A Little Knowledge Is a Good Thing: Empirical Evidence of the Effectiveness of Pre-Purchase Homeownership Counseling, May 22, 2001. The Department of Housing and Urban Development, Office of Inspector General. Follow-up of Down Payment Assistance Programs Operated by Private Nonprofit Entities, 2002-SE-0001, Seattle, Washington, September 25, 2002. The Department of Housing and Urban Development, Office of Inspector General. Final Report of Nationwide Audit: Down Payment Assistance Programs (Office of Insured Single Family Housing), 2000-SE-121-0001, Seattle, Washington, March 31, 2000. Michael LaCour-Little and Stephen Malpezzi. “Appraisal Quality and Residential Mortgage Default: Evidence From Alaska,” Journal of Real Estate Finance and Economics, vol. 27 no. 2 (2003). Andrey D. Pavlov. “Competing Risks of Mortgage Termination: Who Refinances, Who Moves, and Who Defaults,” Journal of Real Estate Finance and Economics, vol. 23 no. 2 (September 2001). Anthony Pennington-Cross. 
“Credit History and the Performance of Prime and Nonprime Mortgages,” Journal of Real Estate Finance and Economics, vol. 27 no. 3 (2003). Anthony Pennington-Cross. “Subprime and Prime Mortgages: Loss Distributions.” Office of Federal Housing Enterprise Oversight Working Paper Series 03-01 (May 27, 2003). Anthony Pennington-Cross. “Patterns of Default and Prepayment for Prime and Nonprime Mortgages.” Office of Federal Housing Enterprise Oversight Working Paper 02-1 (March 2002). Roberto G. Quercia, Michael A. Stegman, Walter R. Davis, and Eric Stein. Community Reinvestment Lending: A Description and Contrast of Loan Products and Their Performance, a report prepared for the Joint Center for Housing Studies’ Symposium on Low-Income Homeownership as an Asset-Building Strategy, September 2000. Roberto G. Quercia, George W. McCarthy, and Michael A. Stegman. “Mortgage Default Among Rural, Low-Income Borrowers,” Journal of Housing Research vol. 6 no. 2 (1995). Stephen L. Ross. “Mortgage Lending, Sample Selection and Default,” Real Estate Economics, vol. 28 no. 4 (Winter 2000). Robert A. Van Order and Peter M. Zorn. The Performance of Low Income and Minority Mortgages: A Tale of Two Options, August 2001. Robert Van Order and Peter Zorn. Performance of Low-Income and Minority Mortgages, a report prepared for the Joint Center for Housing Studies’ Symposium on Low-Income Homeownership as an Asset-Building Strategy, September 2001. Robert Van Order and Peter Zorn. “Income, Location, and Default: Some Implications for Community Lending,” Real Estate Economics, vol. 28 no. 3 (2000). Economic Systems Inc., ORC Macro, and The Hay Group. Evaluation of VA’s Home Loan Guaranty Program: Final Report. A report prepared for the Department of Veterans Affairs. (July 2004). In addition to those individuals named above, Anne Cangi, Rudy Chatloss, Bert Japikse, Austin Kelly, Marc Molino, Andy Pauline, Roberto Piñero, and Mitch Rachlis made key contributions to this report.
The U.S. Department of Housing and Urban Development (HUD), through its Federal Housing Administration (FHA), insures billions of dollars in home mortgage loans made by private lenders. FHA insures low down payment loans, and a number of parties have made proposals to either eliminate or otherwise change FHA's borrower contribution requirements. GAO was asked to (1) identify the key characteristics of existing low and no down payment products, (2) review relevant literature on the importance of loan-to-value (LTV) ratios and credit scores to loan performance, (3) report on the performance of low and no down payment mortgages supported by FHA and others, and (4) identify lessons for FHA from others in terms of designing and implementing low and no down payment products. FHA and many other mortgage institutions offer a variety of low and no down payment products with requirements that vary in terms of eligibility, borrower investment, underwriting, and risk mitigation. While these products are similar, there are some important differences, including that FHA has lower loan limits, allows closing costs and the up-front insurance premium to be financed in the mortgage, and permits the down payment funds to come from nonprofits that receive funds from sellers. FHA also differs in that it does not require prepurchase counseling. A substantial amount of research GAO reviewed indicates that LTV ratio and credit score are among the most important factors when estimating the risk level associated with individual mortgages. GAO's analysis of the performance of low and no down payment mortgages supported by FHA and others corroborates key findings in the literature. Generally, mortgages with higher LTV ratios (smaller down payments) and lower credit scores are riskier than mortgages with lower LTV ratios and higher credit scores. 
Some practices of other mortgage institutions offer a framework that could help FHA manage the risks associated with introducing new products or making significant changes to existing products. Mortgage institutions may impose limits on the volume of the new products they will permit and on who can sell and service these products. FHA officials question the circumstances in which they can limit volumes for their products and believe they do not have sufficient resources to manage a product with limited volumes. Mortgage institutions sometimes require additional credit enhancements, such as higher insurance coverage; and sometimes require stricter underwriting, such as credit score thresholds, when introducing a new low or no down payment product. FHA is authorized to require an additional credit enhancement by sharing risk through co-insurance but does not currently use this authority. FHA has used stricter underwriting criteria but this has not included credit score thresholds.
Influenza pandemic—caused by a novel strain of influenza virus for which there is little resistance and which therefore is highly transmissible among humans—continues to be a real and significant threat facing the United States and the world. While some scientists and public health experts believe that the next influenza pandemic could be caused by a highly pathogenic strain of the H5N1 avian influenza virus (also known as “bird flu”) that is currently circulating in parts of Asia, Europe, and Africa, it is unknown when an influenza pandemic will occur, where it will begin, or whether an H5N1 virus or another strain would be the cause. Influenza pandemic poses a grave threat to global public health at a time when the United Nations’ World Health Organization (WHO) has said that infectious diseases are spreading faster than at any time in history. Influenza pandemics have spread worldwide within months, and a future pandemic is expected to spread even more quickly given modern travel patterns. Unlike incidents that are discretely bounded in space or time (e.g., most natural or man-made disasters), an influenza pandemic is not a singular event, but is likely to come in waves, each lasting weeks or months, and pass through communities of all sizes across the nation and the world simultaneously. While a pandemic will not directly damage physical infrastructure such as power lines or computer systems, it threatens the operation of critical systems by potentially removing the essential personnel needed to operate them from the workplace for weeks or months. In a severe pandemic, absences attributable to illnesses, the need to care for ill family members, and fear of infection may, according to the Centers for Disease Control and Prevention (CDC), reach a projected 40 percent during the peak weeks of a community outbreak, with lower rates of absence during the weeks before and after the peak. 
In addition, an influenza pandemic could result in 200,000 to 2 million deaths in the United States, depending on its severity. Beyond the profound human costs in terms of illnesses and deaths, the economic and societal repercussions of a pandemic could be significant. In its December 2005 report on possible macroeconomic effects and policy issues related to a potential influenza pandemic, the Congressional Budget Office (CBO) stated that a severe influenza pandemic, similar to the 1918-1919 pandemic, might cause a decline in U.S. gross domestic product of about 4.25 percent. CBO updated its report in July 2006 to include some estimates from medical experts that suggest that CBO may have initially underestimated the economic impact. The report also noted that these medical experts stressed the uncertainty about the exact characteristics of the potential virus and suggested that the worst-case scenario could be much worse than the severe scenario that CBO considered, especially if the H5N1 virus acquires the ability to spread efficiently among humans without losing its extreme virulence. In addition, in September 2008, the World Bank reported that a severe pandemic could cause a 4.8 percent drop in world economic activity, which would cost the world economy more than $3 trillion. WHO has developed six phases of pandemic alert, each divided into three periods, as a system of informing the world of the seriousness of the pandemic threat. As seen in figure 2, according to WHO the world is currently in Phase 3, in which a new influenza virus subtype is causing disease in humans but is not yet spreading efficiently and sustainably among humans. The Homeland Security Council (HSC) took an active approach to this potential disaster by, among other things, issuing the National Pandemic Strategy in November 2005, and the National Pandemic Implementation Plan in May 2006. 
The National Pandemic Strategy is intended to provide a high-level overview of the approach that the federal government will take to prepare for and respond to an influenza pandemic. It also provides expectations for nonfederal entities—including state, local, and tribal governments; the private sector; international partners; and individuals— to prepare themselves and their communities. The National Pandemic Implementation Plan is intended to lay out broad implementation requirements and responsibilities among the appropriate federal agencies and clearly define expectations for nonfederal entities. The National Pandemic Implementation Plan contains 324 action items related to these requirements, responsibilities, and expectations, most of which are to be completed before or by May 2009. HSC publicly reported on the status of the action items that were to be completed by 6 months, 1 year, and 2 years, in December 2006, July 2007, and October 2008, respectively. HSC indicated in its October 2008 progress report that 75 percent of the action items have been completed. As previously mentioned, we have ongoing work assessing the status of implementing this plan. Our prior work evaluating catastrophic event preparedness, response, and recovery has shown that in the event of a catastrophic disaster, the leadership roles, responsibilities, and lines of authority for the response at all levels must be clearly defined and effectively communicated to facilitate rapid and effective decision making, especially in preparing for and in the early hours and days after the event. However, federal government leadership roles and responsibilities for preparing for and responding to a pandemic continue to evolve and will require further clarification and testing before the relationships of the many leadership positions are well understood. Such clarity in leadership is even more crucial now given the change in administration and the associated transition of senior federal officials. 
Most of these federal leadership roles involve shared responsibilities between HHS and DHS, and it is not clear how these would work in practice. According to the National Pandemic Strategy and Plan, the Secretary of Health and Human Services is to lead the federal medical response to a pandemic, and the Secretary of Homeland Security will lead the overall domestic incident management and federal coordination. In addition, under the Post-Katrina Emergency Management Reform Act of 2006, the Administrator of the Federal Emergency Management Agency (FEMA) was designated as the principal domestic emergency management advisor to the President, the HSC, and the Secretary of Homeland Security, adding further complexity to the leadership structure in the case of a pandemic. To assist in planning and coordinating efforts to respond to a pandemic, in December 2006 the Secretary of Homeland Security predesignated a national Principal Federal Official (PFO) for influenza pandemic and established five pandemic regions each with a regional PFO and Federal Coordinating Officers (FCO) for influenza pandemic. PFOs are responsible for facilitating federal domestic incident planning and coordination, and FCOs are responsible for coordinating federal resources support in a presidentially-declared major disaster or emergency. However, the relationship of these roles to each other as well as with other leadership roles in a pandemic is unclear. Moreover, as we testified in July 2007, state and local first responders were still uncertain about the need for both FCOs and PFOs and how they would work together in disaster response. 
Accordingly, we recommended in our August 2007 report on federal leadership roles and the National Pandemic Strategy that DHS and HHS develop rigorous testing, training, and exercises for an influenza pandemic to ensure that federal leadership roles and responsibilities are clearly defined and understood and that leaders are able to effectively execute shared responsibilities to address emerging challenges. In response to our recommendation, HHS and DHS officials stated in January 2009 that several influenza pandemic exercises had been conducted since November 2007 that involved both agencies and other federal officials, but it is unclear whether these exercises rigorously tested federal leadership roles in a pandemic. With respect to control of an outbreak in poultry, which would be instrumental in reducing the risk of a human pandemic, both USDA and DHS may become involved, depending on the level of the outbreak. USDA is responsible for acting to prevent, control, and eradicate foreign animal diseases in domestic livestock and poultry, in coordination with a number of other entities, including states. The Secretary of Homeland Security assumes responsibility for coordinating the federal response under certain circumstances, such as an outbreak serious enough for the President to declare an emergency or a major disaster. In a June 2007 report on USDA’s planning for avian influenza, we found that USDA was not planning for DHS to assume the lead coordinating role if an outbreak among poultry occurred that was sufficient in scope to warrant these declarations. To address challenges that limit the national ability to quickly and effectively respond to highly pathogenic avian influenza, we recommended that the Secretaries of Agriculture and Homeland Security clarify their respective roles and how they will work together in the event of a declared presidential emergency or major disaster, and test the effectiveness of this coordination during exercises. 
Both USDA and DHS agreed that they should develop additional clarity and better define their coordination roles in these circumstances, and have taken preliminary steps to do so. For example, according to USDA and DHS officials, the two agencies meet on a regular basis to discuss such coordination issues. Roles and responsibilities for influenza pandemic preparedness can also be unclear within individual federal agencies. In two reports on DOD and its combatant commands’ pandemic preparedness efforts, we noted that while DOD and the combatant commands had taken numerous actions to prepare for a pandemic, roles and responsibilities for pandemic preparedness within the department and the commands had not been clearly defined or communicated. Our September 2006 report on DOD’s pandemic preparedness noted that neither the Secretary nor the Deputy Secretary of Defense had clearly and fully defined and communicated lead and supporting roles and responsibilities with clear lines of authority for DOD’s influenza pandemic planning, and we recommended that DOD do so. In response, DOD communicated departmentwide that the Deputy Secretary of Defense had designated the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs, working with the Assistant Secretary of Defense for Health Affairs, to lead DOD’s pandemic efforts. Similarly, in a June 2007 report, we recommended that DOD take steps to clarify U.S. Northern Command’s roles and responsibilities for pandemic planning and preparedness efforts. In response, DOD clarified U.S. Northern Command’s roles and responsibilities in guidance and plans. In addition to concerns about clarifying federal roles and responsibilities for a pandemic and how shared leadership roles would work in practice, private sector officials have told us that they are unclear about the respective roles and responsibilities of the federal and state governments during a pandemic emergency. 
The National Pandemic Implementation Plan states that in the event of an influenza pandemic, the distributed nature and sheer burden of the disease across the nation would mean that the federal government’s support to any particular community is likely to be limited, with the primary response to a pandemic coming from states and local communities. Further, federal and private sector representatives we interviewed at the time of our October 2007 report identified several key challenges they face in coordinating federal and private sector efforts to protect the nation’s critical infrastructure in the event of an influenza pandemic. One of these was a lack of clarity regarding the roles and responsibilities of federal and state governments on issues such as state border closures and influenza pandemic vaccine distribution. 

Coordination Mechanisms 

Mechanisms and networks for collaboration and coordination on pandemic preparedness between federal and state governments and the private sector exist, but they could be better utilized. In some instances, the federal and private sectors are working together through a set of coordinating councils, including sector-specific and cross-sector councils. To help protect the nation’s critical infrastructure, DHS created these coordinating councils as the primary means of coordinating government and private sector efforts for industry sectors such as energy, food and agriculture, telecommunications, transportation, and water. Our October 2007 report found that DHS has used these critical infrastructure coordinating councils primarily to share pandemic information across sectors and government levels rather than to address many of the challenges identified by sector representatives, such as clarifying the roles and responsibilities between federal and state governments. 
We recommended in the October 2007 report that DHS encourage the councils to consider and address the range of coordination challenges in a potential influenza pandemic between the public and private sectors for critical infrastructure. DHS concurred with our recommendation, and DHS officials informed us in February 2009 that the department is working on initiatives to address it, such as developing pandemic contingency plan guidance tailored to each of the critical infrastructure sectors and holding a series of “webinars” with a number of the sectors. Federal executive boards (FEB) bring together federal agency and community leaders in major metropolitan areas outside of Washington, D.C., to discuss issues of common interest, including an influenza pandemic. The Office of Personnel Management (OPM), which provides direction to the FEBs, and the FEBs have designated emergency preparedness, security, and safety as an FEB core function. The FEBs’ emergency support role, with its regional focus, may make the boards a valuable asset in pandemic preparedness and response. As a natural outgrowth of their general civic activities and through activities such as hosting emergency preparedness training, some of the boards have established relationships with, for example, federal, state, and local governments; emergency management officials; first responders; and health officials in their communities. In a May 2007 report on the FEBs’ ability to contribute to emergency operations, we found that many of the selected FEBs included in our review were building capacity for influenza pandemic response within their member agencies and community organizations by hosting influenza pandemic training and exercises. We recommended that, since FEBs are well positioned within local communities to bring together federal agency and community leaders, the Director of OPM work with FEMA to formally define the FEBs’ role in emergency planning and response. 
As a result of our recommendation, FEBs were included in the National Response Framework (NRF) in January 2008 as one of the regional support structures that have the potential to contribute to development of situational awareness during an emergency. OPM and FEMA also signed a memorandum of understanding in August 2008 in which FEBs and FEMA agreed to work collaboratively in carrying out their respective roles in the promotion of the national emergency response system. International disease surveillance and detection efforts serve to address the threat posed by infectious diseases, such as an influenza pandemic, before they develop into widespread outbreaks. Such efforts also provide national and international public health authorities with information for planning and managing efforts to control diseases such as an influenza pandemic. However, as we have reported in the past, domestic and international disease surveillance efforts need improvement. For example, some state public health departments’ initiatives to enhance disease reporting have been incomplete, and there is a need for national standards and interoperability in information collection and sharing to detect outbreaks. Globally, in December 2007 we reported that the United States and its international partners are involved in efforts to improve global influenza surveillance, including diagnostic capabilities, so that pandemic strains can be quickly detected. Yet, international capacity for influenza surveillance still has many weaknesses, particularly in developing countries. For example, some countries experiencing H5N1 human influenza outbreaks, like Indonesia, had at times not promptly shared human virus samples with the international community, thus further weakening international surveillance efforts. Efforts are also being made both within the United States and internationally to improve surveillance and detection for highly pathogenic avian influenza. 
As stated earlier, controlling an outbreak in poultry would be instrumental in reducing the risk of a human pandemic. Within the United States, USDA is taking many important measures to help the nation prepare for outbreaks of highly pathogenic avian influenza. In a June 2007 report on avian influenza, we stated that USDA had developed several surveillance programs to detect highly pathogenic avian influenza, including a long-standing voluntary program that systematically tests samples of birds from participating poultry operators’ flocks for the virus. We also stated that USDA’s Animal and Plant Health Inspection Service (APHIS) is working with the Department of the Interior, state wildlife agencies, and others to increase surveillance of wild birds in Alaska and the 48 contiguous states, in addition to working with states and industry to conduct surveillance of birds at auctions, swap meets, flea markets, and public exhibitions. APHIS has also formed the National Avian Influenza Surveillance System, designed to link existing avian influenza surveillance data from USDA, other federal and state agencies, and industry. However, in the United States, federal and state officials generally do not know the numbers and locations of backyard birds, so controlling an outbreak of highly pathogenic avian influenza among these birds remains particularly difficult. We recommended that the Secretary of Agriculture work with states to determine how to overcome potential problems associated with unresolved issues, such as the difficulty in locating backyard birds and disposing of carcasses and materials. USDA agreed with our recommendation, and efforts are under way. For example, according to USDA officials, the agency has developed online tools to help states make effective decisions about carcass disposal. 
In addition, USDA has created a secure Internet site that contains draft guidance for disease response, including highly pathogenic avian influenza, and it includes a discussion of many of the unresolved issues. International surveillance networks for influenza in birds and other animals have also been established, and efforts are under way to improve data sharing among scientists. However, global surveillance of the disease among domestic animals has serious shortfalls. The World Organisation for Animal Health (OIE) and the Food and Agriculture Organization (FAO) collaborate to obtain and confirm information on suspected highly pathogenic H5N1 cases. According to the October 2008 report by the UNSIC and the World Bank on the state of pandemic readiness, data obtained from national authorities indicate that 75 percent of countries report having a surveillance system that is operational and capable of detecting highly pathogenic avian influenza. In addition, estimates of risk for disease transmission from one country to another, as well as among regions within countries, are difficult to make because of uncertainties about how factors such as trade in poultry and other birds and wild bird migration affect the movement of the disease. Assessments by U.S. agencies and international organizations identified widespread risks of the emergence of an influenza pandemic, and the United States identified priority countries for assistance. Our June 2007 report on international efforts to assess and respond to influenza pandemic risk noted that the bulk of U.S. and other donors’ country-specific commitments had been made to countries that the United States had designated as priorities, with funding concentrated among certain of these countries. 
We reported that through 2006, the United States had committed about $377 million to improve global preparedness for avian and pandemic influenza, or 27 percent of the $1.4 billion committed by all donors, the greatest share of any single donor. Since we issued our June 2007 report, the UNSIC and the World Bank reported that as of April 2008, the United States had committed $629 million, approximately 31 percent of the $2.05 billion committed by all donors, for avian and pandemic influenza efforts. Figure 3 shows the distribution of committed global and U.S. funding across major recipient countries as of December 2006. Of the top 15 recipients of committed international funds, 11 were U.S. priority countries. More recent data on U.S. funding patterns show a similar focus on certain countries, with Indonesia the largest recipient, followed by Vietnam and Cambodia. However, we reported that gaps in available information from other countries limited the capacity for comprehensive, well-informed comparisons of risk levels by country. For example, in 2007 we reported that the United States Agency for International Development’s (USAID) environmental risk assessment of areas at greatest risk for avian influenza outbreaks reflected a limited understanding of the role of poultry trade or wild birds. USAID, the Department of State, and the United Nations had also gathered information that was not sufficiently detailed or complete to permit well-informed country comparisons. Despite these limitations, the HSC has used available information to designate priority countries for assistance. The UNSIC and the World Bank stated in the 2008 report that reports from national authorities responding to a UNSIC survey indicate that 68 percent of countries had conducted a risk assessment. 
As we previously reported in June 2007, adopting a risk management approach can help manage the uncertainties in an influenza pandemic and identify the most appropriate course of action. However, the FAO’s detailed evaluation concluded that very few countries have a surveillance plan that is based on an “elaborated” risk analysis. By their very nature, catastrophic events involve extraordinary levels of mass casualties, damage, or disruption that can overwhelm state and local responders—making sound planning for catastrophic events crucial. Strong advance planning, both within and among federal, state, and local governments and other organizations, as well as robust training and exercise programs to test these plans in advance of a real disaster, are essential to best position the nation to prepare for, respond to, and recover from major catastrophes such as an influenza pandemic. Capabilities are built upon the appropriate combination of people, skills, processes, and assets. Ensuring that needed capabilities are available requires effective planning and coordination, as well as training and exercises in which the capabilities are realistically tested, problems are identified, and lessons learned are subsequently addressed in partnership with other federal, state, and local stakeholders. We have also noted that an incomplete understanding of roles and responsibilities under the National Response Plan has often led to misunderstandings, problems, and delays—an area where training could be helpful. Key officials must actively and personally participate so that they are better prepared to deal with real-life situations. In addition, as we previously reported on the federal response to Hurricane Katrina, lessons learned from exercises must be incorporated and used to improve emergency plans. In addition to the United States, a number of other countries have developed pandemic plans, as have state and local governments and the private sector. 
We reported in June 2007 that the U.S. government has worked with its international partners to develop an overall global strategy that is compatible with the U.S. approach. These efforts included the appointment of a UNSIC and periodic global conferences to review progress and refine the strategy. Other countries, including Belgium, Japan, Sweden, and the United Kingdom, have developed influenza pandemic plans and frameworks. In July 2006, Belgium issued the Belgian pandemic flu preparedness plan, which provides basic information on various topics such as leadership, antivirals, vaccines, surveillance, logistics, and public communication. Similar to Belgium’s pandemic plan, Japan used WHO’s six influenza pandemic phases in drafting government policies and response efforts in its Pandemic Influenza Preparedness Action Plan of the Japanese Government, issued in November 2005. Sweden’s National Audit Office reported in its February 2008 audit that Sweden’s Preparedness planning for pandemic influenza – National Actions is focused only on infection control services and the health sector and does not cover the rest of society. To address this, the government of Sweden agreed to further develop its plan by March 2010. Further, Sweden’s National Audit Office found that there is very limited knowledge of the extent to which municipalities can provide essential services in the event of an influenza pandemic. Within the United Kingdom, the government issued The National Framework for Responding to an Influenza Pandemic and the Scottish National Framework for Responding to an Influenza Pandemic in November 2007 and March 2007, respectively. Both frameworks provide information and guidance to assist and support public and private organizations across all sectors in understanding the nature of the challenges and in making the appropriate preparations for an influenza pandemic. 
According to a UNSIC global survey, 141 countries, or 97 percent of those that responded, have pandemic preparedness plans. However, further analysis conducted by the UNSIC’s Pandemic Influenza Contingency Team and other institutions suggested that the quality and comprehensiveness of these plans continue to vary significantly among countries. UNSIC and the World Bank also found that there had been a moderate increase in the number of countries that have undertaken simulation exercises. Specifically, where testing has occurred, 25 percent of respondents (37 of 145 countries) reported that testing took place at both the national and local levels. In addition, 37 percent of respondents (45 of 120 countries) have incorporated the lessons learned from simulations into plan revisions. In our August 2007 report on the National Pandemic Strategy and Implementation Plan, we found that while these documents are an important first step in guiding national preparedness, they do not fully address all six characteristics of an effective national strategy, as identified in our work. The documents fully address only one of the six characteristics, by reflecting a clear description and understanding of the problems to be addressed. Further, the National Pandemic Strategy and Implementation Plan do not address one characteristic at all: they contain no discussion of what the strategy will cost, where resources will be targeted to achieve the maximum benefits, and how it will balance benefits, risks, and costs. Moreover, the documents do not provide a picture of priorities or how adjustments might be made in view of resource constraints. Although the remaining four characteristics are partially addressed, important gaps exist that could hinder the ability of key stakeholders to effectively execute their responsibilities. 
For example, state and local jurisdictions that will play crucial roles in preparing for and responding to a pandemic were not directly involved in developing the National Pandemic Implementation Plan, even though it relies on these stakeholders’ efforts. Stakeholder involvement during the planning process is important to ensure that the federal government’s and nonfederal entities’ responsibilities are clearly understood and agreed upon. Further, relationships and priorities among actions were not clearly described, performance measures were not always linked to results, and insufficient information was provided about how the documents are integrated with other response-related plans, such as the NRF. We recommended that the HSC establish a process for updating the National Pandemic Implementation Plan and that the updated plan address these and other gaps. HSC did not comment on our recommendation and has not indicated whether it plans to implement it. Concerning federal government planning for an outbreak in animals, we reported in 2007 that although USDA had taken important steps to prepare for outbreaks of highly pathogenic avian influenza, there were still gaps in its planning. We noted that USDA was drafting response plans for highly pathogenic avian influenza and was also working with the HSC and other key federal agencies to produce an “interagency playbook” intended to clarify how primary federal responders would initially interact in responding to six scenarios of detection of highly pathogenic H5N1. USDA had also begun preliminary exercises to test aspects of these plans with federal, state, local, and industry partners. However, USDA response plans did not identify the capabilities needed to carry out the tasks associated with an outbreak scenario—that is, the entities responsible for carrying them out, the resources needed, and the source of those resources. 
To address these gaps, we recommended that the Secretary of Agriculture identify these capabilities; use this information to develop a response plan that identifies the critical tasks for responding to the selected outbreak scenario and, for each task, the responsible entities, the location of resources needed, time frames, and completion status; and test these capabilities in ongoing exercises to identify gaps and ways to overcome them. USDA concurred, and officials told us that the agency has created a draft preparedness and response plan that identifies federal, state, and local actions, timelines, and responsibilities for responding to highly pathogenic avian influenza, but the plan has not yet been issued. At the state and local levels, we reported in June 2008 that, according to CDC, all 50 states and the 3 localities that received federal pandemic funds have developed influenza pandemic plans and conducted pandemic exercises in accordance with federal funding guidance. All of the 10 localities that we reviewed had also developed plans and conducted exercises. Further, all of the 10 localities and the five states that we reviewed had incorporated lessons learned from pandemic exercises into their planning. However, an HHS-led interagency assessment of states’ plans found, on average, that states had “many major gaps” in their influenza pandemic plans in 16 of 22 priority areas, such as school closure policies and community containment, which are community-level interventions designed to reduce the transmission of a pandemic virus. The remaining 6 priority areas were rated as having “a few major gaps.” Since we issued our report in June 2008, HHS led another interagency assessment of state influenza pandemic plans. HHS reported in January 2009 that, based on this assessment, although states have made important progress toward preparing to combat an influenza pandemic, most states still have major gaps in their pandemic plans. 
As we reported in June 2008, HHS, in coordination with DHS and other federal agencies, had convened a series of regional workshops for states in five influenza pandemic regions across the country. Because these workshops could be a useful model for sharing information and building relationships, we recommended that HHS and DHS, in coordination with other federal agencies, convene additional meetings with states to address the gaps in the states’ pandemic plans. HHS and DHS generally concurred with our recommendation but have not yet held these additional meetings. HHS and DHS recently indicated that while no additional meetings are planned at this time, states will have to continuously update their pandemic plans and submit them for review. We have also reported on the need for more guidance from the federal government to help states and localities in their planning. In June 2008, we reported that although the federal government has provided a variety of guidance, officials of the states and localities we reviewed told us that they would welcome additional federal guidance in a number of areas, such as community containment, to help them better plan and exercise for an influenza pandemic. State and local officials have identified similar concerns. An October 2007 Kansas City Auditor’s Office report on influenza pandemic preparedness in the city noted that Kansas City Health Department officials would like the federal government to provide additional guidance on some of the same issues we found, including clarifying community interventions such as school closings. In addition, according to the National Governors Association’s (NGA) September 2008 issue brief on states’ pandemic preparedness, states are concerned about a wide range of school-related issues, including when to close schools or dismiss students, how to maintain curriculum continuity during closures, and how to identify the appropriate time at which classes could resume. 
In addition, NGA reported that states generally have very little awareness of the status of disease outbreaks, either in real time or in near real time, to allow them to know precisely when to recommend a school closure or reopening in a particular area. NGA reported that states wanted more guidance in the following areas: (1) workforce policies for the health care, public safety, and private sectors; (2) schools; (3) situational awareness such as information on the arrival or departure of a disease in a particular state, county, or community; (4) public involvement; and (5) public-private sector engagement. The private sector has also been planning for an influenza pandemic, but many challenges remain. To better protect critical infrastructure, federal agencies and the private sector have worked together across a number of sectors to plan for a pandemic, including developing general pandemic preparedness guidance, such as checklists for continuity of business operations during a pandemic. However, federal and private sector representatives have acknowledged that sustaining preparedness and readiness efforts for an influenza pandemic is a major challenge, primarily because of the uncertainty associated with a pandemic, limited financial and human resources, and the need to balance pandemic preparedness with other, more immediate, priorities, such as responding to outbreaks of foodborne illnesses in the food sector and, now, the effects of the financial crisis. In our March 2007 report on preparedness for an influenza pandemic in one of these critical infrastructure sectors—financial markets—we found that despite significant progress in preparing markets to withstand potential disease pandemics, securities and banking regulators could take additional steps to improve the readiness of the securities markets. 
Although the seven organizations that we reviewed—which included exchanges, clearing organizations, and payment-system processors—were working on planning and preparation efforts to reduce the likelihood that a worldwide influenza pandemic would disrupt their critical operations, only one of the seven had completed a formal plan. To increase the likelihood that the securities markets will be able to function during a pandemic, we recommended that the Chairman, Federal Reserve; the Comptroller of the Currency; and the Chairman, Securities and Exchange Commission (SEC), consider taking additional actions to ensure that market participants adequately prepare for a pandemic outbreak. In response to our recommendation, the Federal Reserve and the Office of the Comptroller of the Currency, in conjunction with the Federal Financial Institutions Examination Council, and the SEC directed all banking organizations under their supervision to ensure that the pandemic plans the financial institutions have in place are adequate to maintain critical operations during a severe outbreak. SEC issued similar requirements to the major securities industry market organizations. Improving the nation’s response capability for catastrophic disasters, such as an influenza pandemic, is essential. Following a mass casualty event, health care systems would need the ability to adequately care for a large number of injured or ill patients, as well as for patients with unusual or highly specialized medical needs. The ability of local or regional health care systems to deliver services consistent with established standards of care could be compromised, at least in the short term, because the volume of patients would far exceed the available hospital beds, medical personnel, pharmaceuticals, equipment, and supplies. Providing such care would require the allocation of scarce resources. 
In contrast to discrete events such as hurricanes and most terrorist attacks, the widespread and iterative nature of a pandemic—likely to occur in waves as it spreads simultaneously through different communities and regions—presents continuing challenges in preparing for a medical surge in a mass casualty event such as a pandemic. Under such conditions, emergency management approaches that have been used in the past to increase capacity when responding to other types of disasters, such as assistance from other states or the deployment of military resources, may not be viable options since these groups may need to hold onto resources in order to meet their own needs should they be affected by the disease. We reported in June 2007 that state officials informed us that the Emergency Management Assistance Compact (EMAC), a collaborative arrangement among member states that provides a legal framework for requesting resources and that has been used in emergencies such as Hurricane Katrina, would not work in an influenza pandemic. State officials reported their reluctance to send personnel into an infected area, expressed their concern that resources would not be available, and believed that personnel would be reluctant to volunteer to go to another state. Further, NGA reported in its September 2008 issue brief on state pandemic preparedness that EMAC is seen as unreliable during a pandemic because states would likely be unwilling to share scarce resources or deploy personnel into a location where the disease is active and thus expose those individuals to a high-risk environment. HHS estimates that in a severe influenza pandemic, almost 10 million people would require hospitalization, which would exceed the current capacity of U.S. hospitals and necessitate difficult choices regarding rationing of resources. HHS also estimates that almost 1.5 million of these people would require care in an intensive care unit and about 740,000 people would require mechanical ventilation. 
In our September 2008 report on HHS’s influenza pandemic planning efforts, we reported that although HHS has initiated efforts to improve the surge capacity of health care providers, these efforts will be challenged during a severe pandemic because of the widespread nature of such an event, the existing shortages of health care providers, and the potentially high absentee rate among providers. Given the uncertain effectiveness of efforts to increase surge capacity, HHS has developed guidance to assist health care facilities in planning for altered standards of care; that is, for providing care while allocating scarce equipment, supplies, and personnel in a way that saves the largest number of lives in mass casualty events. As we reported in June 2008, 7 of 20 states reviewed had adopted or were drafting altered standards of care for specific medical issues. Three of the 7 states had adopted some altered standards of care guidelines. We also found that 18 of the 20 states reviewed were selecting alternate care sites, which deliver medical care outside of a hospital setting for patients who would normally be treated as inpatients. In addition, we reported that the federal government has provided funding, guidance, and other assistance to help states prepare for medical surge in a mass casualty event, such as an influenza pandemic. This guidance includes Reopening Shuttered Hospitals to Expand Surge Capacity, which contains a checklist that states can use to identify entities that could provide additional resources; other federal assistance has included conferences and electronic bulletin boards for states to use in preparing for medical surge. 
Some state officials reported, however, that they had not begun work on altered standards of care guidelines, or had not completed drafting them, because of the difficulty of addressing the medical, ethical, and legal issues involved. We recommended that HHS serve as a clearinghouse for sharing among the states altered standards of care guidelines developed by individual states or medical experts. HHS did not comment on the recommendation and has not indicated whether it plans to implement it. Further, in our June 2008 report on state and local planning and exercising efforts for an influenza pandemic, we found that state and local officials wanted federal influenza pandemic guidance on facilitating medical surge, which was also one of the areas that the HHS-led assessment rated as having “many major gaps” nationally among states’ influenza pandemic plans. In fiscal year 2006, Congress appropriated $5.62 billion in supplemental funding to HHS for, among other things, (1) monitoring disease spread to support rapid response, (2) developing vaccines and vaccine production capacity, (3) stockpiling antivirals and other countermeasures, (4) upgrading state and local capacity, and (5) upgrading laboratories and research at CDC. Figure 4 shows that the majority of this supplemental funding—about 77 percent—was allocated for developing antivirals and vaccines for a pandemic and purchasing medical supplies. Also, a portion of the funding for state and local preparedness—$170 million—was allocated to subsidize states’ antiviral purchases for their own stockpiles. According to HHS’s Pandemic Influenza Implementation Plan, HHS seeks to ensure the availability of antiviral treatment courses for at least 25 percent of the U.S. population, or at least 81 million treatment courses. As of May 2008, HHS and the states together had stockpiled a total of 72 million treatment courses. 
Specifically, HHS had stockpiled 44 million courses of antivirals for treatment in the HHS-managed Strategic National Stockpile, a national repository of medical supplies designed to supplement state and local stockpiles in the event of a public health emergency, and had reserved an additional 6 million courses of its federally stockpiled antivirals for containment of an initial outbreak. HHS also subsidized the purchase of 31 million treatment courses by state and local jurisdictions for storage in their own stockpiles, of which 22 million treatment courses had been stockpiled. In our December 2007 report on using antivirals and vaccines to forestall a pandemic, we found that the availability of antivirals and vaccines in a pandemic could be inadequate to meet demand because of limited production, distribution, and administration capacity. As we reported, WHO estimated that forestalling a pandemic would require enough treatment courses for 25 percent of the population, plus enough preventive courses to last 20 days for the remaining 75 percent of the population in the outbreak contamination zone. Further, because of the time required to detect the virus and to develop and manufacture a targeted vaccine, pandemic vaccines are likely to play little or no role in efforts to stop or contain a pandemic, at least in its initial phases. According to a September 2008 CBO report on U.S. policy regarding pandemic vaccines, if an influenza pandemic were to occur today, it would be impossible to vaccinate the entire population of about 300 million people within the following 6 months because current domestic production capacity would be completely inadequate. 
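The stockpile and coverage figures above lend themselves to a quick arithmetic cross-check. The sketch below is purely illustrative: the course counts are taken from the report, while the planning population implied by HHS's 81-million-course goal is our inference, not a figure the report states.

```python
# Illustrative cross-check of the antiviral stockpile figures reported above.
# Course counts are taken from the report; the implied population is an
# inference from HHS's goal (25 percent of the U.S. population, i.e., at
# least 81 million treatment courses), not a figure the report states.

TARGET_COURSES = 81_000_000           # HHS antiviral coverage goal

# Stockpiled as of May 2008 (per the report):
federal_treatment = 44_000_000        # Strategic National Stockpile, treatment
federal_containment = 6_000_000       # reserved for containing an initial outbreak
state_stockpiled = 22_000_000         # of the 31M subsidized state purchases

total_stockpiled = federal_treatment + federal_containment + state_stockpiled
shortfall = TARGET_COURSES - total_stockpiled
implied_population = TARGET_COURSES / 0.25   # population consistent with the goal

print(f"Stockpiled: {total_stockpiled:,} courses")        # 72,000,000
print(f"Shortfall vs. goal: {shortfall:,} courses")       # 9,000,000
print(f"Implied planning population: {implied_population:,.0f}")  # 324,000,000
```

The 72 million courses reported as stockpiled thus fall about 9 million courses short of the stated goal, consistent with the report's characterization of the stockpile as a work in progress.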
The United States, its international partners, and the pharmaceutical industry are investing substantial resources to address constraints on the availability and effectiveness of antivirals and vaccines, but some of these efforts face limitations. We reported in September 2008 that HHS was making large investments in domestic vaccine manufacturing capacity by supporting vaccine research with contracts that require manufacturers to establish vaccine-producing facilities within U.S. borders. Through these contracts, one U.S. facility has expanded its manufacturing capacity and a second facility was recently established in the United States. Further, according to a January 2009 report by HHS, the department awarded $120 million to vaccine manufacturers to retrofit their existing U.S. facilities for egg-based vaccines, while also planning to build domestic cell-based vaccine production facilities by awarding approximately $500 million in contracts in fiscal year 2009. CBO also reported that HHS is not only encouraging the expansion and refurbishing of existing facilities but also funding the development of new adjuvants, substances that can be added to influenza vaccines to reduce the amount of active ingredient (also called antigen) needed per dose. By using adjuvants in egg-based and cell-based vaccines, domestic manufacturers could produce more doses in existing facilities, meaning that fewer new facilities would be needed to manufacture cell-based formulations and smaller stockpiles could protect a larger population. However, increasing production capacity for vaccines and antivirals will take several years, as new facilities are built and necessary materials are acquired. 
Also, weaknesses in the international influenza surveillance system impede the detection of new strains, which could limit the ability to promptly develop and administer effective antivirals and vaccines to treat and prevent infection and thereby slow a pandemic’s spread. The delayed use of antivirals and the emergence of antiviral resistance in influenza strains could also limit antivirals’ effectiveness. In addition, limited support for clinical trials could hinder efforts to improve understanding of how antivirals and vaccines would perform against a pandemic strain. In light of the anticipated limitation in vaccine supply, HHS released guidance on prioritizing target groups for a pandemic vaccine. Because of the uncertainties surrounding the availability of a pandemic vaccine, in September 2008 we recommended that the Secretary of Health and Human Services expeditiously finalize guidance to assist state and local jurisdictions in determining how to effectively use limited supplies of antivirals and of pre-pandemic vaccine, which is developed before an outbreak using strains that have the potential to cause an influenza pandemic. In December 2008, HHS released final guidance on antiviral drug use during an influenza pandemic. In addition, HHS officials informed us in February 2009 that the department is drafting guidance on pre-pandemic influenza vaccination. Beyond antiviral and vaccine stockpiles for the general population, our June 2007 report on avian influenza planning concluded that USDA had significant gaps in its planning for providing antivirals to individuals responsible for responding to an outbreak of highly pathogenic avian influenza. USDA has coordinated with DHS and other federal agencies to create a National Veterinary Stockpile. 
This stockpile is intended to be the nation’s repository of animal vaccines, personal protective equipment, and other critical veterinary products for responding to the most dangerous foreign animal diseases, including highly pathogenic avian influenza. However, at the time of the report, USDA had not yet estimated the amount of antiviral medication that it would need in the event of a highly pathogenic avian influenza outbreak or resolved how to provide such supplies within the first 24 hours of an outbreak. According to Occupational Safety and Health Administration guidelines, poultry workers responding to an outbreak of highly pathogenic avian influenza should take antiviral medication daily. Further, although the National Veterinary Stockpile is required to contain sufficient antiviral medication to respond to the most damaging animal diseases affecting human health and the economy, it has not yet obtained any antiviral medication for highly pathogenic avian influenza. Moreover, HHS officials told National Veterinary Stockpile officials that the antiviral medication in the Strategic National Stockpile was reserved only for use during a human pandemic. We therefore recommended that the Secretary of Agriculture determine the amount of antiviral medication that USDA would need to protect animal health responders, given various highly pathogenic avian influenza scenarios, and determine how to obtain and provide supplies within 24 hours of an outbreak. In commenting on our recommendation, USDA officials told us that the National Veterinary Stockpile now contains enough antiviral medication to protect 3,000 animal health responders for 40 days. However, USDA officials told us that they have yet to determine the number of individuals who would need medicine, based on a calculation of those exposed to the virus under a specific scenario. 
Further, USDA officials said that a contract for additional medication for the stockpile has not yet been secured; such a contract would better ensure that medications are available in the event of an outbreak of highly pathogenic avian influenza. Our work evaluating preparedness for, response to, and recovery from public health emergencies and natural catastrophes has shown that insufficient collaboration among federal, state, and local governments created challenges for sharing public health information and developing interoperable communications for first responders. In 2005, we designated establishing appropriate and effective information-sharing mechanisms to improve homeland security as a high-risk area. Over the past several years, we have identified potential information-sharing barriers, critical success factors, and other key management issues that should be considered to facilitate information sharing among government entities and the private sector. Effective risk communication is also essential: citizens should be given an accurate portrayal of risk, without overstating the threat or providing false assurances of security. Risk communication principles have been used in a variety of public warning contexts, from alerting the public to severe weather to less commonplace warnings of infectious disease outbreaks. In general, these principles seek to maximize public safety by ensuring that the public has sufficient information to determine what actions to take to prevent or respond to emergencies. Appropriately warning the public of threats can help save lives and reduce the costs of disasters. Federal, state, and local officials and risk management experts who participated in an April 2008 Comptroller General’s forum on strengthening the use of risk management principles in homeland security identified and ranked the challenges in applying these principles. Improving risk communication to the public was one of the top three challenges identified by the forum participants. 
Our prior work identified several instances in which risk communication proved less than effective. For example, during the 2004-2005 flu season, demand for the flu vaccine exceeded supply, and information about future vaccine availability was uncertain (as could happen in a future pandemic). Although CDC communicated regularly through a variety of media as the situation evolved, state and local officials identified several communications lessons: the need for consistency among federal, state, and local communications; the importance of using diverse media to reach different audiences; and the importance of disseminating clear, updated information when responding to changing circumstances. Another example, from our October 1999 report on DOD’s anthrax vaccine immunization program, illustrated the importance of providing accurate and sufficient information to personnel. Although DOD and the military services used a variety of measures to educate military personnel about the program, military personnel wanted more information, and over one-half of the respondents who participated in our survey said that the information they received was less than moderately helpful or that they did not receive any information. The National Pandemic Implementation Plan emphasizes that government and public health officials must communicate clearly and continuously with the public throughout a pandemic. The plan recognizes that timely, accurate, credible, and coordinated messages will be necessary. The federal government has undertaken a number of communications efforts to provide information on a possible pandemic and how to prepare for it. HHS (including CDC), DHS, and other federal agencies have provided a variety of influenza pandemic information and guidance for states and local communities through Web sites and meetings with states. 
These efforts included establishing an influenza pandemic Web site (www.pandemicflu.gov); including pandemic information on another Web site, the Lessons Learned Information Sharing System (LLIS) (www.llis.dhs.gov), a national network of lessons learned and best practices for emergency responders and homeland security officials; sponsoring pandemic summits with all 50 states; disseminating pandemic preparedness checklists for workplaces, individuals and families, schools, health care, community organizations, and state and local governments; and providing additional guidance for the public, such as on pandemic vaccine targeting and allocation and on pre-pandemic community planning. Established coordination networks are also being used to provide information to state and local governments and to the private sector about pandemic planning and preparedness. For example, the FEBs are charged with providing timely and relevant information to support emergency preparedness and response coordination, and OPM expects the boards to establish notification networks and communications plans to be used in emergency and nonemergency situations. The boards are also expected to disseminate emergency preparedness information received from OPM and other agencies and to relay local emergency situation information to parties such as OPM, FEB members, the media, and state and local government authorities. FEB representatives generally viewed the boards as an important communications link between Washington and the field and among field agencies. Each of the selected boards we reviewed reported conducting communications activities as a key part of its emergency support service. 
In addition, critical infrastructure coordinating councils have also been used, primarily as a means to share information and to develop pandemic-specific guidance across industry sectors, such as banking and finance and telecommunications, and across levels of government. However, as noted earlier, state and local officials from all of the states and localities we interviewed wanted additional federal influenza pandemic guidance on specific topics, such as implementing community interventions, fatality management, and facilitating medical surge. Although the federal government has issued some guidance, it may not have reached state and local officials or may not have addressed the particular concerns or circumstances of the officials we interviewed. In addition, private sector officials told us that they would like clarification of the respective roles and responsibilities of the federal and state governments during an influenza pandemic emergency, such as in state border closures and influenza pandemic vaccine distribution. As indicated earlier, in August 2007 we reported that although the National Pandemic Strategy and Implementation Plan identified the overarching goals and objectives for pandemic planning, the documents had some gaps. Most of the implementation plan’s performance measures consist of actions to be completed, such as disseminating guidance, but the measures are not always clearly linked with intended results. This lack of clear linkages makes it difficult to ascertain whether progress has in fact been made toward achieving the national goals and objectives described in the National Pandemic Strategy and Implementation Plan. Without a clear linkage to anticipated results, these measures of activities do not indicate whether the purpose of an activity has been achieved. 
For example, most of the action items’ performance measures consist of actions to be completed, such as guidance developed and disseminated. Further, 18 of the action items have no measure of performance associated with them. In addition, the National Pandemic Implementation Plan does not establish priorities among its 324 action items, which becomes especially important as agencies and other parties strive to manage scarce resources effectively and ensure that the most important steps are accomplished. This is further complicated by the lack of a description of the financial resources needed to implement the action items, which is one of six characteristics of an effective national strategy. We also found that some action items, particularly those that are to be completed by state, local, and tribal governments or the private sector, do not identify an entity responsible for carrying out the action. Although the plan specifies actions to be carried out by states, local jurisdictions, and other entities, including the private sector, it gives no indication of how these actions will be monitored, how their completion will be ensured, or who will be responsible for making sure that they are completed. Also, it appears that HSC has not accurately assessed the completeness of all of the action items: several action items that the HSC reported as completed were still in progress. For example, our June 2007 report on U.S. agencies’ international efforts to forestall an influenza pandemic found that eight of the plan’s international-related action items included in the HSC’s progress report as completed either did not directly address the associated performance measure or did not indicate that the completion deadline had been met. As stated earlier, we are currently assessing the implementation of the plan. 
We have also reported that, although DOD instituted reporting requirements for its components responsible for implementing the 31 action items tasked to DOD in the National Pandemic Implementation Plan, there were no similar oversight mechanisms in place for pandemic-related tasks that were not specifically part of the National Plan. For example, DOD did not require its components to report on the development or revision of their continuity of operations plans in preparation for an influenza pandemic. Over time, a lack of clear lines of authority, oversight mechanisms, and goals and performance measures could hamper the leadership’s ability to ensure that planning efforts across the department are progressing as intended as DOD continues its influenza pandemic planning and preparedness efforts. Additionally, without clear departmentwide goals, it would be difficult for all DOD components to develop effective plans and guidance. In response to our recommendation, DOD designated an official to lead DOD’s pandemic efforts, established a Pandemic Influenza Task Force, and communicated this information throughout the department. DOD also assigned responsibility to the U.S. Northern Command for directing, planning, and synchronizing DOD’s global response to an influenza pandemic and disseminated this information throughout the department. There have been other instances in which performance and accountability have been strengthened. The FEBs have recently established performance measures for their emergency support role. In our May 2007 report, we recommended that OPM continue its efforts to establish performance measures and accountability for the emergency support responsibilities of the FEBs before, during, and after an emergency event that affects the federal workforce outside Washington, D.C. 
In response to our recommendation, the FEB strategic plan for fiscal years 2008 through 2012 includes operational goals with associated measures for its emergency preparedness, security, and employee safety line of business. The data intended to support these measures include stakeholder and participant surveys, participant lists, and emergency preparedness test results. In providing funding to states and certain localities to help them prepare for a pandemic, HHS has instituted a number of accountability requirements. As described above, HHS received $5.62 billion in supplemental appropriations specifically available for pandemic influenza-related purposes in fiscal year 2006. As shown in figure 4, a total of $770 million, or about 14 percent of the supplemental appropriations, went to states and localities for preparedness activities. Of the $770 million, $600 million was specifically provided by Congress for state and local planning and exercising. The HHS pandemic funding was administered by CDC and required all 50 states and 3 localities to, among other things, develop influenza pandemic plans and conduct influenza pandemic exercises. According to CDC officials, all 50 states and the localities that received direct funding have met these requirements. Strengthening preparedness for large-scale public health emergencies, including a possible influenza pandemic, is among the issues we identified as needing the urgent attention of the new administration and Congress during this transition period. Although much has been done, many challenges remain, as evidenced by the fact that almost half of the recommendations we have made over the past 3 years have still not been fully implemented. 
Given the change in administration and the associated transition of senior federal officials, it will be essential for this administration to continue to exercise and test the shared leadership roles that have been established between HHS and DHS, as well as the relative roles, responsibilities, and authorities for a pandemic among the federal government, state and local governments, and the private sector. In the area of critical infrastructure protection, DHS should continue to work with other federal agencies and the private sector members of the critical infrastructure coordinating councils to help address the challenges of coordinating between the federal government and the private sector before and during a pandemic. These challenges include clarifying the roles and responsibilities of federal and state governments. DHS and HHS should also, in coordination with other federal agencies, continue to work with state and local governments to help them address identified gaps in their pandemic planning. To help improve international disease surveillance and detection efforts, the United States should continue to work with international organizations and other countries to help address gaps in available information, which limit the capacity for comprehensive, well-informed comparisons of risk levels across countries. Continued leadership focus on pandemic preparedness is particularly crucial now, as attention to a possible influenza pandemic may be waning while focus shifts to other, more immediate national priorities. In addition, as leadership changes across the executive branch, the new administration should recognize that the threat of an influenza pandemic remains unchanged and should therefore maintain momentum in preparing the nation for a possible influenza pandemic. As agreed with your office, we plan no further distribution of this report until 30 days from its date, unless you publicly announce its contents earlier. 
At that time, we will send copies to other interested parties. In addition, this report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any further questions about this report, please contact me at (202) 512-6543 or steinhardtb@gao.gov, or Sarah Veale, Assistant Director, at (202) 512-6890 or veales@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III.

The Secretary of Health and Human Services should expeditiously finalize guidance to assist state and local jurisdictions in determining how to effectively use limited supplies of antivirals and pre-pandemic vaccine in a pandemic, including prioritizing target groups for pre-pandemic vaccine.

In December 2008, HHS released final guidance on antiviral drug use during an influenza pandemic. HHS officials informed us that they are drafting the guidance on pre-pandemic influenza vaccination.

The Secretaries of Health and Human Services and Homeland Security should, in coordination with other federal agencies, convene additional meetings of the states in the five federal influenza pandemic regions to help them address identified gaps in their planning.

HHS and DHS officials indicated that while no additional meetings are planned at this time, states will have to continuously update their pandemic plans and submit them for review.

The Secretary of Homeland Security should work with sector-specific agencies and lead efforts to encourage the government and private sector members of the councils to consider and help address the challenges that will require coordination between the federal and private sectors involved with critical infrastructure and within the various sectors, in advance of, as well as during, a pandemic. 
DHS officials informed us that the department is working on initiatives, such as developing pandemic contingency plan guidance tailored to each of the critical infrastructure sectors and holding a series of “webinars” with a number of the sectors.

Influenza Pandemic: Further Efforts Are Needed to Ensure Clearer Federal Leadership Roles and an Effective National Strategy, GAO-07-781, August 14, 2007

(1) HHS and DHS officials stated that several influenza pandemic exercises had been conducted since November 2007 that involved both agencies and other federal officials, but it is unclear whether these exercises rigorously tested federal leadership roles in a pandemic.

Influenza Pandemic: Opportunities Exist to Clarify Federal Leadership Roles and Improve Pandemic Planning, GAO-07-1257T, September 26, 2007

(1) The Secretaries of Homeland Security and Health and Human Services should work together to develop and conduct rigorous testing, training, and exercises for an influenza pandemic to ensure that the federal leadership roles are clearly defined and understood and that leaders are able to effectively execute shared responsibilities to address emerging challenges. Once the leadership roles have been clarified through testing, training, and exercising, the Secretaries of Homeland Security and Health and Human Services should ensure that these roles are clearly understood by state, local, and tribal governments; the private and nonprofit sectors; and the international community.

(2) The Homeland Security Council should establish a specific process and time frame for updating the National Pandemic Implementation Plan. The process should involve key nonfederal stakeholders and incorporate lessons learned from exercises and other sources. 
The National Pandemic Implementation Plan should also be improved by including the following information in the next update: (A) resources and investments needed to complete the action items and where they should be targeted, (B) a process and schedule for monitoring and publicly reporting on progress made on completing the action items, (C) clearer linkages with other strategies and plans, and (D) clearer descriptions of relationships or priorities among action items and greater use of outcome-focused performance measures.

(2) HSC did not comment on the recommendation and has not indicated whether it plans to implement it.

Avian Influenza: USDA Has Taken Important Steps to Prepare for Outbreaks, but Better Planning Could Improve Response, GAO-07-652, June 11, 2007

(1) The Secretaries of Agriculture and Homeland Security should develop a memorandum of understanding that describes how USDA and DHS will work together in the event of a declared presidential emergency or major disaster, or an Incident of National Significance, and test the effectiveness of this coordination during exercises.

(1) Both USDA and DHS officials told us that they have taken preliminary steps to develop additional clarity and better define their coordination roles. For example, the two agencies meet on a regular basis to discuss such coordination.

(2) The Secretary of Agriculture should, in consultation with other federal agencies, states, and the poultry industry, identify the capabilities necessary to respond to a probable scenario or scenarios for an outbreak of highly pathogenic avian influenza. The Secretary of Agriculture should also use this information to develop a response plan that identifies the critical tasks for responding to the selected outbreak scenario and, for each task, identifies the responsible entities, the location of resources needed, time frames, and completion status. 
Finally, the Secretary of Agriculture should test these capabilities in ongoing exercises to identify gaps and ways to overcome them.

(2) USDA officials told us that the agency has created a draft preparedness and response plan that identifies federal, state, and local actions, timelines, and responsibilities for responding to highly pathogenic avian influenza, but the plan has not yet been issued.

(3) The Secretary of Agriculture should develop standard criteria for the components of state response plans for highly pathogenic avian influenza, enabling states to develop more complete plans and enabling USDA officials to review them more effectively.

(3) USDA told us that it has drafted large volumes of guidance documents that are available on a secure Web site. However, the guidance is still under review, and it is not clear what standard criteria from these documents USDA officials and states should apply when developing and reviewing plans.

(4) The Secretary of Agriculture should focus additional work with states on how to overcome potential problems associated with unresolved issues, such as the difficulty of locating backyard birds and disposing of carcasses and materials.

(4) USDA officials told us that the agency has developed online tools to help states make effective decisions about carcass disposal. In addition, USDA has created a secure Internet site that contains draft guidance for disease response, including highly pathogenic avian influenza, with a discussion of many of the unresolved issues.

(5) The Secretary of Agriculture should determine the amount of antiviral medication that USDA would need in order to protect animal health responders, given various highly pathogenic avian influenza scenarios. The Secretary of Agriculture should also determine how to obtain and provide supplies within 24 hours of an outbreak. 
(5) USDA officials told us that the National Veterinary Stockpile now contains enough antiviral medication to protect 3,000 animal health responders for 40 days. However, USDA has yet to determine the number of individuals who would need medication, based on a calculation of those exposed to the virus under a specific scenario. Further, USDA officials told us that a contract for additional stockpile medication, which would better ensure that medications are available in the event of an outbreak of highly pathogenic avian influenza, has not yet been secured. Appendix II: Implemented Recommendations from GAO’s Work on an Influenza Pandemic as of February 2009 (1) The Secretary of Defense should instruct the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs to issue guidance that specifies which of the tasks assigned to DOD in the plan and other pandemic planning tasks apply to the individual combatant commands, military services, and other organizations within DOD, as well as what constitutes fulfillment of these actions. (1) The 14 national implementation plan tasks assigned to the Joint Staff as the lead organization within DOD, which include tasks to be performed by the combatant commands, have been completed. According to DOD, the department’s Global Pandemic Influenza Planning Team developed recommendations for the division of responsibilities, which were included in U.S. Northern Command’s global synchronization plan for pandemic influenza. Additionally, DOD assigned pandemic influenza-related tasks to the combatant commands in its 2008 Joint Strategic Capabilities Plan. (2) The Secretary of Defense should instruct the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs to issue guidance that specifies U.S. 
Northern Command’s roles and responsibilities as global synchronizer relative to the roles and responsibilities of the various organizations leading and supporting the department’s influenza pandemic planning. (2) Revisions to DOD’s 2008 Joint Strategic Capabilities Plan, as well as guidance from the Secretary of Defense during a periodic review of U.S. Northern Command’s pandemic influenza global synchronization plan, clarified and better defined U.S. Northern Command’s role as global synchronizer. (3) The Secretary of Defense should instruct the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs to work with the Under Secretary of Defense (Comptroller) to identify the sources and types of resources combatant commands need to accomplish their influenza pandemic planning and preparedness activities. (3) DOD, through U.S. Northern Command as the global synchronizer for pandemic influenza planning, collected information from the combatant commands on funding requirements related to pandemic influenza preparedness and submitted this information through DOD’s formal budget and funding process. Through this process, five of the combatant commands (U.S. Northern Command, U.S. European Command, U.S. Pacific Command, U.S. Central Command, and U.S. Transportation Command) obtained about $25 million for fiscal years 2009 through 2013 for pandemic influenza planning and exercises. Future pandemic influenza-related funding requirements will be addressed through DOD’s established budget process. (4) The Secretary of Defense should instruct the Joint Staff to work with the combatant commands to develop options to mitigate the effects of factors that are beyond the combatant commands’ control. 
(4) The combatant commands are increasingly inviting representatives from the United Nations, including the World Health Organization and the Food and Agriculture Organization; host and neighboring nations; and other federal government agencies to exercises and conferences to share information and fill information gaps. Additionally, U.S. Northern Command and U.S. Pacific Command, along with the military services and installations, are increasingly working and planning with state, local, and tribal representatives. DOD views updating and reviewing plans to ensure that they are current as a continuous process driven by changes in policy, science, and environmental factors. The Chairman, Federal Reserve, the Comptroller of the Currency, and the Chairman, Securities and Exchange Commission, should consider taking additional actions to ensure that market participants adequately prepare for an outbreak, including issuing formal expectations that business continuity plans for a pandemic should include measures likely to be effective even during severe outbreaks, and setting a date by which market participants should have such plans. In December 2007, the Federal Reserve, in conjunction with the Federal Financial Institutions Examination Council, issued an Interagency Statement on Pandemic Planning to each Federal Reserve Bank and to all banking organizations supervised by the Federal Reserve. The statement directed those banks to ensure the pandemic plans they have in place are adequate to maintain critical operations during a severe outbreak. In December 2007, the Office of the Comptroller of the Currency, in conjunction with the Federal Financial Institutions Examination Council, also issued an Interagency Statement on Pandemic Planning to the national banks, outlining the same requirements for pandemic plans as the guidance issued by the Federal Reserve. 
In July and August of 2007, the Securities and Exchange Commission’s Market Regulation Division issued letters to the major clearing organizations and exchanges—those covered by the Commission’s 2003 Policy Statement on Business Continuity Planning for Trading Markets—that directed these organizations to confirm by year-end 2007 that their pandemic plans are adequate to maintain critical operations during a severe outbreak. (1) OPM should initiate discussion with the Department of Homeland Security and other responsible stakeholders to consider the feasibility of integrating the federal executive boards’ (FEB) emergency support responsibilities into the established emergency response framework, such as the National Response Plan. (1) In January 2008, the FEBs were included in the National Response Framework section on regional support structures that have the potential to contribute to development of situational awareness during an emergency. In addition, in August 2007, the FEBs were integrated into the National Continuity Policy Implementation Plan issued by the White House Homeland Security Council. (2) OPM should continue its efforts to establish performance measures and accountability for the emergency support responsibilities of the FEBs before, during, and after an emergency event that affects the federal workforce outside Washington, D.C. (2) The FEB strategic plan for fiscal years 2008 through 2012 includes operational goals with associated measures for its emergency preparedness, security, and employee safety line of business. The data intended to support these measures are to be collected through methods such as stakeholder and participant surveys, participant lists, and emergency preparedness test results. (3) OPM, as part of its strategic planning process for the FEBs, should develop a proposal for an alternative to the current voluntary contribution mechanism that would address the uncertainty of funding sources for the boards. 
(3) In November 2008, OPM submitted a legislative proposal to provide for interagency funding of FEB operations nationwide. (4) OPM should work with FEMA to develop a memorandum of understanding, or some similar mechanism that formally defines the FEB role in emergency planning and response. (4) In addition to integrating the FEBs into national emergency plans, FEMA and OPM signed a memorandum of agreement on August 1, 2008. Among other things, the memorandum states that the federal executive boards and FEMA will work together in carrying out their respective roles in the promotion of the National Incident Management System and the National Response Framework. (1) The Secretary of Defense should instruct the Assistant Secretary of Defense for Homeland Defense, as the individual accountable for DOD’s influenza pandemic planning and preparedness efforts, to clearly and fully define and communicate departmentwide the roles and responsibilities of the organizations that will be involved in DOD’s efforts, with clear lines of authority; the oversight mechanisms, including reporting requirements, for all aspects of DOD’s influenza pandemic planning efforts, to include those tasks that are outside of the national implementation plan; and the goals and performance measures for DOD’s planning and preparedness efforts. (1) The Deputy Secretary of Defense verbally designated the Assistant Secretary of Defense for Homeland Defense, working with the Assistant Secretary of Defense for Health Affairs, to lead DOD’s pandemic influenza efforts and established a Pandemic Influenza Task Force. This information was communicated throughout the department when the Principal Deputy to the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs issued DOD’s Implementation Plan for Pandemic Influenza within the department in a July 2006 memo. Additionally, U.S. 
Northern Command was designated as the lead combatant command for directing, planning, and synchronizing DOD’s global response to an influenza pandemic; this information was disseminated throughout the department in November 2006. (2) The Secretary of Defense should instruct the Assistant Secretary of Defense for Homeland Defense to work with the Under Secretary of Defense (Comptroller) to establish a framework for requesting funding for the department’s preparedness efforts. The framework should include the appropriate funding mechanism and controls to ensure that needed funding for DOD’s influenza pandemic preparedness efforts is tied to the department’s goals. (2) The Office of the Under Secretary of Defense (Comptroller) is utilizing established protocols for programming funds related to pandemic influenza preparedness for DOD. Funding requests for preparedness efforts were submitted as part of the department’s fiscal year 2009 integrated program and budget review, and long-term funding requests will be included in future budget requests. (3) The Secretary of Defense should instruct the Assistant Secretary of Defense for Health Affairs to clarify DOD’s guidance to explicitly define whether or how all types of personnel—including DOD’s military and civilian personnel, contractors, dependents, and beneficiaries—would be included in DOD’s distribution of vaccines and antivirals, and communicate this information departmentwide. (3) In August 2007, DOD issued additional guidance related to the distribution of its vaccine and antiviral stockpiles in the event of an influenza pandemic. (4) The Secretary of Defense should instruct the Assistant Secretary of Defense for Public Affairs to implement a comprehensive and effective communications strategy departmentwide that is transparent as to what actions each group of personnel should take and the limitations of the efficacy, risks, and potential side effects of vaccines and antivirals. 
(4) DOD has updated its publicly available pandemic influenza Web site, to include links to the Military Vaccine Agency, which provides information on the risks and side effects of vaccines. In addition to the contact named above, major contributors to this report include Sarah Veale, Assistant Director; Maya Chakko; Susan Sato; Mark Ryan; Kara Marshall; and members of GAO’s Pandemic Working Group. Veterinarian Workforce: Actions Are Needed to Ensure Sufficient Capacity for Protecting Public and Animal Health. GAO-09-178. Washington, D.C.: February 4, 2009. Influenza Pandemic: HHS Needs to Continue Its Actions and Finalize Guidance for Pharmaceutical Interventions. GAO-08-671. Washington, D.C.: September 30, 2008. Influenza Pandemic: Federal Agencies Should Continue to Assist States to Address Gaps in Pandemic Planning. GAO-08-539. Washington, D.C.: June 19, 2008. Emergency Preparedness: States Are Planning for Medical Surge, but Could Benefit from Shared Guidance for Allocating Scarce Medical Resources. GAO-08-668. Washington, D.C.: June 13, 2008. Influenza Pandemic: Efforts Under Way to Address Constraints on Using Antivirals and Vaccines to Forestall a Pandemic. GAO-08-92. Washington, D.C.: December 21, 2007. Influenza Pandemic: Opportunities Exist to Address Critical Infrastructure Protection Challenges That Require Federal and Private Sector Coordination. GAO-08-36. Washington, D.C.: October 31, 2007. Influenza Pandemic: Federal Executive Boards’ Ability to Contribute to Pandemic Preparedness. GAO-07-1259T. Washington, D.C.: September 28, 2007. Influenza Pandemic: Opportunities Exist to Clarify Federal Leadership Roles and Improve Pandemic Planning. GAO-07-1257T. Washington, D.C.: September 26, 2007. Influenza Pandemic: Further Efforts Are Needed to Ensure Clearer Federal Leadership Roles and an Effective National Strategy. GAO-07-781. Washington, D.C.: August 14, 2007. 
Emergency Management Assistance Compact: Enhancing EMAC’s Collaborative and Administrative Capacity Should Improve National Disaster Response. GAO-07-854. Washington, D.C.: June 29, 2007. Influenza Pandemic: DOD Combatant Commands’ Preparedness Efforts Could Benefit from More Clearly Defined Roles, Resources, and Risk Mitigation. GAO-07-696. Washington, D.C.: June 20, 2007. Influenza Pandemic: Efforts to Forestall Onset Are Under Way; Identifying Countries at Greatest Risk Entails Challenges. GAO-07-604. Washington, D.C.: June 20, 2007. Avian Influenza: USDA Has Taken Important Steps to Prepare for Outbreaks, but Better Planning Could Improve Response. GAO-07-652. Washington, D.C.: June 11, 2007. The Federal Workforce: Additional Steps Needed to Take Advantage of Federal Executive Boards’ Ability to Contribute to Emergency Operations. GAO-07-515. Washington, D.C.: May 4, 2007. Financial Market Preparedness: Significant Progress Has Been Made, but Pandemic Planning and Other Challenges Remain. GAO-07-399. Washington, D.C.: March 29, 2007. Influenza Pandemic: DOD Has Taken Important Actions to Prepare, but Accountability, Funding, and Communications Need to be Clearer and Focused Departmentwide. GAO-06-1042. Washington, D.C.: September 21, 2006. Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Controls Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006.
GAO has conducted a body of work over the past several years to help the nation better prepare for, respond to, and recover from a possible influenza pandemic, which could result from a novel strain of influenza virus to which people have little immunity and which therefore could be highly transmissible among humans. GAO's work has pointed out that while the previous administration had taken a number of actions to plan for a pandemic, including developing a national strategy and implementation plan, much more needs to be done. However, national priorities are shifting as a pandemic has yet to occur, and other national issues have become more immediate and pressing. Nevertheless, an influenza pandemic remains a real threat to our nation and the world. For this report, GAO synthesized the results of 11 reports and two testimonies issued over the past 3 years using six key thematic areas: (1) leadership, authority, and coordination; (2) detecting threats and managing risks; (3) planning, training, and exercising; (4) capacity to respond and recover; (5) information sharing and communication; and (6) performance and accountability. GAO also updated the status of recommendations in these reports. Leadership roles and responsibilities need to be clarified and tested, and coordination mechanisms could be better utilized. Shared leadership roles and responsibilities between the Departments of Health and Human Services (HHS) and Homeland Security (DHS) and other entities are evolving, and will require further testing and exercising before they are well understood. Although there are mechanisms in place to facilitate coordination between federal, state, and local governments and the private sector to prepare for an influenza pandemic, these could be more fully utilized. Efforts are underway to improve the surveillance and detection of pandemic-related threats, but targeting assistance to countries at the greatest risk has been based on incomplete information. 
Steps have been taken to improve international disease surveillance and detection efforts. However, information gaps limit the capacity for comprehensive comparisons of risk levels by country. Pandemic planning and exercising has occurred, but planning gaps remain. The United States and other countries, as well as states and localities, have developed influenza pandemic plans. Yet, additional planning needs still exist. For example, the national strategy and implementation plan omitted some key elements, and HHS found many major gaps in states' pandemic plans. Further actions are needed to address the capacity to respond to and recover from an influenza pandemic. An outbreak will require additional capacity in many areas, including the procurement of additional patient treatment space and the acquisition and distribution of medical and other critical supplies, such as antivirals and vaccines for an influenza pandemic. Federal agencies have provided considerable guidance and pandemic-related information, but could augment their efforts. Federal agencies, such as HHS and DHS, have shared information in a number of ways, such as through Web sites and guidance, but state and local governments and private sector representatives would welcome additional information on vaccine distribution and other topics. Performance monitoring and accountability for pandemic preparedness needs strengthening. Although certain performance measures have been established in the National Pandemic Implementation Plan to prepare for an influenza pandemic, these measures are not always linked to results. Further, the plan does not contain information on the financial resources needed to implement it. GAO has made 23 recommendations in its reports—13 of these have been implemented and 10 remain outstanding. Continued leadership focus on pandemic preparedness remains vital, as the threat has not diminished.
The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 significantly changed the system for providing assistance to low-income families with children by replacing the existing entitlement program with fixed block grants to states to provide Temporary Assistance for Needy Families (TANF). TANF provides about $16.5 billion annually to states to help families become self-sufficient, imposes work requirements for adults, and limits the time individuals can receive federal assistance. However, accessing entry-level jobs to meet TANF work requirements can be challenging for low-income individuals, many of whom do not own cars or have poorly maintained cars that are not equipped to drive long distances. As we reported in 2004, many rural TANF recipients cannot afford to own and operate a reliable vehicle, and public transportation to and from employment-related services and work is often not available. Existing public transportation systems cannot always bridge the gap between the locations of individuals’ homes and the jobs for which they qualify, not to mention child care, other domestic responsibilities, and employment-related services. These systems were originally established to allow urban residents to travel within cities and bring suburban residents to central-city work locations. According to 2007 U.S. Census Bureau data, a higher proportion of people in metropolitan areas who are below the poverty level live in the cities in those areas than in the corresponding suburbs. Furthermore, employees at many entry-level jobs must work shifts in the evenings or on weekends, when public transit services are either unavailable or limited. As a result, Congress created the JARC program in the Transportation Equity Act for the 21st Century (TEA-21) to support the nation’s welfare-reform goals. 
The purpose of the program was to improve the mobility of low-income individuals by awarding grants that states and localities could use to provide additional or expanded transportation services and thus provide more opportunities for individuals to get to work. JARC funds were awarded to grantees designated for project funding in the conference reports that accompanied appropriations acts. TEA-21 also required GAO to review the JARC program every 6 months. In a series of reports from December 1998 to August 2004, GAO found, among other things, that JARC had increased coordination among transit and human service agencies, but that FTA was slow in evaluating the program. These reports included recommendations to assist FTA in improving its evaluation process. In response to these recommendations, FTA developed specific objectives, performance criteria, goals, and performance measures for the JARC program, although GAO noted limitations in the performance measures and recognized that FTA planned to continue to develop more comprehensive and relevant performance measures. SAFETEA-LU made several changes to the JARC program that affected recipients. Most notably, SAFETEA-LU created a formula to distribute funds beginning with fiscal year 2006: SAFETEA-LU requires that 40 percent of JARC funds each year be apportioned among states for projects in small urbanized and rural areas—those with populations of 50,000 to 199,999 and less than 50,000, respectively. It also required that the remaining 60 percent be apportioned among large urbanized areas—those with populations of 200,000 or more. As a result, rural and small urbanized areas were each apportioned a total of $27.3 million in fiscal year 2006, while large urbanized areas were apportioned a total of $82 million (see table 1). 
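The 40/60 formula split described above can be checked against the fiscal year 2006 apportionments cited in this report. A minimal sketch, using the report's rounded dollar amounts purely for illustration:

```python
# Fiscal year 2006 JARC apportionments cited in this report (millions of dollars).
small_urbanized = 27.3
rural = 27.3
large_urbanized = 82.0

total = small_urbanized + rural + large_urbanized  # about $136.6 million

# SAFETEA-LU requires 40 percent of each year's JARC funds for small urbanized
# and rural areas combined, and 60 percent for large urbanized areas.
share_small_and_rural = (small_urbanized + rural) / total
share_large = large_urbanized / total

print(f"small urbanized + rural: {share_small_and_rural:.0%}")  # about 40%
print(f"large urbanized: {share_large:.0%}")                    # about 60%
```

The rounded apportionments reproduce the statutory split to within a tenth of a percentage point.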
The change to a formula grant program significantly altered the allocation of JARC funds because some states and large urbanized areas that did not formerly receive funds now receive them, and others receive different amounts than they received in the past. For example, total funds available in Florida and Virginia increased by more than 1,200 percent from fiscal years 2005 to 2006 (from $594,708 to $8.3 million and from $84,249 to $2.5 million, respectively). Similarly, the funds available for the large urbanized area of Tampa/St. Petersburg increased 64 percent from 2005 to 2006 (from $594,708 to $978,029). However, the total funds available to Alaska and Vermont decreased by more than 80 percent (from $1.7 million to $207,503 and from $991,182 to $186,885, respectively), and the funds available to the Birmingham, Alabama, area decreased 88 percent from 2005 to 2006 (from $3 million to $356,107). In addition, 18 states were apportioned JARC funds for fiscal year 2006 that did not receive funds in fiscal year 2005. Recipients have up to 3 years in which to apply for funds for each fiscal year. For example, recipients could apply for fiscal year 2006 funds until September 30, 2008. Any funds not applied for by then lapsed and would have been reapportioned among all recipients for fiscal year 2009. Similarly, fiscal year 2007 funds are available until September 30, 2009, and fiscal year 2008 funds will be available until September 30, 2010. The amount of available JARC funds is relatively small compared to FTA’s primary grant programs. For example, FTA’s Urbanized Area Formula Grant program (Section 5307), which provides transit funding for large and small urbanized areas, was apportioned $3.9 billion for fiscal year 2008, while FTA’s Rural Area Formula Grant program (Section 5311) was apportioned about $416 million in fiscal year 2008. In contrast, the total amount of JARC funds available for the 3 fiscal years 2006 through 2008 is $436.6 million. 
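The percentage changes cited above follow the standard (new - old) / old calculation. A small illustrative sketch using the rounded dollar figures from this report:

```python
def percent_change(old, new):
    """Percent change from an old funding level to a new one."""
    return (new - old) / old * 100

# Fiscal year 2005 to 2006 funding levels cited in this report.
print(percent_change(594_708, 8_300_000))  # Florida: more than a 1,200 percent increase
print(percent_change(594_708, 978_029))    # Tampa/St. Petersburg: about a 64 percent increase
print(percent_change(1_700_000, 207_503))  # Alaska: a decrease of more than 80 percent
print(percent_change(3_000_000, 356_107))  # Birmingham area: about an 88 percent decrease
```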
SAFETEA-LU also requires JARC recipients to fulfill specific requirements and follow specific processes (see fig. 1): SAFETEA-LU required that a recipient be designated to award JARC funds. This recipient is responsible for distributing funds to other agencies. The governor of each state designated a recipient—almost always the state department of transportation—for JARC funds at the state level for small urbanized and rural areas. For large urbanized areas, the governor, local officials, and public transportation operators selected designated recipients, often a major transit agency or metropolitan planning organization (MPO). SAFETEA-LU required that designated recipients certify that JARC projects are derived from locally developed coordinated public transit-human services transportation plans. The coordinated planning process must include representatives of public, private, and nonprofit transportation and human services providers and participation by the general public. In general, among the states we contacted, either the designated recipient or the MPO has taken the lead in developing coordinated plans in large urbanized areas. For small urbanized and rural areas, the designated recipients in some states have generally delegated responsibility for developing plans to agencies at the local level, while in other states the designated recipients have taken the lead. Local officials must ensure that appropriate transportation and human services providers participate in the process. Under SAFETEA-LU, designated recipients at the state level must develop a solicitation process for small urbanized and rural areas to apply for funds. States must use a competitive selection process to select projects for these areas. Large urbanized areas must also develop and conduct a competitive selection process for their projects. 
After projects are selected, states and large urbanized areas must apply to FTA to fund the projects and certify that selected projects were derived from a locally developed, coordinated public transit-human services transportation plan. SAFETEA-LU allows states and large urbanized areas to use 10 percent of JARC funds for administrative activities, including planning and coordination activities. Under TEA-21, the use of JARC funds for planning and coordination activities was prohibited. To ensure designated recipients fulfill their stewardship roles, FTA requires designated recipients to submit a management plan describing how they plan to administer the JARC program. Designated recipients for large urbanized areas submit program management plans, while state agencies that are designated recipients for small urbanized and rural areas submit state management plans. States have submitted management plans in the past for other transit programs. FTA allows states to amend existing management plans to include the JARC program. SAFETEA-LU increased the federal government’s share of capital costs to no more than 80 percent. Under TEA-21, the federal match for capital projects was 50 percent, which was inconsistent with the federal share for capital projects in other FTA programs. As under TEA-21, JARC recipients must identify and raise 50 percent of the funds for operating projects. Matching funds may come from other federal programs that are not administered by DOT, such as TANF block grants, as well as from non-cash sources, such as in-kind contributions, employer contributions, and volunteer services. SAFETEA-LU also requires that two other FTA programs that provide funding for transportation-disadvantaged populations certify that projects be derived from a locally developed coordinated human services transportation plan. 
One of these, the New Freedom program, was created by SAFETEA-LU to support new public transportation services and public transportation alternatives beyond those required by the Americans with Disabilities Act. According to FTA, the program is intended to fill gaps between human service and public transportation services and to facilitate the integration of individuals with disabilities into the workforce and their full participation in the community. The program provides alternatives to assist individuals with disabilities with transportation, including transportation to and from jobs and employment support services. The second program, the Elderly Individuals and Individuals with Disabilities program (commonly referred to as the Section 5310 program), has existed since 1975. The Section 5310 program originally provided formula funding for capital projects to help meet the transportation needs of elderly individuals and persons with disabilities. However, in 1991, Congress expanded the Section 5310 program to allow funds to be used to acquire services to promote the use of private-sector providers and to coordinate with other human service agencies and public transit providers. These purchases are also considered to be capital expenses. As indicated in tables 2 and 3, Congress apportioned $283.3 million and $408 million for the New Freedom and the Section 5310 programs, respectively, from fiscal years 2006 through 2009. Similar to the JARC program, the New Freedom and Section 5310 programs are relatively small in comparison with FTA’s regular transit formula programs. Recipients apply separately for funds for each of these programs. In our last evaluation of FTA’s progress—our first report under SAFETEA-LU, issued in November 2006—we noted that, in response to our previous concerns over performance evaluation, FTA was taking steps to further improve its evaluation process, such as revising the JARC performance measures. 
We also noted that FTA was developing its strategies to evaluate and oversee the program and had not yet issued final guidance to implement JARC, and states were still working to meet the new requirements. At that time, 3 states and 9 out of 152 large urbanized areas had received fiscal year 2006 funds as of the end of that fiscal year; these funds represented less than 4 percent of the fiscal year 2006 JARC funds apportioned to states and large urbanized areas. In our report, we recommended that FTA update its existing oversight processes to include the JARC program and specify how often it will monitor recipients that are not subject to its existing oversight processes. FTA agreed to consider our recommendations and has incorporated oversight provisions for the JARC program into its review processes. FTA also issued final guidance implementing the changes to JARC in May 2007. As part of that guidance, FTA established policies and procedures for agencies to implement the program and established two performance measures to evaluate the performance of JARC projects: number of rides and number of jobs accessed. FTA has awarded 48 percent (about $198.0 million) of JARC funds for fiscal years 2006 through 2008 to 49 states and 131 of 152 large urbanized areas. However, about 14 percent of fiscal year 2006 funds lapsed—primarily in small urbanized areas—for various reasons, including delays in fulfilling administrative requirements under SAFETEA-LU. According to FTA data, recipients plan to use the funds awarded thus far primarily to operate transit services as opposed to capital and other projects. Overall, FTA has awarded almost half of the apportioned $436.6 million available for fiscal years 2006 through 2008 (about 48 percent) to 49 states and 131 of 152 large urbanized areas, as of March 2009. 
This level represents significant improvement since GAO’s last evaluation of FTA’s progress in 2006, when 3 states and 9 large urbanized areas had received fiscal year 2006 funds. As shown in figure 2, FTA has awarded about $118 million (around 86 percent) of fiscal year 2006 JARC funds, approximately $56.7 million (around 39 percent) of fiscal year 2007 funds, and around $23.2 million (about 15 percent) of fiscal year 2008 funds. The majority of the fiscal year 2006 JARC funds (about 64 percent) were awarded in fiscal year 2008 before the September 30, 2008, deadline. Recipients we spoke with who did not apply for these funds until fiscal year 2008 said they delayed applying partly because of FTA’s delay in issuing guidance and other challenges discussed later in the report. However, about $18.6 million (roughly 14 percent) of fiscal year 2006 funds lapsed and will be reapportioned to all recipients with the fiscal year 2009 JARC apportionments. While the largest amount of funds that lapsed was for large urbanized areas (about $10.9 million, or about 13 percent of the amount allocated for those areas), a greater proportion lapsed in small urbanized areas (about $5.2 million, or 19 percent of the amount allocated for those areas). Thirty-three out of 152 large urbanized areas (about 22 percent) allowed a portion of the fiscal year 2006 JARC funds to lapse. While 5 out of the 33 large urbanized areas allowed less than 1 percent of their allocated funds to lapse, about 64 percent of those recipients allowed all of their allocated funds to lapse. For instance, Miami, Florida, allowed all of its apportioned JARC funding—almost $2.8 million—to lapse. For small urbanized areas, 11 states and one U.S. territory had about $5.2 million in funds lapse, with 6 of those states and territories having their entire allocated funds lapse. Finally, for rural areas, five states and one U.S. territory had about 9 percent (about $2.5 million) lapse. (See app. 
II for a complete list of areas that allowed fiscal year 2006 funds to lapse.) According to FTA officials, fiscal year 2006 JARC funds lapsed for various reasons. Some areas encountered delays in developing the coordinated public transit-human services transportation plan and did not complete the plans in time to apply to FTA for fiscal year 2006 funds. (The next section of the report discusses these challenges in more detail.) Despite the lapse of fiscal year 2006 funds, FTA is making progress in awarding the funds remaining for fiscal years 2007 and 2008 before the deadlines at the end of fiscal years 2009 and 2010, respectively. According to FTA, regional and headquarters staff have contacted stakeholders in areas where funds lapsed to explore ways for these communities to use the remaining funds. For example, in March 2009, FTA headquarters and Region 4 staff in Atlanta, Georgia, conducted a conference call with Miami transit providers and MPOs to discuss strategies for the large urbanized area to use its remaining JARC funds. During the call, participants agreed to select a designated recipient, finalize coordinated plans, and conduct a competitive selection in time to apply for the area’s fiscal year 2007 JARC funds. As of May 2009, the Governor of Florida has selected a designated recipient and the competitive selection process for JARC projects within the Miami area is underway. As a result of such efforts, FTA has awarded more fiscal year 2007 and 2008 JARC funds, relative to the rate at which it awarded fiscal year 2006 funds. For example, FTA awarded about 3.9 percent of fiscal year 2006 funds in the first year of availability, compared with approximately 5.0 percent and 14.3 percent awarded in the first year of availability of fiscal year 2007 and 2008 funds, respectively. 
FTA officials and designated recipients we interviewed attributed the increase in the rate of awarding funds to various factors, including availability of and improvements to the final guidance, overcoming the initial learning curve in implementing the program, and awarding projects on a 2-year funding cycle. FTA expects to award more than 90 percent of fiscal year 2007 funds—slightly more than the 86 percent for fiscal year 2006—before the September 30, 2009, deadline. Recipients have used or plan to use JARC funds primarily to operate or expand existing transit routes in an effort to target low-income populations. Recipients have the discretion to use JARC funds for three types of expenditures: (1) operating assistance to subsidize the cost of operating new or existing transit services, such as staffing, advertising costs, insurance, and fuel; (2) capital assistance, such as purchasing vehicles and equipment; and (3) administrative costs. Designated recipients can use up to 10 percent of allocated JARC funds for administrative costs, such as the cost to conduct coordinated planning and competitive selection processes, but have discretion on how to use the remaining allocated amount—whether for operating assistance or capital projects. As shown in figure 3, recipients have used or plan to use about 65.3 percent of fiscal year 2006 funds for operating assistance, compared to about 27.5 percent for capital expenses and 7.2 percent for administrative costs. Many recipients we interviewed are using funds to help cover the cost to operate existing transit routes or to expand transit services targeted at low-income populations. For example, the Rochester-Genesee Regional Transportation Authority, a designated recipient in a large urbanized area in upstate New York, plans to use JARC funds to operate an existing reverse commute, fixed route service during evenings and on weekends from the city of Rochester to employment locations in outlying suburban areas. 
Similarly, New Jersey Transit awarded the North Jersey Transportation Planning Authority funds to offset operating costs for its demand response transit service in Bergen County, New Jersey. Recipients also plan to use funds to operate other types of transit projects eligible under JARC, such as bicycle loan or auto repair programs. For instance, the Southwestern Wisconsin Community Action Program is currently using JARC funds to operate an auto loan program to assist low-income workers in rural areas in purchasing vehicles for shared rides to work, while the Kenosha Achievement Center in the Kenosha, Wisconsin, small urbanized area is using JARC funds to operate a bike loan program that would provide transportation to jobs for low-income job seekers. Fewer JARC recipients we interviewed plan to use the funds for capital assistance. Although JARC provides federal funds for up to 80 percent of the cost of capital assistance and 50 percent of the cost of operating assistance, recipients noted that the available funding is not generally sufficient to start new services and/or purchase vehicles and equipment—both of which can be costly—and continue operating services after receiving JARC funds. For instance, representatives of a designated recipient in Georgia told us that they would like to establish and operate new bus routes to transport low-income workers to a new employment center being developed. The designated recipient was allocated about $192,000 for fiscal year 2006, but officials indicated that this amount would only allow them to purchase one transit bus, which typically costs about $300,000 to purchase and $200,000 per year to operate. The funding would not cover additional buses or sustain operations beyond 1 year. The designated recipient may apply for fiscal year 2007 and 2008 funds but would still have difficulty continuing the routes under current budget constraints in the region. 
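The budget arithmetic behind the Georgia example can be laid out explicitly. The sketch below is illustrative only—the `local_share` helper is hypothetical—but the 80 percent capital and 50 percent operating federal share caps and the dollar figures come from the example above:

```python
# JARC caps the federal share at 80 percent of capital costs and
# 50 percent of operating costs; the remainder must come from
# state or local sources.
CAPITAL_CAP = 0.80
OPERATING_CAP = 0.50

def local_share(cost, federal_cap, federal_available):
    """State/local funds needed after applying the federal share cap
    and the recipient's actual federal allocation."""
    federal = min(cost * federal_cap, federal_available)
    return cost - federal

# Figures from the Georgia example: a $192,000 allocation,
# a $300,000 bus, and $200,000 per year to operate the route.
allocation = 192_000
bus_local = local_share(300_000, CAPITAL_CAP, allocation)  # 108,000
# The bus purchase consumes the entire allocation, so first-year
# operations would have to be funded entirely from local sources.
ops_local = local_share(200_000, OPERATING_CAP, 0)  # 200,000
```

Under this reading, the recipient would need $108,000 in local funds toward the bus alone and would bear operating costs largely on its own, consistent with officials’ concern that the allocation could not sustain the service beyond 1 year.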
Nevertheless, other recipients we contacted do plan to use JARC funds for capital expenses, such as purchasing a van for a vanpool or a global positioning system to assist in operating a mobility management program. For instance, the Coastal Georgia Regional Development Center plans to use its fiscal years 2006 and 2007 JARC funds to operate 12 regional vanpools that will serve eight passengers per vehicle and provide two trips per day in the southern rural areas of Georgia, while the Lower Savannah Council of Governments plans to use some of its funds to defray the cost of operating a new mobility management program in its rural and small urbanized regions. A few designated recipients also indicated that they plan to use some of their JARC funds to implement such a program. Finally, many designated recipients chose not to use the funds for administrative purposes because they wanted to use the funds for transportation rather than support services. Recipients and local authorities we interviewed cited multiple challenges throughout the process for implementing JARC-funded projects. Although many of these recipients and local authorities have addressed these challenges and have received JARC funding, a common concern we heard is that, overall, the amount of effort required to obtain JARC funds is disproportionate to the relatively small amount of funding available compared to other transit programs. FTA officials are taking steps to address these challenges, and noted that some challenges—such as the amount of funding and flexibility in using JARC funds—are rooted in statute and would need to be addressed by Congress in the next surface transportation reauthorization. Although many designated recipients we interviewed commented that FTA has made progress in implementing JARC, some noted that issues with FTA’s guidance hindered implementation. 
First, FTA did not issue its final guidance until May 2007, almost 2 years after SAFETEA-LU was enacted in August 2005 and 2 months after FTA initially planned to issue it. As we previously reported, FTA used an extensive public participation process to develop the guidance and received a large volume of public input, partially in response to requests from transit agencies and stakeholders. While this process helped FTA develop the final guidance, it also delayed its issuance. Consequently, FTA’s interim guidance included a “hold harmless” provision stating that the final guidance requirements would not apply retroactively to grants awarded before FTA issued the final guidance. Some designated recipients chose to implement JARC programs using FTA’s interim guidance. Others, however, were hesitant to do so because of uncertainties in interpreting policies and procedures and chose to wait for the final guidance. This ultimately reduced the time available for these recipients to apply for JARC funds appropriated in fiscal year 2006. Second, some JARC recipients found FTA’s interim and final guidance vague and overly broad. Designated recipients noted that the guidance did not provide sufficiently specific information on whether a project was eligible for JARC funds or on the standards of oversight for subrecipients. Specifically, designated recipients in Arizona, California, and Pennsylvania commented that FTA’s guidance does not provide enough information on overseeing and managing subrecipients. For example, one recipient was unsure of the parameters for funding and monitoring JARC auto loan projects. Recipients were also unsure how to oversee and manage projects that cross boundaries throughout the region, such as large and small urbanized and rural areas. 
For example, a recipient and subrecipient in Arizona were unsure about how to develop a cost-allocation method for demand response and fixed route projects that operate across large and small urbanized and rural boundaries. An FTA official stated that the guidance was intended to provide a broad framework for implementation and allow states and large urbanized areas flexibility to administer programs that best meet local and regional needs without being overly prescriptive. FTA also noted that the final JARC circular includes examples and detailed lists to supplement the guidance. In addition, some designated recipients and an industry association representative commented that FTA provided inconsistent information. For instance, one FTA regional office required all designated recipients in its jurisdiction to submit locally developed, coordinated public transit-human services transportation plans to verify that project applications for JARC funds were derived from the plans. However, this practice was not consistent with other FTA regions. FTA subsequently directed regional offices to instead rely on JARC applicants’ certification that projects were derived from the plans. FTA also directed regional offices to confirm that the individual applicants and projects submitted are included in the program of projects required to receive JARC funds. An FTA official acknowledged inconsistent information and interpretation of its guidance among some regional offices and stated that FTA has been using a document entitled “Questions and Answers on the Section 5310, JARC, and New Freedom Programs” posted on its Web site to reduce inconsistencies among regional offices. An FTA official also noted that the agency has periodically taken advantage of its regularly scheduled biweekly meetings between headquarters and regional staff to clarify JARC program guidance and to provide additional guidance to regional staff. 
Some recipients commented that delays in identifying designated recipients in large urbanized areas contributed to delays in awarding fiscal year 2006 funds and implementing transit projects. Some states and large urbanized areas did not identify designated recipients until fiscal year 2008. Moreover, although the majority of designated recipients have been identified, as of September 2009, 5 out of 152 large urbanized areas had not yet identified a designated recipient; these 5 areas allowed fiscal year 2006 funds to lapse. This may be because prospective designated recipients are reluctant to take on the role. Officials with the New York Metropolitan Transportation Authority reported that they did not want to be the designated recipient primarily because they were not sure they could fulfill the requirements with the limited amount of funds available to administer and manage the program. Specifically, SAFETEA-LU allows non-profit agencies to receive JARC funding and FTA requires that designated recipients ensure that subrecipients, which could include non-profit agencies, comply with federal requirements. Some non-profit agencies have not received FTA funds in the past and local officials were not confident these agencies had the financial capability to manage JARC funds and comply with FTA’s requirements. These agency officials expressed concern that they would be held liable if non-profit agencies ultimately did not comply with those requirements. In particular, many New York City transit agencies had these concerns and, as a result, the New York State DOT agreed to become the designated recipient for the New York City portion of the New York-Newark large urbanized area. Concerns about taking on the designated recipient role were not limited to areas without designated recipients. 
For instance, the Port Authority of Allegheny County, the major transit agency in the Pittsburgh large urbanized area, plans to transfer the designated recipient role to the area’s MPO—the Southwestern Pennsylvania Commission—because the administrative requirements exceeded its capacity and regional jurisdiction. Additionally, 8 states—4 of which we contacted—took on the role of designated recipient for 16 large urbanized areas. According to officials, the New York State DOT took on the responsibility primarily because it did not want funds to lapse, while the Wisconsin DOT did so because local authorities did not want the responsibilities. For instance, officials with the MPOs in Madison and Milwaukee told us they asked the Wisconsin DOT to be the designated recipient for those large urbanized areas because the state had experience with administering the program under TEA-21 and the MPOs had insufficient resources to take on the responsibilities. Because the process of identifying designated recipients in some areas took more than 2 years after SAFETEA-LU was enacted, it reduced the time available for those areas to conduct a coordinated planning process, develop a coordinated human services transportation plan, conduct a competitive selection process, and apply to FTA for funds before the September 30, 2008, deadline to award fiscal year 2006 funds. Designated recipients that were not identified until fiscal year 2008 were at a particular disadvantage because they had less time to apply for JARC funds. Designated recipients in large urbanized areas in California and Georgia and a subrecipient in Chicago all commented that the process for identifying and selecting designated recipients ultimately delayed applications to FTA for fiscal year 2006 funds and hindered implementing projects. 
Some recipients indicated that assigning multiple designated recipients to administer and manage JARC funds has resulted in additional steps to administer JARC. Under SAFETEA-LU, state agencies must be the designated recipients for small urbanized and rural areas, while local agencies, such as a major transit agency or MPO, can serve as designated recipients in large urbanized areas. However, the jurisdiction of some local agencies that were selected as designated recipients in large urbanized areas may include small urbanized and rural areas. Specifically, officials in Sacramento, Los Angeles, and San Francisco/Oakland in California, and Phoenix, Arizona, indicated that this infrastructure is disjointed and confusing because states are responsible for rural and small urbanized areas that may also be under the jurisdiction of designated recipients for other FTA programs in large urbanized areas. For example, the Sacramento Area Council of Governments—the MPO and designated recipient for Sacramento—has jurisdiction over the large urbanized area as well as the small urbanized and rural areas in the region for the federally required Transportation Improvement Program. Subrecipients that provide transit services for the large urbanized area as well as rural areas need to apply to both the state and the designated recipient in a large urbanized area to receive funds for the urbanized and rural areas as well as report to both the MPO and state. To facilitate coordination and share resources, some states, such as Arizona and California, have delegated the administration of JARC projects in small urbanized areas to designated recipients in large urbanized areas, while retaining jurisdiction over rural areas. 
For instance, California delegated the responsibility for conducting a competitive selection process to the Metropolitan Transportation Commission in the San Francisco-Oakland area and the Sacramento Area Council of Governments in Sacramento for small urbanized areas under those agencies’ jurisdiction. While delegating administration of JARC projects in small urbanized areas to designated recipients in large urbanized areas may facilitate coordination, it also results in additional work for designated recipients for both the state and large urbanized areas. As the designated recipient for small urbanized areas, the state is ultimately responsible for all aspects of funding distribution and oversight of subrecipients in those areas. Thus, it must ensure and certify that the statewide competitive selection process resulted in a fair and equitable distribution of funds. Consequently, states may want to review and assess projects for small urbanized areas that were selected as part of the large urbanized area’s competitive selection process to ensure that they were derived from the locally developed, coordinated public transit-human services transportation plan. Some states may want designated recipients for large urbanized areas to apply for small urbanized area funds through the state’s designated recipient, rather than directly to FTA. For instance, a designated recipient for a large urbanized area in California that had been delegated responsibility for overseeing the competitive selection process for small urbanized areas was instructed to send its selected JARC projects to the state for additional review and competition with other small urbanized areas in the state. The state then applied to FTA for funding. This process increased the time and effort to award funds for small urbanized areas. As previously mentioned, a greater proportion of funds lapsed in small urbanized areas, compared to funds allocated to large urbanized and rural areas. 
Some designated recipients suggested allowing states discretion to select designated recipients for small urbanized and rural areas, rather than requiring the state to take on that role. However, SAFETEA-LU requires that the state be the designated recipient for small urbanized and rural areas. Moreover, although SAFETEA-LU’s formula allocating funds by large and small urbanized and rural area classifications provides funds to areas that had not previously received JARC funds, some designated recipients indicated that the funding allocations between urban and rural areas limited their ability to distribute funds where they are most needed. Some recipients we contacted would like discretion to use funds where they are most needed in the state and the region. Currently, large urbanized areas receive more funding than small urbanized and rural areas, since the funding formula is based on the population and number of eligible low-income residents. In some cases this may meet needs, but officials from New Jersey Transit commented that the formula has disproportionately affected New Jersey’s small urbanized and rural areas, making it difficult to meet their transit needs because allocated funds cannot be transferred from large urbanized areas. Additionally, officials from the Oregon Department of Transportation indicated that the state could not transfer funds from its small urbanized areas to its rural areas, even if the state received more applications from rural areas than from small urbanized areas. In another case, officials from the Metropolitan Transportation Commission indicated that they had difficulties awarding JARC funds to potential recipients in Petaluma, California—a relatively wealthy, small urbanized area in northern California—because the area did not have a large concentration of low-income residents, so potential recipients did not qualify for the funds that were allocated to the area. 
As mentioned earlier, California was one of the states in which funds lapsed for small urbanized and rural areas. Designated recipients in California, New Jersey, and Wisconsin suggested eliminating the urbanized area classifications established in SAFETEA-LU and giving local agencies discretion to allocate funds where they are most needed in the region. According to officials, this would give designated recipients flexibility to transfer funds to areas that may need more funds, such as rural areas with fewer resources than large urbanized areas. Furthermore, designated recipients in large urbanized areas that cross state lines—such as New York City, New York-Newark, New Jersey—had to take additional steps to administer the program. Industry associations noted concerns about how large urbanized areas that crossed state lines would implement changes to JARC. Although the designated recipients in multi-state jurisdictions we interviewed indicated that awarding JARC funds was not as much of an issue as expected, the process did require additional administrative and coordination efforts. For instance, in several multi-state large urbanized areas—Chicago, Illinois-Northwestern Indiana; Augusta, Georgia-Aiken, South Carolina; and New York City, New York-Newark, New Jersey—officials on one side of the state line (northwestern Indiana, Augusta, and New York City, respectively) decided not to apply for or use all of the allocated JARC funding. Each of these cities transferred JARC funding to the city in the other state to ensure that the funds would be used. For example, the New York City Metropolitan Transportation Authority decided not to apply because it already provides extensive transit services 24 hours a day, 7 days a week and did not need the relatively small amount of JARC funds available. 
However, to accomplish this transfer, the designated recipients had to agree on how to split the apportionment and notify FTA annually of the split and the geographic area each recipient would manage. For example, when New York transferred some of the New York City portion of the JARC funds to New Jersey so that it could be used for a project in Newark, officials had to negotiate the formula for determining the amount of funds to transfer to New Jersey for Newark’s use. These negotiations took some time, which subsequently delayed New Jersey Transit’s efforts to award JARC funds in the Philadelphia, Pennsylvania-Camden, New Jersey area. In another instance, northwestern Indiana was not able to use its JARC funding during the summer of 2008 and transferred the funds to Illinois for Chicago to use. Officials with Chicago’s Regional Transportation Authority stated that they had to quickly identify projects to include in their application so that the funds would not lapse. Many state and designated recipient officials we interviewed considered the coordinated planning process beneficial and worthwhile. Recipients noted that including stakeholders from transit and planning agencies as well as human services agencies provided different perspectives and resources and brought together agencies that traditionally do not work together. As a result, the coordination process helped identify transit service needs and gaps. One planning agency stated that the coordinated planning requirement helped build on efforts it previously had in place because it compelled agencies to work together to receive federal funds and forced them to plan more strategically. 
However, designated recipients cited multiple factors that challenged coordination efforts:

Lack of sufficient funds, resources, and expertise. Many designated recipients noted that the limited amount of funds, lack of resources, and, in some cases, lack of planning expertise made coordination difficult. Some designated recipients that used the 10 percent of JARC funds SAFETEA-LU allows for administration and planning commented that the amount is insufficient to cover the cost of planning. For instance, a designated recipient in a large urbanized area in Georgia hired a consultant to conduct the coordinated planning process and develop a plan, but the allowance did not cover the cost of the consultant. In another case, Oregon can use about $59,800—the allowed amount for fiscal year 2006—for administrative purposes. However, officials noted that, in total, the state spent about $400,000 to develop coordinated plans for its 46 local and tribal agencies. Similarly, Arizona obtained a grant from the United We Ride initiative to help defray—but not entirely cover—the cost to develop a coordinated public transit-human services transportation plan for small urbanized and rural areas. Although FTA allows designated recipients to also combine JARC funds with 10 percent of the funds from the New Freedom and the Elderly Individuals and Individuals with Disabilities (commonly referred to as Section 5310) programs, some designated recipients decided not to use the funds for administrative activities because they wanted to use the relatively small amount of allocated funds for transportation services rather than support services. Six of the nine state-level designated recipients we spoke with indicated that rural areas, in particular, have fewer resources and thus find JARC’s coordinated planning requirements more challenging than do large and small urbanized areas. 
One state official stated that while some rural areas have used Section 5311 rural formula program funds to pay for planning and coordination costs, others that do not receive other FTA funds have no funds available for planning and coordination. In other areas, state budget issues may limit how funds can be used. For instance, Georgia applied to use JARC funds for administrative purposes, but current state budget problems have prohibited funds from being used to hire additional staff to coordinate and develop plans for rural areas. Rural areas in some states do not have a regional planning infrastructure or staff with planning expertise to conduct and develop coordinated public transit-human services transportation plans. For instance, Wisconsin officials indicated that their state does not have a regional rural planning infrastructure because the state develops rural area policies and derives projects from that process. An Illinois official commented that rural areas had never developed public transportation plans before SAFETEA-LU. The state hired planning coordinators to help develop coordinated plans in rural areas because those areas lacked staff with planning expertise. Nevertheless, recipients in other rural areas indicated that the planning process did not present challenges and that coordinated planning is critical in rural areas because they are isolated and depend on coordination to provide transit services. Despite these concerns, many recipients have developed coordinated public transit-human services transportation plans. These plans will need to be periodically updated. Recipients noted that challenges in coordinating and periodically updating plans will continue, particularly if stakeholders are asked to meet regularly but are not guaranteed to receive funds, given the limited amount of JARC funding available. 
Recipients indicated that the amount of effort required to coordinate and develop a plan, along with conducting a competitive selection process, is disproportionate to the small amount of JARC funds available.

Difficulties in engaging human services agencies. Another coordination challenge cited was convincing other organizations, such as human services agencies, to consistently participate in the planning process. While designated recipients encourage stakeholders from human services agencies to participate in the coordination effort, these agencies are not necessarily required to coordinate. Some designated recipients have required these agencies to participate in the coordinated planning process in order to receive funds. However, according to a designated recipient, the relatively small amount of JARC funds does not offer sufficient incentive for some agencies to participate. Some designated recipients suggested that federal agencies, such as the Department of Health and Human Services, that provide and allow funds to be used for transportation services should require grantees to participate in coordinated planning efforts. According to Department of Health and Human Services officials, federal officials are making efforts to increase participation by other organizations, but ultimately, local human services agencies decide whether or not to participate in the coordinated planning process. Officials with FTA and other federal agencies, including the Department of Health and Human Services and the Department of Labor, reported that they have been working through the Federal Interagency Coordinating Council on Access and Mobility to encourage federal grantees to participate in coordinated transportation planning efforts. 
In 2003, we recommended that federal agencies develop and distribute additional guidance to states and other grantees to encourage coordinated transportation by clearly defining allowable uses of funds, explaining how to develop cost-sharing arrangements for transporting common clientele, and clarifying whether funds can be used to serve individuals other than a program’s target population. While the respective federal agencies have since issued guidance encouraging grant recipients to share resources with local transit and planning agencies through the Federal Interagency Coordinating Council on Access and Mobility, the agencies are still developing a cost-sharing policy. However, officials from the departments of Labor and Health and Human Services indicated that local human services agencies may have other competing priorities that limit their ability to coordinate with transit agencies.

Difficulties integrating JARC planning requirements with existing planning requirements. Additionally, the differences between the requirements for JARC’s coordinated public transit-human services transportation plan and those for the state and metropolitan transportation plans can result in additional work for designated recipients. For instance, under SAFETEA-LU, states and MPOs are not required to include human services providers as stakeholders in the transportation planning process; states and MPOs are only required to provide stakeholders a reasonable opportunity to comment on the state and metropolitan transportation plans. JARC, on the other hand, requires designated recipients to include human services agencies in the planning process and to give them a role in developing the coordinated public transit-human services transportation plan. Some designated recipients indicated that integrating human services agency coordination for JARC into existing transportation planning processes would help streamline efforts. 
Designated recipients in four states and four large urbanized areas commented that identifying and generating matching funds has been challenging, particularly for small urbanized and rural areas. Although the state and local match for capital projects—20 percent—is less than the match for operating projects—50 percent—many recipients use JARC funds for operating projects and thus must identify and raise 50 percent of the cost of these projects. Some states, such as California and Pennsylvania, and large urbanized areas such as Chicago, have a dedicated source of funds, such as state or local sales taxes, to match federal transit programs, but other states, such as Georgia, and large urbanized areas— such as Milwaukee and Madison in Wisconsin and Savannah and Augusta in Georgia—do not. Recipients in locations with dedicated sources of matching funds also noted that those sources are not always stable. For example, a designated recipient and subrecipient relying on sales tax revenues dedicated to transit noted decreased sales tax revenues due to the current economic slowdown. Moreover, dedicated sources of matching funds are not always sufficient to cover program costs. For instance, designated recipients in New York urbanized areas have a dedicated tax that can be used for capital expenditures but not for operating projects. In addition, two recipients noted that funds from other federal agencies, such as TANF funds, are increasingly being used for purposes other than transportation, reducing the amount available for use as matching funds for JARC projects. Although some recipients we contacted indicated that the competitive process has been fair and transparent, regional FTA officials and a few designated recipients expressed concern over the lack of competitive JARC projects in some geographic areas. 
For instance, the designated recipient for the Phoenix large urbanized area noted that it received only one project application for the competitive selection process for fiscal years 2006 and 2007 funds. Some designated recipients noted that competition does not exist in certain areas because some potential subrecipients, particularly nonprofit organizations, cannot meet federal requirements, limiting the number of candidates that can apply for JARC funds. Several designated recipients indicated that nonprofit organizations may not have the capacity to meet federal mandates, such as FTA’s procurement requirements for purchasing vehicles, or to manage FTA-funded projects. Additionally, large transit agencies that had previously received JARC funds are in a better competitive position, which might discourage smaller transit agencies or nonprofit agencies from applying. For instance, Maricopa County’s Special Transportation Services in Phoenix, Arizona, has experience applying for federal funds, as it has received JARC funding since 1999, and is in a good position to compete. The agency also has the necessary resources available, such as a fleet of shuttle vans that are already in compliance with federal regulations and requirements. On the other hand, according to the designated recipient, a nonprofit agency in Phoenix that was new to the JARC program withdrew its application for funds after determining that it could not comply with federal regulations and the administrative requirements for purchasing vehicles. Several states and designated recipients in large urbanized areas noted that the requirements to manage and administer JARC duplicate those of FTA’s two other relatively small transit programs, New Freedom and Section 5310.
Although some designated recipients voiced concerns about consolidating the programs because they serve populations with different needs, others suggested streamlining or consolidating them because they have similar administrative requirements, such as coordinating with human services agencies and developing a coordinated plan. FTA allows designated recipients to streamline and consolidate planning efforts for all three programs. However, some recipients commented that applying for the funds separately for these programs is redundant and time-consuming. For instance, a subrecipient in Arizona submitted two identical applications—one for JARC and one for New Freedom—to the designated recipient, which in turn submitted similar applications to FTA for both JARC and New Freedom funds. Designated recipients noted that consolidating JARC with related FTA programs, such as the New Freedom and Section 5310 programs, would lessen the administrative effort required to receive and manage the programs. Transit industry associations have proposed consolidating JARC with other federal transit programs to streamline administration and eliminate the burden of coordinating and managing various FTA transit programs. AASHTO proposed consolidating JARC with FTA’s urbanized area and rural area formula grants programs and combining the New Freedom program with Section 5310. The American Public Transportation Association proposed consolidating JARC with New Freedom and Section 5310. Both associations indicated that the intent of the proposals is to reduce the programs’ administrative requirements while still maintaining the programs’ intent to provide transportation services to disadvantaged populations.
Nevertheless, associations representing elderly and disabled persons, such as Easter Seals, AARP, and the Association of Programs for Rural Independent Living, expressed concern that consolidating these programs would jeopardize advances in providing transportation to these populations. Officials from all of the associations—those representing the transportation agencies as well as those representing elderly and disabled persons—agreed that any changes to the JARC, New Freedom, and Section 5310 programs need to ensure that the programs’ intent remains intact. Although FTA has not completed an evaluation of the JARC program under SAFETEA-LU, recipients we spoke with indicated that projects have benefited low-income individuals by providing a means to get to work. FTA has improved its approach for evaluating the program since 2000 and currently has two studies under way to evaluate the JARC program under SAFETEA-LU. However, both studies—one on performance measures and another on the program’s economic impacts—may have limitations that could affect FTA’s assessment of the program. Although FTA’s evaluations of the JARC program are not yet complete, many designated recipients and subrecipients believe that the program is beneficial because it has helped people access and maintain jobs. State and local officials that we interviewed cited numerous examples in which projects benefited individuals because they provided a means for them to get to work. Officials noted that, without the transportation that JARC services provided, these individuals would not have been able to obtain and maintain jobs. For example, officials in Milwaukee, Wisconsin, noted that JARC bus routes provided 96,000 rides during a 6-month period, suggesting that many people were using the routes to get to jobs or job training.
Similarly, in New Jersey, surveys of individuals who use JARC services indicated that 70 percent of them could not get to work without the transportation services being provided. Despite these individual experiences, however, designated recipients and other state and local officials agreed that JARC projects funded under SAFETEA-LU have not been in effect long enough to determine the projects’ impact. Any evaluation of the projects would also have to consider program costs, such as the time and effort designated recipients and others invest to implement the program and comply with its requirements. FTA has contracted with CES and TranSystems—which have been evaluating the JARC program since 2003—to further develop and improve the performance measures established in FTA’s final JARC guidance in May 2007. The current performance measures include the number of rides on JARC-funded projects and the number of jobs accessed. Designated recipients will report data on JARC projects to CES and TranSystems in May 2009. These data will likely include projects funded under SAFETEA-LU, as most of the projects implemented under SAFETEA-LU were awarded in fiscal year 2008. FTA officials anticipate a report in September 2009. However, limitations inherent in the performance measures could affect the usefulness of this evaluation:

Actual or estimated number of rides (as measured by one-way trips): According to designated recipients and other state and local agency officials we spoke with, determining the number of rides to access jobs presents challenges because individuals use fixed route services for many reasons in addition to traveling to work, including shopping and medical appointments. For example, for projects that provide bus service to shopping malls, determining whether people are traveling to reach jobs at the malls, shop, or go to restaurants is difficult. In addition, CES and TranSystems noted that anyone can use these services, not just low-income populations.
Although transit agency officials noted that people are not comfortable providing information on their income, FTA officials noted that they are not asking designated recipients to report the number of riders served or the incomes of these riders. FTA officials also noted that because SAFETEA-LU requires that JARC projects be derived from a coordinated plan identifying priorities to meet the transportation needs of low-income individuals traveling to employment or related activities, they believe they can presume that projects serve predominantly low-income populations. Nevertheless, because anyone can use JARC services, FTA will not know with certainty whether the targeted population is using the services to find work or better paying jobs.

Number of jobs accessed: Although FTA does not plan to have designated recipients provide information on the number of jobs accessed, CES and TranSystems representatives, designated recipients, and other local officials we spoke with expressed concerns about this performance measure. They noted that assessing it is difficult because many designated recipients and local agencies do not have the information necessary to determine the number of jobs accessed in a given area by people using JARC services. Even if agencies could determine the number of jobs accessed, agencies would likely calculate it differently, resulting in inconsistent information. For example, while one official indicated his agency could survey riders, others indicated they would estimate the number of jobs accessed based on employment data or the number of businesses in the area. FTA officials and CES and TranSystems representatives explained that, rather than ask designated recipients to provide the number of jobs accessed, they intend to request that designated recipients provide data on the geographical areas in which they provide JARC services.
For fixed route projects, designated recipients will provide information on the geographic area surrounding the length of the route. For demand response services, designated recipients will provide the geographic area—such as the state or county—in which the service is provided. CES and TranSystems will use this information to estimate the number of jobs accessed. CES and TranSystems officials noted that, in some cases, the actual number of jobs accessed is known. For example, a subrecipient in Ohio provides transportation that only serves temporary employees traveling to jobs in a manufacturing plant. Consequently, the provider knows the number of jobs being accessed and can report that number rather than information on the geographical area. In addition to limitations in the performance measures, the method to estimate the number of jobs accessed has limitations. CES and TranSystems plan to use the geographic data to calculate a very rough estimate of the number of jobs accessed. CES and TranSystems will use a Census Bureau program, the Longitudinal Employer-Household Dynamics (LEHD) program, to estimate the number of jobs accessed by calculating the number of jobs in a given geographical area. For example, for fixed route services, CES and TranSystems will estimate the number of jobs within a ½-mile “zone” along the route, i.e., ¼-mile on either side of the route. For demand response services, CES and TranSystems will estimate the number of jobs within a geographical unit, such as the county in which a service is provided. According to CES and TranSystems officials, this approach only estimates the number of jobs accessed at a national level and cannot be used to estimate the number of jobs at a state or local level. This approach has other limitations:

The LEHD program does not include information from all 50 states. As of September 2008, 47 states supplied data. Of those, 42 were included in the program.
For demand response services, CES and TranSystems can estimate the total number of jobs and low-wage jobs within specific geographic boundaries, such as a county or state. However, if the demand response service area does not correspond directly to specific geographic units, job information is not available. FTA officials acknowledged these limitations and noted that CES and TranSystems have been working with FTA to improve the quality of the jobs accessed measure. Specifically, CES and TranSystems noted that this performance measure actually estimates the potential number of jobs, which overstates the number of jobs accessed. Consequently, CES and TranSystems developed two alternatives: translating ridership into jobs reached by assuming that individuals make round trips when traveling for work-related purposes, and dividing the number of trips by two; and comparing theoretical capacity to jobs accessed by determining the number of individuals who could be served and dividing by two (again assuming round trips). CES and TranSystems noted that each of these approaches has advantages and disadvantages. For example, while the first alternative directly translates ridership into jobs, it also assumes that all riders are traveling to jobs, which is not realistic. Moreover, it does not consider that different people use services on different days. As a result, the estimates could misstate the number of jobs accessed. The second approach, which compares theoretical capacity to jobs accessed, considers the transit system’s capacity. However, CES and TranSystems acknowledge that this approach may not be realistic, as services are not necessarily filled to capacity while in operation. Although these approaches attempt to address the weaknesses of the current efforts to estimate jobs accessed, they could still misstate the extent to which the target population benefits from the JARC program.
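The arithmetic behind the two alternatives is straightforward. The sketch below is our own illustration of that arithmetic, using hypothetical ridership and capacity figures; it is not the contractors' actual method or code.

```python
# Illustrative sketch of the two alternative jobs-accessed estimates.
# All figures are hypothetical, not data from the JARC evaluation.

def jobs_from_ridership(one_way_trips: int) -> float:
    """Alternative 1: assume every rider makes a round trip to work,
    so two one-way trips correspond to one job reached."""
    return one_way_trips / 2

def jobs_from_capacity(seats_per_run: int, runs: int) -> float:
    """Alternative 2: divide theoretical capacity (the number of riders
    the service could carry) by two, again assuming round trips."""
    return (seats_per_run * runs) / 2

# Hypothetical 6-month figures for a single JARC-funded route:
print(jobs_from_ridership(96_000))    # 48000.0 -- overstates jobs if some trips are non-work
print(jobs_from_capacity(40, 3_000))  # 60000.0 -- overstates jobs if buses run below capacity
```

As the comments note, both estimates err in the same direction whenever the round-trip or all-riders-are-workers assumptions fail, which is why the report cautions that they could still misstate the benefit to the target population.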
CES and TranSystems have also developed measures for other JARC services allowed under SAFETEA-LU that cannot be measured using the number of riders and number of jobs accessed, such as informational services and capital projects. CES and TranSystems are using a matrix to capture key information regarding these projects, such as the number of requests for information services that mobility managers received or the number of additional vehicles purchased. The second study, conducted under contract with the University of Illinois at Chicago (UIC), will focus on analyzing the economic impact of the JARC program using data from a survey of JARC service users, program managers, and coordinated human services transportation plan participants. As of May 2009, researchers were in the process of finalizing the survey instruments for this study. FTA expects UIC to issue a report in the spring of 2010. According to FTA officials and UIC researchers, the survey design and analysis for the planned 2010 report will use a methodology similar to that of a June 2008 survey-based economic analysis that UIC conducted on JARC outcomes under TEA-21. In the 2008 study, the researchers estimated the benefits and costs associated with the program. Potential benefits of the JARC program include the higher paying jobs that participants may gain as a result of being able to travel to areas with better paying jobs. Potential costs include those associated with operating the program. However, we noted several limitations in the 2008 study, including weaknesses in the design of the survey as well as in the analysis of data obtained from the survey. Although all surveys are potentially subject to sources of error, the researchers did not use standard practices that would help minimize these sources of error when developing and implementing the survey used in the 2008 report. This limitation could affect the reliability of the survey data used to estimate the economic impacts.
Specifically, the researchers may have overstated the benefits to the target population. For example, the survey estimates were reported as if they were based on a probability sample and were generalizable to the population that the JARC program targets. However, the estimates were not based on a probability sample and, therefore, should not be generalized. In addition, the researchers did not disclose this fact or take it into account when developing overall economic impacts. According to FTA officials, the researchers were careful not to generalize the results of their survey research. While the report does note that generalizing the results is difficult, it made several conclusions that, as written, appear to apply to the population of JARC users as opposed to the survey sample. For example, the report concluded that employment transportation services are providing valuable services to users and that those services are being appropriately targeted. In addition, the report indicates that the individuals using the services are greatly dependent on them, and that the benefits to the users are high and likely to persist over time. However, the report does not qualify the results to clarify that they apply only to the users surveyed. Without this qualification, the report appears to extend the results to all users, which would be inappropriate because the users surveyed were not selected as part of a probability sample. The absence of this qualification thus limits the usefulness of the assessment. The survey used in the 2008 study is also subject to nonsampling errors, including coverage error, nonresponse error, and measurement error:

The 2008 survey is subject to coverage error, which results when all members of the survey population do not have an equal or known chance of being sampled for participation.
Standard practice is to note to whom and when the surveys are disseminated. For example, if the transportation system providing JARC services operates 24 hours a day, researchers would have to survey across all days and time frames. Otherwise, individuals using the service on days and times that researchers do not survey would have a zero chance of being selected. The researchers indicated that they rode the selected services for 6 to 12 hours so they could cover at least one if not both rush-hour periods and, where appropriate, they also rode during off-peak hours, including late night and early morning. In addition, researchers said that in a few cases they administered surveys over multiple days to ensure that they surveyed a sufficient number of respondents. However, the researchers did not include in the study a detailed sampling plan that would fully explain how coverage issues were addressed. As a result, the extent of coverage error is unknown, and the 2008 survey results should not be generalized to all JARC users. The 2008 survey also suffers from nonresponse error, which results when people responding to a survey differ, in a way that is relevant to the study, from sampled individuals who did not respond. Standard practice to minimize this type of error includes using a systematic sampling approach when disseminating surveys and noting, to the extent possible, who is not participating, to see if nonrespondents differ from respondents. For the June 2008 study, UIC researchers indicated that they boarded buses and developed a rapport with some riders. However, the researchers acknowledged that not all riders were willing to complete the survey. In addition, the study does not identify the survey response rate and did not consider potential differences between respondents and nonrespondents. Without this information, the extent to which the estimates are biased is unknown.
Finally, the wording of the questions used in the 2008 survey may have resulted in inaccurate or uninterpretable responses. In general, standard practice includes pretesting and a technical review of the instrument before administering it to help minimize measurement error. Although the researchers indicated that they pretested the survey instrument, believed the pretest was thorough, and made changes based on the results, we found obvious weaknesses in the survey instrument. For example, we found that some response categories in the survey were not mutually exclusive or exhaustive, some questions appeared ambiguous, and instructions for responding were not clear. Collectively, we believe that these potential sources of error raise questions about the validity of the survey data as used to estimate the economic effect of the JARC program in the 2008 study. We found similar limitations in the draft survey instrument that the researchers have proposed to use for the 2010 study and provided specific technical review feedback to FTA and the researchers regarding these limitations. FTA officials indicated that the researchers had made numerous changes that incorporated our comments as well as the results of pretests and their own internal reviews. We also identified limitations in the economic analysis used to estimate the benefits and costs of the JARC program in the June 2008 study. For example, the researchers used a before-and-after approach to analyze the benefits and costs. That is, the program was analyzed in terms of its effect on individuals (for example, on changes in earnings) before and after they used the service. However, this approach does not indicate what would have happened without the program. For example, an individual’s earnings may have increased over time even without the program. The researchers said that because they implemented the survey just after the JARC service started, they believe they primarily captured the program’s effects.
The researchers also indicated that they plan to refine the survey questions for the next study to more precisely capture the program’s effect and exclude significant life events that might also affect an individual’s earnings. In addition, the researchers found that, overall, the net benefits of the program are positive. However, when analyzing more specific aspects of the program, such as the benefits and costs of fixed route and demand response services, the researchers reported that the program’s net benefits are negative. The researchers attribute this conflicting result to their use of averages in computing net benefits and indicated that they used averages to smooth out irregularities in the survey responses. For example, the study indicates that the survey data had a wide distribution with some large positive and negative values (for example, some survey respondents may have lost higher-paying jobs before using the JARC service and taken lower-paying jobs after using the service). However, the extent to which the reported irregularities in the survey data reflect reasonable differences in responses between riders or are due to the survey limitations discussed above is not clear. In addition, the reported economic results make it difficult to ascertain whether the program is generating positive net benefits and whether it is an efficient use of society’s resources. The researchers acknowledged some of these limitations and indicated they have taken steps to improve their research design for the current study, such as incorporating changes into their survey instrument. They also indicated that they plan to make other improvements. We believe that changes to address the limitations could improve the usefulness of the results in assessing the economic effect of the JARC program.
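The conflicting overall and subgroup results described above can arise whenever averages are taken over a wide distribution with a few extreme values. The sketch below illustrates the effect with entirely hypothetical per-rider net benefits (our own made-up figures, not data from the UIC study): a single large negative value can flip a subgroup's mean negative even when most riders in it benefited.

```python
# Hypothetical per-rider net benefits in dollars -- illustrative only.
# One large negative value (e.g., a rider who lost a higher-paying job
# before using the service) dominates the fixed-route subgroup's mean.
from statistics import mean, median

all_riders = [50, 60, 40, 55, 45, 70, 65, 80, 75, -500]  # program-wide responses
fixed_route = [50, 40, -500]                              # one service subgroup

print(mean(all_riders))     # 4.0 -> positive net benefit overall
print(mean(fixed_route))    # about -136.7 -> negative for the subgroup
print(median(fixed_route))  # 40 -> a median is less sensitive to the outlier
```

This is one reason the report notes that the averaged results make it hard to tell whether the program generates positive net benefits: the sign of a mean over such data depends heavily on a few extreme observations.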
Nevertheless, FTA does not have a comprehensive process in place to ensure that evaluations of the impact of the JARC program use generally accepted survey design and data analysis methodologies. Although FTA officials indicated that an FTA economist reviewed the researchers’ proposed data collection and evaluation methodology at the beginning of the project, FTA did not review the draft report. FTA officials indicated that they did not have the expertise to do so and noted that another entity—such as the Bureau of Transportation Statistics within DOT’s Research and Innovative Technology Administration—would need to assist with this type of evaluation. FTA officials said that since the study was published in 2008, its results have not informed FTA’s decisions about how to implement the JARC program. However, FTA indicated that the results of the study, as well as other evaluations, contribute to discussions on the program’s future. FTA has made progress in awarding JARC funds since Congress passed SAFETEA-LU in 2005, although FTA’s delay in issuing final guidance and other challenges contributed to a lapse in some fiscal year 2006 funds. Now that the final guidance has been issued and recipients have had experience implementing the program, the expectation is that more fiscal year 2007 and 2008 funds will be awarded to implement more projects and, accordingly, that fewer funds will lapse in the future. Recipients have faced several challenges in implementing JARC. A message we consistently heard from designated recipients and subrecipients is that the requirements for the current program are extensive considering the relatively small amount of funding available.
Although FTA and recipients are becoming accustomed to the new formula program and its requirements—which could lessen the severity of these challenges in the future—recipients told us that they continue to face challenges in a number of circumstances, such as when:

designated recipients for large urbanized areas have jurisdiction over small urbanized and/or rural areas and when the service provided by an individual transit provider overlaps two or more of these areas;

designated recipients are responsible for ensuring that organizations that do not traditionally receive FTA funding comply with FTA requirements;

local agencies, particularly those in rural areas, have limited staff, funding, and/or expertise needed to update coordinated public transit-human services transportation plans; and

JARC requirements duplicate the requirements for other programs, such as New Freedom and Section 5310.

The results of FTA’s evaluations of the JARC program under SAFETEA-LU may have limitations that could affect FTA’s assessment of the program. FTA’s current performance measures—number of rides and number of jobs accessed—have limitations that could misstate the program’s performance. FTA’s ongoing study of JARC’s economic outcomes, conducted by UIC, may also have limitations if it uses the same survey design and data analysis methodology used in UIC’s June 2008 study of the JARC program under TEA-21. While FTA does not have the expertise to review the methodologies used in these studies, other entities, such as the Bureau of Transportation Statistics within DOT’s Research and Innovative Technology Administration, could assist with this review.
Recognizing that FTA has improved its evaluation approach over time and that the JARC program is relatively small compared with FTA’s regular transit formula programs, drawing on this expertise within DOT could provide additional assurance that the methodologies used in the evaluations follow generally accepted survey design and data analysis practices without expending significant additional resources. We recommend that the Secretary of DOT direct FTA to:

Determine what actions FTA or Congress could take to address the challenges agencies have encountered. For example, these actions could include providing more specific guidance to assist large urbanized areas with jurisdiction over small urbanized or rural areas, or suggesting that Congress consider consolidating the application processes for JARC and other programs with similar requirements.

Ensure that program evaluations use generally accepted survey design and data analysis methodologies by conducting a peer review of current and future program evaluations, including UIC’s current study of the JARC program. This review could be conducted with the assistance of another agency within DOT, such as the Bureau of Transportation Statistics within DOT’s Research and Innovative Technology Administration.

DOT reviewed a draft of this report and provided comments by e-mail requesting that we incorporate information providing additional perspective on FTA’s progress in implementing the JARC program, including its evaluations of the program, which we have done. For example, DOT officials noted that FTA’s current evaluation framework responds to prior GAO concerns by using an access-to-jobs measure rather than an access-to-employment-sites measure. We agree that FTA’s current methodology for evaluating the JARC program—although still limited in some respects—represents an improvement over the agency’s previous approaches and that the agency has been responsive to GAO’s prior concerns.
DOT also provided technical corrections, which we have incorporated as appropriate. We are sending copies of this report to congressional committees with responsibility for transit issues; the Secretary of Transportation; the Administrator, Federal Transit Administration; and the Director, Office of Management and Budget. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-2834 or at wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. We are mandated to evaluate the Job Access and Reverse Commute (JARC) program every 2 years under the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU). This report addresses (1) the extent to which FTA has awarded available JARC funds for fiscal years 2006 through 2008 and how recipients are using the funds since the changes went into effect under SAFETEA-LU, (2) the challenges recipients have faced in implementing the program, and (3) how FTA plans to evaluate the JARC program. To determine the extent to which FTA awarded available JARC funds, we collected and analyzed JARC grant award data from FTA’s Transportation Electronic Awards Management (TEAM) System. To assess the reliability of TEAM data, we (1) reviewed existing documentation related to the data source and (2) obtained information from the system manager on FTA’s data reliability procedures. We also brought discrepancies we found in the data to FTA officials’ attention so they could resolve them before we conducted our analyses. We determined that the data were sufficiently reliable for the purposes of our report.
To examine how recipients have used JARC funds since SAFETEA-LU went into effect, we interviewed 26 designated recipients—9 states and 17 agencies representing large urbanized areas—and 16 subrecipients, all selected as a nonprobability sample. Table 4 lists the 26 designated recipients and table 5 lists the 16 subrecipients we interviewed. We collected and reviewed information from these recipients on the different types of JARC projects being planned or implemented, including demand response and fixed route transit services, auto loan projects, mobility management services, and vanpool services. We selected the designated recipients based on a diverse range of criteria that included states and large urbanized areas that: received an increase or a decrease in JARC funds as a result of changing to the formula program; were previously interviewed for our November 2006 report; had awarded all or portions of fiscal year 2006 funds, as of May 2008; had not identified a designated recipient as of May 2008; and were suggested by FTA and industry association officials. We also chose large urbanized areas that crossed multiple states and considered the geographic locations of states and large urbanized areas to obtain a wider range of geographic coverage and dispersion. We selected subrecipients that covered the three types of areas that were apportioned JARC funding under SAFETEA-LU—large and small urbanized as well as rural areas—and that were recommended by designated recipients. The views gathered in these interviews cannot be generalized to the entire JARC recipient and stakeholder population because the interviewees were selected as a nonprobability sample.
To identify the challenges recipients have encountered in implementing the program, we interviewed officials from our selected nonprobability sample of designated recipients and subrecipients as well as 19 stakeholders, such as metropolitan planning organizations and local public transit agencies, to obtain their views on challenges associated with implementing the JARC program. Table 6 lists the 19 stakeholders we interviewed. In addition, we interviewed officials from the FTA regional offices responsible for the states and large urbanized areas we visited to obtain their perspectives on challenges identified in their regions. We also interviewed officials from industry associations, including the American Association of State Highway and Transportation Officials (AASHTO), the American Public Transportation Association, the Community Transportation Association of America, and the National Association of Regional Councils, to identify challenges faced by the agencies these associations represent. Our interviews with AASHTO included discussions with officials from state departments of transportation from California, Illinois, Iowa, New York, North Dakota, Oregon, and Texas. We summarized our interview responses to identify common challenges in implementing the program. We also reviewed relevant laws and regulations, including SAFETEA-LU and FTA’s final guidance on administering JARC, and other FTA information, such as the Frequently Asked Questions document posted on FTA’s Web site, to clarify the guidance. To identify challenges faced by human services agencies associated with the coordinated human services transportation planning process, we interviewed officials from the U.S. Department of Labor and the U.S. Department of Health and Human Services.
Additionally, we interviewed officials of associations representing the elderly and disabled, including Easter Seals, AARP, the Association of Programs for Rural Independent Living, and North Country Independent Living to obtain their perspectives on consolidating JARC with other FTA transit programs, such as the New Freedom, Elderly Individuals and Individuals with Disabilities, and urbanized and rural area programs. To determine how FTA plans to evaluate the JARC program, we reviewed previous evaluations and interviewed officials from FTA and two contractors, TranSystems/CES and the University of Illinois at Chicago (UIC), that are evaluating the JARC program. For each evaluation, we assessed the contractors’ scope and methodology. Specifically, for the TranSystems/CES evaluation, which focuses on JARC performance measures, we determined the contractor’s plans to collect and analyze data on JARC projects. We also interviewed designated recipients, subrecipients and other state and local officials to obtain their perspectives on FTA’s JARC performance measures. For the UIC evaluation, which focuses on JARC’s economic impact and outcomes, we reviewed UIC’s June 2008 evaluation of the JARC program under TEA-21 using standard survey and economic principles and practices as criteria, and interviewed UIC researchers to identify similarities and differences between UIC’s methodologies for the prior and current studies and the implications those methodologies could have on UIC’s current evaluation. We also reviewed and assessed the survey document UIC used on the prior evaluation as well as the survey documents UIC plans to use in the current study and provided feedback to the researchers. 
[Table: Lapsed fiscal year 2006 JARC apportionments, by area. The table's columns were not fully recoverable; legible entries include Aguadilla-Isabela-San Sebastian, Puerto Rico ($530,843, 100 percent); Daytona Beach-Port Orange, Florida; Philadelphia, Pennsylvania-New Jersey-Delaware-Maryland; and Port St. Lucie, Florida ($134,102, 100 percent). Totals lapsed were $10,879,217 (76.8 percent) for large urbanized areas, $5,245,434 (59.8 percent) for small urbanized areas in the states, and $2,469,732 (66.2 percent) for non-urbanized, rural areas in the states.]

Other key contributors to this report were Sara Vermillion (Assistant Director), Lynn Filla-Clark, Kathleen Gilhooly, Timothy Guinane, Jennifer Kim, Heather May, Jaclyn Nelson, Karen O’Conor, Lisa Reynolds, Terry Richardson, and Amy Rosewarne.
Established in 1998, the Job Access and Reverse Commute Program (JARC)--administered by the Federal Transit Administration (FTA)--awards grants to states and localities to provide transportation to help low-income individuals access jobs. In 2005, the Safe, Accountable, Flexible, Efficient Transportation Equity Act--A Legacy for Users (SAFETEA-LU) reauthorized the program and made changes, such as allocating funds by formula to large and small urbanized and rural areas through designated recipients, usually transit agencies and states. SAFETEA-LU also required GAO to periodically review the program. This second report under the mandate examines (1) the extent to which FTA has awarded JARC funds for fiscal years 2006 through 2008, and how recipients are using the funds; (2) challenges faced by recipients in implementing the program; and (3) FTA's plans to evaluate the program. For this work, GAO analyzed data and interviewed officials from FTA, nine states, and selected localities. FTA is making progress in awarding funds and has awarded about 48 percent of the $436.6 million in JARC funds apportioned for fiscal years 2006 through 2008 to 49 states and 131 of 152 large urbanized areas. Recipients plan to use the funds primarily to operate transit services. However, about 14 percent of fiscal year 2006 funds lapsed. According to FTA officials, these funds lapsed for several reasons. For example, some applicants did not meet administrative requirements in time to apply for funds. FTA officials are working with states and localities to reduce the amount of funds that lapse in the future. Recipients plan to use 65 percent of fiscal year 2006 funds to operate transit services, 28 percent for capital projects, and 7 percent for administrative costs.
States and local authorities GAO interviewed cited multiple challenges in implementing the JARC program; a common concern is that, overall, the effort required to obtain JARC funds is disproportionate to the relatively small amount of funding available. One challenge cited by recipients was that FTA's delay in issuing final guidance and the process to identify designated recipients reduced the time available to secure funds before the funds expired. In addition, although recipients considered the coordinated planning process beneficial, many cited factors that hindered coordination, including lack of resources and the reluctance of some stakeholders to participate. Moreover, although the JARC program requires human service providers to be included as stakeholders, other transportation planning requirements do not, complicating the coordinated planning process. Some designated recipients also expressed concerns about identifying stable sources of matching funds and duplicative efforts in administering JARC with other FTA programs. These challenges have delayed applications for funds and project implementation, and contributed to the lapse in fiscal year 2006 funds. Although FTA has not completed an evaluation of the JARC program under SAFETEA-LU, recipients we spoke with indicated that projects have benefited low-income individuals by providing a means to get to work. Since 2000, FTA has refined its approach for evaluating the program and currently has two studies under way to evaluate the JARC program under SAFETEA-LU. However, both studies may have limitations that could affect FTA's assessment of the program. One of these studies--due in September 2009--will evaluate projects using FTA's performance measures; specifically, the number of rides provided and number of jobs accessed. However, collecting reliable data for these measures is problematic, particularly for the number of jobs accessed. 
The other study--due in the spring of 2010--will include results of a survey of JARC recipients and individuals using JARC services and will focus on the program's impact on those using the services. However, this study will use a methodology similar to that used in a prior study, which had limitations in its survey instrument design and data analysis. FTA does not have a comprehensive process in place to assess whether its researchers use generally accepted survey design and data analysis methodologies.
The operation of the Medicare program is extremely complex and requires close coordination between CMS and its contractors. CMS is an agency within HHS but has responsibilities for expenditures that are larger than those of most other federal departments. Under Medicare’s fee-for-service system—which accounts for over 80 percent of program beneficiaries—physicians, hospitals, and other providers submit claims to receive reimbursement for services they provide to Medicare beneficiaries. In fiscal year 2000, fee-for-service Medicare made payments of $176 billion to hundreds of thousands of providers who delivered services to over 32 million beneficiaries. About 50 Medicare claims administration contractors carry out the day-to-day operations of the program and are responsible not only for paying claims but also for providing information and education to providers and beneficiaries who participate in Medicare. Contractors that process and pay part A claims (i.e., for inpatient hospital, skilled nursing facility, hospice care, and certain home health services) are known as fiscal intermediaries, and those that administer part B claims (i.e., for physician, outpatient hospital services, laboratory, and other services) are known as carriers. Contractors periodically issue bulletins that outline changes in national and local Medicare policy, inform providers of billing system changes, and address frequently asked questions. To enhance communications with providers, the agency recently required contractors to maintain toll-free telephone lines to respond to provider inquiries. It also directed them to develop Internet sites to provide another reference source. While providers look to CMS’ contractors for help in interpreting Medicare rules, they remain responsible for properly billing the program.
In congressional hearings held earlier this year, representatives of physician groups testified that they felt overwhelmed by the volume of instructional materials sent to them by CMS and its contractors. Following up on these remarks, we contacted 7 group practices served by 3 carriers in different parts of the country to determine the volume of Medicare-related documents they receive from the CMS central office, carriers, other HHS agencies, and private organizations. Together, these physician practices reported that, during a 3-month period, they received about 950 documents concerned with health care regulations and billing procedures. However, a relatively small share—about 10 percent—was sent by CMS or its contractors. The majority of the mail these physician practices reported receiving came from sources such as consulting firms and medical specialty or professional societies. Congress has also held hearings on management challenges facing the Medicare program. We recently testified that HHS contracts for claims administration services in ways that differ from procedures for most federal contracts. Specifically: there is no full and open competition for these contracts; contracts generally must cover the full range of claims processing and related activities; contracts are generally limited to reimbursement of costs without consideration of performance; and CMS has limited ability to terminate these contracts. Since 1993, HCFA has repeatedly proposed legislation that would increase competition for these contracts and provide more flexibility in how they are structured. In June 2001, the Secretary of HHS again submitted a legislative proposal that would modify Medicare’s claims administration contracting authority. CMS relies on its 20 carriers to convey accurate and timely information about Medicare rules and program changes to providers who bill the program.
However, our ongoing review of the quality of CMS’ communications with physicians participating in the Medicare program shows that the information given to providers is often incomplete, confusing, out of date, or even incorrect. MRCRA provisions establish new requirements and funding for CMS and its contractors that could enhance the quality of provider communication. We found that carriers’ bulletins and Web sites did not contain information clear or timely enough for physicians to rely on those sources alone. Further, the responses to phone inquiries by carrier customer service representatives were often inaccurate, inconsistent with other information physicians received, or not sufficiently instructive to enable them to bill the program properly. Our review of the quarterly bulletins recently issued by 10 carriers found that they were often unclear and difficult to use. Bulletins over 50 pages in length were the norm, and some were 80 or more pages long. They often contained long articles, written in dense language and printed in small type. Many of the bulletins were also poorly organized, making it difficult for a physician to identify relevant or new information. For example, they did not always present information delineated by specialty or clearly identify the states where the policies applied. Moreover, information in these bulletins about program changes was not always communicated in a timely fashion, so that physicians sometimes had little or no notice before a program change took effect. In a few instances, notice of the program change had not yet appeared in the carrier’s bulletin by the change’s effective date. To provide another avenue for communication, carriers are required to develop Internet Web sites. However, our review of 10 carrier Web sites found that only 2 complied with all 11 content requirements that CMS has established. Also, most did not contain features that would allow physicians and others to readily obtain the information they need.
For example, we found that the carrier Web sites often lacked logical organization, navigation tools (such as search functions), and timely information—all of which increase a site’s usability and value. Five of the nine sites that had the required schedule of upcoming workshops or seminars were out of date. Call centers supplement the information provided by bulletins and Web sites by responding to the specific questions posed by individual physicians. To assess the accuracy of information provided, we placed approximately 60 calls to the provider inquiry lines of 5 carriers’ call centers. The three test questions, all selected from the “frequently asked questions” on the carriers’ Web sites, concerned the appropriate way to bill Medicare under different circumstances. The results of our test, which were verified by a CMS coding expert, showed that only 15 percent of the answers were complete and accurate, while 53 percent were incomplete and 32 percent were entirely incorrect. We found that CMS has established few standards to guide the contractors’ communication activities. While CMS requires contractors to issue bulletins at least quarterly, they require little else in terms of content or readability. Similarly, CMS requirements for web-based communication do little to promote the clarity or timeliness of information. Instead, they generally focus on legal issues—such as measures to protect copyrighted material—that do nothing to enhance providers’ understanding of, or ability to correctly implement, Medicare policy. In regard to telecommunications, contractor call centers are instructed to monitor up to 10 calls per quarter for each of their customer service representatives, but CMS’ definition of what constitutes accuracy and completeness in call center responses is neither clear nor specific. 
Moreover, the assessment of accuracy and completeness counts for only 35 percent of the total assessment score, with the representative’s attitude and helpfulness accounting for the rest. CMS conducts much of its oversight of contractor performance through Contractor Performance Evaluations (CPEs). These reviews focus on contractors that have been determined to be “at risk” in certain program areas. To date, CMS has not conducted CPE reviews focusing on the quality or usefulness of contractors’ bulletins or Web sites, but it has begun to focus on call center service to providers. Again, the CPE reviews of call centers focus mainly on process—such as phone etiquette—rather than on an assessment of response accuracy. CMS officials, in acknowledging that provider communications have received less support and oversight than other contractor operations, noted the lack of resources for monitoring carrier activity in this area and providing carriers with technical assistance. Under its tight administrative budget, the agency spends less than 2 percent of Medicare benefit payments on administrative expenses. Provider communication and education activities currently have to compete with most other contractor functions in the allocation of these scarce Medicare administrative dollars. CMS data show that there are fewer than 26 full-time equivalent CMS staff assigned to oversee all carrier provider relations efforts nationwide, representing just over 1 full-time equivalent staff member for each Medicare carrier. This low level of support for provider communications leads to poorly informed providers, who are therefore less likely to bill the Medicare program correctly for the services they provide. Despite the scarcity of resources, CMS has begun work to expand and consolidate some provider education efforts, develop venues to obtain provider feedback, and improve the way some information is delivered.
These initiatives—many in the early stages of planning or implementation—are largely national in scope and are not strategically integrated with similar activities by contractors. Nevertheless, we believe that these outreach and education activities will enhance some physicians’ ability to obtain timely and important information and improve their relationships with CMS. For example, CMS is working to expand and consolidate training for providers and contractor customer service representatives. Its Medlearn Web site offers providers computer-based training, manuals, reference materials, and a schedule of upcoming CMS meetings and training opportunities. CMS has produced curriculum packets and conducted in-person instruction for contractor provider education staff to ensure that contractors present more consistent training to providers. CMS has also arranged several satellite broadcasts each year on Medicare topics for hospitals and educational institutions. In addition, CMS established the Physicians’ Regulatory Issues Team to work with the physician community to address its most pressing problems with Medicare. Contractors are also required to form Provider Education and Training Advisory groups to obtain feedback on their education and communication activities. We believe that the provisions in Section 5 of MRCRA can help develop a system of information dissemination and technical assistance. MRCRA’s emphasis on contractor performance measures and the identification of best practices squarely places responsibility on CMS to upgrade its provider communications activities. For example, it calls on CMS to centrally coordinate the educational activities provided through Medicare contractors, to appoint a Medicare Provider Ombudsman, and to offer technical assistance to small providers through a demonstration program.
We believe it would be prudent for CMS to implement these and related MRCRA provisions by assigning responsibility for them to a single entity within the agency dedicated to issues of provider communication. Further, MRCRA would channel additional financial resources to Medicare provider communications activities. It authorizes additional expenditures for provider education and training by Medicare contractors ($20 million over fiscal years 2003 and 2004), the small provider technical assistance demonstration program ($7 million over fiscal years 2003 and 2004), and the Medicare Provider Ombudsman ($25 million over fiscal years 2003 and 2004). This would expand specific functions within CMS’ central office, which would help to address the lack of administrative infrastructure and resources targeted to provider communications at the national level. Although we have not determined the specific amount of additional funding needed for these purposes, our work has shown that the current level of funding is insufficient to effectively inform providers about Medicare payment rules and program changes. MRCRA also establishes contractor responsibility criteria to enhance the quality of their responses to provider inquiries. Specifically, contractors must maintain a toll-free telephone number and put a system in place to identify who on their staff provides the information. They must also monitor the accuracy, consistency, and timeliness of the information provided. Current law and long-standing practice in Medicare contracting limit CMS’ options for selecting claims administration contractors and frustrate efforts to manage Medicare more effectively. We have previously identified several approaches to contracting reform that would give the program additional flexibility necessary to promote better performance and accountability among claims administration contractors. CMS faces multiple constraints in its options for selecting claims administration contractors. 
Under these constraints, the agency may not be able to select the best performers to carry out Medicare’s claims administration and customer service functions. Because the Medicare statute exempts CMS from competitive contracting requirements, the agency does not use full and open competition for awarding fiscal intermediary and carrier contracts. Rather, participation has been limited to entities with experience processing these types of claims, which have generally been health insurance companies. Provider associations, such as the American Hospital Association, select fiscal intermediaries in a process called “nomination,” and the Secretary of HHS chooses carriers from a pool of qualified health insurers. CMS program management options are also limited by the agency’s reliance on cost-based reimbursement contracts. This type of contract reimburses contractors for necessary and proper costs of carrying out Medicare activities but does not specifically provide for contractor profit or other incentives. As a result, CMS generally has not offered contractors the fee incentives for performance that are used in other federal contract arrangements. Medicare could benefit from various contracting reforms. Perhaps most importantly, directing the program to select contractors on a competitive basis from a broader array of entities would allow Medicare to benefit from efficiency and performance improvements related to competition. A full and open contracting process could result in the selection of stronger contractors at better value. Broadening the pool of entities allowed to hold Medicare contracts beyond health insurance companies would give CMS more contracting options. Also, authorizing Medicare to pay contractors based on how well they perform rather than simply reimbursing them for their costs could result in better contractor performance.
We also believe that the program could benefit from efficiencies gained by having contractors perform specific functions, called functional contracting. The traditional practice of expecting a single Medicare contractor in each region to perform all claims administration functions has effectively ruled out the establishment of specialized contracts with multiple entities that have substantial expertise in certain areas. Moving to specialized contracts for the different elements of claims administration processing would allow the agency to use its limited resources more efficiently by taking advantage of the economies of scale that are inherent in some tasks. An additional benefit of centralizing carrier functions in each area is the opportunity for CMS to oversee carrier operations more effectively. Functional contracting would also result in more consistency for Medicare-participating providers. Several key provisions of MRCRA would address these elements of contracting reform. MRCRA would establish a full and open procurement process that would provide CMS with express authority to contract with any qualified entity for claims administration, including entities that are not health insurers. MRCRA would also encourage CMS to use incentive payments to encourage quality service and efficiency. For example, a cost-plus-incentive-fee contract adjusts the level of payment based on the contractor’s performance. Finally, MRCRA would modify long-standing practice by specifically allowing for contracts limited to one component of the claims administration process, such as processing and paying claims or conducting provider education and technical assistance activities. The scope and complexity of the Medicare program make complete, accurate, and timely communication of program information necessary to help providers comply with Medicare requirements and appropriately bill for their services.
The backers of MRCRA recognize the need for more resources devoted to provider communications and outreach activities, and we believe the funding provisions in the bill will help assure that more attention is paid to these areas. MRCRA also contains provisions that would provide a statutory framework for Medicare contracting reform. We believe that CMS can benefit from this increased flexibility, and that many of these reform provisions will assist the agency in providing for more effective program management. Madam Chairman, this concludes my prepared statement. I would be happy to answer any questions that you or other Subcommittee Members may have. For further information regarding this testimony, please contact me at (312) 220-7767. Jenny Grover, Rosamond Katz, and Eric Peterson also made key contributions to this statement.
Complete, accurate, and timely communication of program information is necessary to help Medicare providers comply with program requirements and appropriately bill for their services. Information provided to physicians about billing and payment policies is often incomplete, confusing, out of date, or even incorrect. GAO found that the rules governing Centers for Medicare and Medicaid Services (CMS) contracts with its claims processors lack incentives for efficient operations. Medicare contractors are chosen without full and open competition from among health insurance companies, rather than from a broad universe of potential qualified entities, and CMS almost always uses cost-only contracts, which pay contractors for costs incurred but generally do not offer any type of performance incentives. To improve Medicare contractors' provider communications, CMS must develop a more centralized and coordinated approach consistent with the provisions of the Medicare Regulatory and Contracting Reform Act (MRCRA) of 2001. MRCRA would require that CMS (1) centrally coordinate contractors' provider education activities, (2) establish communications performance standards, (3) appoint a Medicare Provider Ombudsman, and (4) create a demonstration program to offer technical assistance to small providers. MRCRA would also broaden CMS authority so that various types of contractors would be able to compete for claims administration contracts and their payment would reflect the quality of the services they provide.
As shown in table 1, according to SOI data, somewhat over half of the approximately 130 million individual tax returns filed for tax year 2002 were prepared by a paid preparer. This held true at all income levels we analyzed, although the income level exceeding $100,000 had the highest percentage—64 percent. Because not all paid preparers provide preparer information on the returns they prepare, the percentage of returns actually prepared by another person for pay is probably somewhat higher. As table 2 shows, this consistency of use did not hold for other groupings of individual tax returns prepared by paid preparers. Use of paid preparers differed among different types of returns, taxpayers of different filing statuses, filers taking different types of deductions, and claimants and nonclaimants of the earned income tax credit (EIC). According to the breakdown in table 2, one-third of taxpayers filing the simplest individual tax form—the Form 1040EZ—used a paid preparer for tax year 2002, and two-thirds of a low-income working group—those claiming the EIC—paid someone to prepare their tax returns. Table 3 shows that whether taxpayers prepared their own returns or paid a preparer, their tax returns showed a median of hundreds of dollars in tax refunds for tax year 2002. However, overall and at the four lowest income categories, those using paid preparers had higher medians at statistically significant levels. At the $0–20,000 income level, a major part of the reason refunds are so different for those who used paid preparers versus those who prepared their own returns appears to be the EIC. As table 4 shows, those who claimed the EIC and used a paid preparer had tax returns showing a median refund more than $900 higher than those who claimed the EIC and prepared their own returns. Different types of paid preparers are governed by different regulations.
All are subject to Internal Revenue Code (IRC) penalties, and all paid preparers who choose to file electronically are subject to IRS Electronic Return Originator (ERO) rules. However, only paid preparers who choose to represent taxpayers before IRS are governed by IRS Circular No. 230 regulations. In addition, California and Oregon have their own regulations that apply to all paid preparers. Table 5 summarizes how different types of paid preparers are covered by different regulations. All paid preparers are subject to IRC penalties and the regulations that implement them. According to the Internal Revenue Manual, penalties are IRS’s key tools against noncompliant preparers. Table 6 lists civil penalties that apply specifically to preparers and some of the criminal penalties (sections 7206, 7207, and 7216) that apply to paid preparers. Some civil penalties for preparers who engage in improper conduct are found in IRC sections 6694 and 6701. These include a $1,000 per return penalty if the understatement of the taxpayer’s liability was due to the preparer’s willful attempt to understate liability or reckless or intentional disregard for the rules. They also include a $1,000 penalty on preparers who help taxpayers understate their liability. In addition, they include a $250 per return penalty if the preparer knew or reasonably should have known that the understatement of a taxpayer’s liability was due to a position that had no realistic possibility of being sustained. IRC section 6695 contains many identification penalties that apply to preparers. For instance, a preparer must sign the return after it is completed but before the taxpayer signs it and provide the taxpayer a copy of the return. The preparer must also put his or her social security number or other number issued by IRS on the return. The penalty for failing to meet these requirements is $50 per failure but cannot annually exceed $25,000 per person for each type of failure. 
Most penalties in this section are not to be assessed if the preparer shows that the violation was due to reasonable cause or not due to willful neglect. All penalties in this section can be assessed in conjunction with other penalties. IRC section 6695 includes requirements specific to the EIC. It requires paid preparers to take certain actions in determining the taxpayer’s eligibility for the EIC and the amount of EIC claimed. For instance, preparers are required to complete an eligibility checklist to determine if a child is a “qualifying child” by meeting residency, age, and relationship requirements. Of particular importance in our investigation, a qualifying child must have lived with the taxpayer for over half of the year. Preparers are also subject to criminal sanctions arising from improper conduct. Civil and criminal penalties can be imposed for the same violation. Preparers who help taxpayers prepare false or fraudulent returns may be liable and could receive a prison term and a fine of up to $100,000. Other penalties, both civil and criminal, protect taxpayers from paid preparers improperly disclosing the information they provide for their tax return. Section 6713 imposes a civil penalty on preparers who improperly use or disclose taxpayer information. Section 7216 imposes a criminal penalty on preparers who knowingly or recklessly disclose or use return information. IRS’s Small Business/Self Employed Division has responsibility for assessing and collecting monetary penalties against any paid preparers who do not comply with civil tax laws when filing returns. Under section 7407, IRS may also bring a civil action in District Court to seek an injunction prohibiting preparers from preparing taxes. IRS’s Criminal Investigation Division investigates paid preparers suspected of violating criminal tax laws. 
In fiscal year 2005, Criminal Investigation conducted 248 investigations under its Return Preparer Program, with 140 of these resulting in recommended prosecutions.

Some IRS rules and regulations apply only to paid preparers in certain circumstances. For example, ERO rules apply to preparers who are EROs—entities that IRS has approved to file electronic returns. EROs may or may not be preparers. ERO rules also apply to ERO principals and responsible officials. Circular 230 regulations apply to enrolled agents, attorneys, and CPAs.

IRS has broad authority to monitor and sanction any paid preparer who is authorized to file tax returns electronically. To participate in the IRS e-file program, applicants must pass an IRS suitability check that may include a background check, a credit history check, a tax compliance check, and a check for prior e-file noncompliance. An IRS official told us that although some EROs do not provide preparation services, most do. IRS monitors EROs to ensure compliance with revenue procedures and publications that govern IRS’s e-file program. For instance, according to an IRS official, IRS continues to check whether program participants remain suitable to participate. It also suggests that EROs verify the identity and taxpayer identification number of taxpayers to protect the e-file program from fraud and abuse. Violation of provisions in either a revenue procedure or an IRS publication could lead to sanctions. IRS sanctions range from a letter of reprimand for a relatively minor infraction to expulsion from the e-file program for more severe infractions. According to IRS, in 2005 it conducted 1,104 monitoring visits for the e-file program, resulting in 322 sanctions or proposed sanctions.

Circular 230 imposes standards on enrolled agents, attorneys, and CPAs. According to the Circular, in general, only practitioners may represent taxpayers before IRS; however, unenrolled preparers may represent taxpayers in certain situations.
An attorney or CPA may represent taxpayers before IRS by filing a written declaration with IRS that he or she is licensed as either an attorney or a CPA. Under Circular 230, tax preparers who are not attorneys or CPAs but who wish to have the unrestricted privilege of representing taxpayers must be approved as enrolled agents with IRS. Enrolled agent applicants must either pass an examination on tax matters or have past IRS employment experience. They are also required to meet continuing education requirements. Circular 230 describes the standards of conduct that practitioners must follow to maintain the right to represent taxpayers before IRS. There are generally three categories of misconduct covered under Circular 230: (1) misconduct while representing a taxpayer, (2) misconduct while preparing a taxpayer’s return, and (3) misconduct not directly involving IRS representation. In terms of the second category—tax preparation—one standard is the realistic possibility standard. This standard restricts practitioners from signing tax returns if the position does not have a realistic possibility of being sustained by IRS. In addition, practitioners are required to advise taxpayers of any noncompliance issue or omission from tax returns submitted to IRS, advise taxpayers of the consequences of this noncompliance or omission, and exercise due diligence to ensure accuracy in preparing tax returns. Practitioners are also prohibited from charging contingent fees, that is, fees based on whether the return will avoid challenge from IRS, for some services including preparation of an original tax return. Finally, practitioners are prohibited from making fraudulent, coercive, or deceptive advertising statements. IRS’s Office of Professional Responsibility (OPR) administers the rules set forth in Circular 230. 
OPR may censure, suspend, or disbar any practitioner from practice before IRS if the practitioner violates any Circular 230 regulation, is shown to be incompetent or disreputable, or misleads or threatens a client with intent to defraud. OPR receives complaints from taxpayers and IRS employees regarding tax preparers. The American Jobs Creation Act of 2004 added the authority to impose a monetary penalty on a practitioner who violates Circular 230, and on an employer or firm if it knew, or should have known, of the misconduct. The act also added violations of Circular 230 to the list of misconduct that can lead to an injunction. In fiscal year 2005, OPR investigated 719 practitioners, resulting in 320 sanctions.

Under the section of Circular 230 on diligence as to accuracy, a practitioner is “presumed to have exercised due diligence for purposes of this section if the practitioner relies on the work product of another person and the practitioner used reasonable care in engaging, supervising, training, and evaluating the person, taking proper account of the nature of the relationship between the practitioner and the person.” According to an IRS official, “another person” includes an unenrolled preparer, and enrolled agents are responsible for ensuring that unenrolled preparers working for them do high quality work. According to the official, if there were a problem with an unenrolled preparer’s work, IRS could take action against the employing enrolled agent.

Although all states have licensing requirements for CPAs and attorneys, only two states have licensing requirements for unenrolled preparers. California and Oregon both require unenrolled paid preparers to register with state agencies and meet continuing education requirements. California requires that paid preparers pass a 60-hour approved course and obtain a tax preparer bond to become registered. California also requires 20 hours of continuing education annually.
In Oregon, tax preparers must be at least 18 years old, have a high school degree or equivalent, complete 80 hours of income tax law education, and pass a tax preparer examination. Oregon also requires 30 hours of continuing education annually. While Oregon requires enrolled agents to register, enrolled agents must meet far fewer registration requirements than unenrolled preparers.

In addition to state licensing requirements, tax practitioners often belong to professional organizations such as the American Institute of Certified Public Accountants, the American Bar Association, or the National Association of Enrolled Agents. These organizations impose general standards of conduct on the actions of their members, including those who prepare tax returns.

Taxpayers relying on paid preparers to provide them with accurate, complete, and fully compliant tax returns may not get what they pay for. Tax returns prepared for us in the course of our investigation often varied widely from what we determined the returns should and should not include, sometimes with significant consequences. Many of the problems we identified put preparers, taxpayers, or both at risk of IRS enforcement actions. The National Research Program’s review of 2001 tax returns also found many errors on returns prepared by paid preparers, and some of those errors were more frequent on paid-prepared returns than on self-prepared returns.

All 19 of our visits to tax return preparers affiliated with chains showed problems. Nearly all of the returns prepared for us were incorrect to some degree, and several of the preparers gave us very bad tax advice, particularly when it came to reporting non-W-2 business income. Only 2 of 19 tax returns showed the correct refund amount, and in both of those visits the paid preparer made mistakes that did not affect the final refund amount. While some errors had fairly small tax consequences, others had very large consequences.
Incorrectly reported refunds ranged from overclaims of nearly $2,000 to underclaims of over $1,700. Figures 1 and 2 below show how the tax return preparers we visited completed key lines on the 1040 form, and explanations of some of these lines follow the figures. Also, appendix I has descriptions of selected visits we made to paid preparers, describing two example visits with fewer issues and two with serious compliance problems.

Identifying information. Taxpayer names and social security numbers were correctly entered on all but one of our returns, with one preparer entering a wrong middle initial. Some preparers asked for this information orally, and some asked us to complete information worksheets.

Filing status. All of our prepared tax returns showed the correct filing status for the two different scenarios we used. The plumber’s return always correctly indicated married filing jointly, and the sales worker’s return always indicated her filing status as head of household.

Exemptions. Exemption information entered on the returns prepared for us included some mistakes. All 9 of the plumber’s returns listed the correct number of exemptions. However, the plumber’s daughter was listed with a different last name on 1 return. Also, both of the plumber’s children were listed with first and middle names on another return, despite the 1040 form clearly calling for dependents’ first and last names. Of the 10 sales worker returns prepared for us, 7 incorrectly indicated both children lived with the taxpayer in 2005. When asked where her children lived, our staff always said that one lived with her and the other with the child’s grandmother throughout 2005. However, this question was not always asked.
In general, incorrectly reporting the number of dependent children may have implications for other lines on a tax return, specifically the dollar amount of personal exemptions on line 42, the child tax credit reported on line 52, and the additional child tax credit on Form 8812 and line 68.

Wages and investment income. Most income documented by third-party reporting forms (Forms W-2 or 1099) was included on our returns correctly, but not in every case. Wages shown on Forms W-2 were correctly listed on line 7 (see fig. 1) of all 19 of the tax returns prepared for us in our investigation. Similarly, tax-exempt interest (line 8a) and qualified dividends (line 9b) were listed on a Form 1099 from a mutual fund and were entered correctly on all 9 of the plumber’s returns. The same Form 1099 also included ordinary dividends, but 1 preparer entered the wrong amount on line 9a. Also, the mutual fund Form 1099 listed capital gains, but 2 returns did not include capital gains income on line 13.

State tax refunds. State tax refunds were also shown on Forms 1099 given to the paid preparers we visited, but 8 out of 19 preparers handled them incorrectly. In the plumber scenario, the state tax refund should have been reported as income (line 10) on this year’s return, but this was not done on 5 of the 9 returns prepared for him. The sales worker did not itemize deductions for 2004, so her state tax refund was not supposed to be reported as income this time. However, 2 of 10 preparers included her state tax refund on line 10, and a third preparer listed the state tax refund amount from the state Form 1099 as unemployment compensation on line 19.

Business income. Reporting “side income”—income from casual self-employment arrangements—was very problematic in many of our visits to paid preparers. Both of our taxpayer scenarios included self-employment income, and we told the preparer that we had such income whenever we were asked.
Also, if the preparer did not ask about non-W-2 business income, we still told the preparer that we had such income before the end of the visit. Despite being told of the side income in every case, 2 out of 9 plumber return preparers and 8 out of 10 sales worker return preparers did not report the income as required. Even in cases where the side income was reported, several paid preparers gave us incorrect information. Several advised us that reporting such income was our decision because IRS would not know of it unless we reported it. One preparer told our investigator posing as a sales worker that she did not have to report the income unless it was over $3,200. Another said that her income could not be reported because she did not have the names and the social security numbers of the children she watched. On the other hand, the discussion of side income with the paid preparers (when a discussion took place) often, to the sales worker’s potential benefit, included detailed probing by the preparer to identify expenses to offset the income we described.

The amount of business income we built into our scenarios, and that preparers often did not include on the tax returns that they prepared, was not unusual for wage-earning taxpayers who underreported business income for tax year 2001. According to data taken from IRS’s recent NRP efforts, for tax year 2001, about 37 percent of taxpayers with wages and business income who underreported their business income did so by amounts of up to $1,500, and about 65 percent underreported their business income by up to $5,000.

Deductions. Only 2 of the plumber’s 9 returns reported the correct amount of itemized deductions (line 40). Returns done by 2 preparers claimed the standard deduction, even though it was about $4,000 less than the total amount of itemized deductions we included in the scenario. Five other preparers itemized deductions for the plumber, but made other mistakes.
These errors changed the amount of the plumber’s refund, although sometimes by fairly small amounts. One preparer, however, missed deductions for property taxes worth about $4,000, meaning that the claimed refund was hundreds of dollars lower than it should have been. On the other hand, all 10 of the sales worker returns claimed the standard deduction, which was to the taxpayer’s advantage in these cases because she had very few deductions to itemize. In 2002, we reported that as many as 2 million taxpayers did not minimize their taxes because they failed to itemize their deductions and that about half of these taxpayers had returns prepared by another person.

Foreign tax credit. The plumber’s Form 1099 from his mutual fund showed a small amount of foreign taxes paid, but only 1 of the 9 preparers we visited claimed the foreign tax credit (line 47) for which the taxpayer was eligible.

Child-care expenses. The sales worker had child-care expenses, but none of the 10 preparers we visited included the credit for child- and dependent-care expenses (line 48) for which she was eligible. Some preparers told her that she could not claim the credit because she did not have the social security number of her child-care provider. This information was incorrect. The instructions for Form 2441 state that a taxpayer who attempts to collect the social security number of his or her child-care provider but is unsuccessful can report that fact on Form 2441 and still claim the credit.

Education credits. In the plumber scenario, one of the taxpayer’s children was a college student in the second year of postsecondary education, but 6 of 9 paid preparers made some sort of error in determining the line 50 education credit—either improperly including items in expenses, not claiming the credit most advantageous to the taxpayer, or both.
The expenses and the year in school made the Hope education credit far more advantageous to the taxpayer than either the tuition and fees deduction (line 23) or the Lifetime Learning credit. Of the 9 plumber’s returns, 6 included the Hope credit, but 3 of the 6 preparers involved improperly included books among the expenses, increasing the credit by about $100 above what it should have been. One preparer included the tuition and fees deduction instead of the Hope credit, and 2 others claimed the Lifetime Learning credit, reducing the taxpayer’s refund by hundreds of dollars. In 2005, we reported that many tax returns, including many prepared by paid preparers, made such suboptimal choices among the three postsecondary education tax preferences.

Earned income credit. The EIC on line 66a was another area where paid preparers made very significant mistakes. Of the 10 returns prepared for the sales worker, 5 reported two children on Schedule EIC, Earned Income Credit, instead of the one child who lived with the taxpayer in 2005 and was eligible for the EIC. IRS has estimated that incorrectly claimed children are the largest category of errors for the EIC, accounting for about $3 billion of the estimated $8.5 billion to $9.9 billion in EIC overclaims in tax year 1999. IRS regulations require that paid preparers ask a series of questions to determine eligibility for the EIC, including whether children lived with the taxpayer in the United States for more than half of the year. We were posing as a fairly unsophisticated taxpayer who was unaware of EIC eligibility rules, so we did not volunteer that one of our children did not live with us in 2005. Whenever we were asked if our children lived with us, however, we said that one did and one did not. Only 1 preparer asked all of the required questions. Three preparers asked about the names, dates of birth, and social security numbers of the two children but never asked where the children lived in 2005.
Three preparers gave us a worksheet to complete that asked most but not all of the required questions, but 2 of these preparers still entered two children when we wrote down that one child did not live with the sales worker at all during the year. In 1 of these cases, another employee reviewed the return.

Refunds. As a result of the errors described above, some claimed refunds on line 73a on our 19 returns were either substantially higher or lower than they should have been. Figure 3 shows the deviation from the correct refund amount under our two scenarios. The pairs of bars shown in the figure indicate returns prepared by employees affiliated with the same chain. As shown in the figure, refunds reported for the plumber were incorrect in all 9 cases—sometimes by only small amounts, but at other times by substantial sums. Refunds reported for the sales worker were correct in 2 cases and overstated in the other 8 cases. The paid preparers who arrived at a refund $218 too high ignored the sales worker’s side income but reported the correct number of children living with her when calculating the EIC. The preparers who arrived at overclaimed refunds of $1,956 did not include the side income and reported two children for EIC purposes. The 19 paid preparers we visited arrived at the correct refund amount only twice. On 5 returns, all for the plumber, they understated our refund amount by a total of $3,465. On 12 returns (4 for the plumber and 8 for the sales worker) they overstated the refund by a total of $12,169—a total of $1,735 in overstated refunds for the plumber and $10,434 for the sales worker.

Preparer’s identifying information. In addition to various computational errors, some preparers also did not include identifying information required on the 1040 forms they completed.
IRS regulations require that paid preparers include a signature or typed name, a social security number or “PTIN” (an IRS-issued unique identifier for paid preparers), and the name and employer identification number of their employer. Four of our 19 returns had no preparer signature, and 2 had no preparer social security number or PTIN. All but 1 return prepared for us included a company name and employer identification number; that return was missing all identifying information.

Preparer services and fees. Most paid preparers we visited offered services besides the federal tax returns we requested. Some preparers offered to prepare the state tax return for us. In a few cases the preparer gave us completed state tax returns along with the federal return and did not indicate that there was an additional charge. Whenever asked, we said we only wanted a federal tax return. Electronic filing was always an option. One preparer proceeded to electronically file our return, even after we said we wanted to mail in a paper return. In this case, the preparer did not ask us to provide a personal identification number or ask us directly to sign a form authorizing the electronic filing, as required by IRS regulations.

We were also usually offered ways to get our refunds more quickly than waiting for a check mailed from IRS. Some of these options involved RALs—short-term loans made to taxpayers and paid off with tax refunds—and others involved direct deposit alternatives. In some cases, what were clearly RALs were not described as loans but as “options” or “bank products.” One preparer gave us a RAL application to sign at the start of the visit without explaining what it was we were being asked to sign.
Another preparer told us the size of the refund we could receive in 12 to 48 hours but did not give us the amount we would receive if we were willing to wait for a check from IRS, did not identify the faster refund as a loan, and did not explain that the amount we would receive was reduced by the amount of the fee associated with the option. In this case, the fee for the RAL was between about $470 and about $570, after subtracting the amount charged to prepare the return. With a refund amount of about $5,000 and assuming a 10-day wait for the refund, this means that the annual percentage rate for the loan was between about 380 percent and about 470 percent. The fees charged in our 19 visits varied widely, sometimes between offices affiliated with the same chain, and were sometimes significantly larger or smaller than the original estimate we were given. In both the plumber and the sales worker scenarios, we received 1 set of returns at no cost, and another paid preparer reduced the fee for the sales worker without explaining why. Figure 4 shows the fees charged by each of the 19 paid preparers we visited. The pairs of bars in figure 4 represent the fees charged by offices of the same chain for the same scenario. In only 1 of the 9 cases where the same firm prepared the same tax return were we charged the same amount. In some cases, the preparer stressed that one advantage of purchasing a RAL or paying the fees to arrange for direct deposit of the refund would mean that the cost of the visit would come out of the refund and that we would not have to pay any money on the day of the visit. One of the common sense steps we mentioned earlier when choosing or working with a paid preparer is to make sure you understand how much the services you are getting cost. For this reason, we asked for an estimate of fees at the start of every paid preparer transaction. 
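The RAL pricing in the visit above can be checked with a short back-of-the-envelope calculation. This is our reconstruction of the arithmetic, assuming the fee is compared against the net proceeds (refund minus fee) and annualized over the 10-day wait; the dollar figures are the approximate amounts described above, and the function name is ours.

```python
# Reconstruction of the RAL annual-percentage-rate arithmetic described above.
# The figures are the approximate amounts from the visit; the exact loan
# terms are assumptions for illustration.

def ral_apr(fee: float, refund: float, days_saved: int = 10) -> float:
    """APR of a refund anticipation loan: the fee as a fraction of the
    net proceeds (refund minus fee), annualized over the days saved."""
    proceeds = refund - fee
    return fee / proceeds * (365 / days_saved) * 100

print(round(ral_apr(470, 5_000)))  # roughly 380 percent
print(round(ral_apr(570, 5_000)))  # roughly 470 percent
```

The striking rates follow from annualizing a large fee over a very short loan term: the borrower pays roughly 10 percent of the proceeds to receive the money about 10 days early.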
Eight preparers either did not provide an estimate or gave an estimate with the qualifier that the fee would depend on the forms required. In the other 11 cases, we were quoted a fee or a range that did not depend on a variety of forms, and in 9 of those the fee we were ultimately charged was within the quoted range, within $30 of the fee quoted, or less than the estimate. Some preparers provided a detailed receipt showing the forms that were prepared, but some receipts only showed the final fee. None of the more detailed receipts, however, included specific costs for individual forms.

According to IRS officials, paid preparers and taxpayers risk enforcement action by filing a tax return that includes the types of misstatements or omissions that we have described. According to the officials, although IRS seldom has clear evidence about what transpires between a preparer and a taxpayer, if IRS were to uncover problems with the preparation of real tax returns similar to several that we found, the preparers would be subject to civil sanctions. Several penalties would be applicable depending on the facts and circumstances of each situation. IRS officials said that if the preparers had been preparing tax returns to be actually filed, many of them would have been subject to civil penalties for such things as negligence and willful or reckless conduct. For example, as stated earlier in our testimony, if a paid preparer encourages a taxpayer not to report or to erroneously report transactions on his or her tax return, resulting in a tax-due understatement or refund overstatement, the preparer could be assessed penalties of up to $1,000 for willful or reckless disregard of tax rules and regulations. In both of our scenarios, information provided to preparers included self-employment income that the preparers did not encourage reporting.
According to IRS officials, the preparer is clearly responsible for properly reporting all income, including the self-employment income in these scenarios, on a taxpayer’s return. They added that although preparers are not required to audit taxpayers to uncover unreported income, they must make reasonable inquiries to correctly report income. IRS officials also said that civil penalties would be applicable to other issues we encountered, depending on the facts and circumstances. Preparers who did not ask all the EIC due diligence questions would be subject to the penalty for the failure to be diligent in determining EIC eligibility. Similarly, preparers who improperly included hundreds of dollars of books in the education credit taken would be subject to a penalty for negligence. IRS officials we spoke with, who included representatives of Criminal Investigation, said that although the dollar amounts of errors made by the practitioners might not result in prosecutions, criminal sanctions such as willful preparation of a false or fraudulent return might apply. In addition to paying the tax due after correcting the return and any related late payment interest, the taxpayer may also be assessed a penalty, depending on the facts and circumstances of each situation, according to IRS officials. For example, if taxpayers substantially understate income, overstate deductions, or provide other incorrect information resulting in decreased tax or improperly high refunds, they may be assessed an accuracy-related penalty. The penalty could be assessed for any failure to comply with the tax laws, including the failure to report self-employment income. Because the returns we had prepared were not real returns and were not filed, penalties would not apply. However, we have referred matters we encountered to IRS so that any appropriate follow-up actions can be taken. 
IRS’s tax year 2001 NRP data also indicate that tax returns prepared by paid preparers contained a significant level of errors. As shown in table 7, IRS audits of returns prepared by a paid preparer showed a higher error rate—56 percent—than audits of returns prepared by the taxpayer—47 percent. Errors in this context changed either the tax due or the amount to be refunded. A similar statistically significant relationship existed for all income groups of $80,000 and below that we studied. As noted before, however, paid preparers are used more often for more complicated returns than for simpler ones, although we were unable to gauge the full extent of this pattern. Also, the fact that errors were made on a return done by a paid preparer does not necessarily mean the errors were the preparer’s fault; the taxpayer may be to blame. The preparer must depend on the information provided by the taxpayer.

The different error rates for paid preparer and self-prepared returns translated into different amounts that taxpayers owed IRS after audit. For instance, as shown in table 8, taxpayers using a paid preparer owed a median of $363 to IRS after audit, compared with a median of $185 for taxpayers preparing their own returns. This type of disparity in taxes owed existed for every income level we studied except for the $40,001–60,000 and $60,001–80,000 ranges, in which the differences were not statistically significant.

Table 9 shows some specific Form 1040 line items for which the NRP paid preparer and self-prepared error rates differed from each other in a statistically significant way. We also found problems with these line items in our visits to paid preparers. For example, NRP audits revealed that, for the Form 1040 line showing the amount of standard deduction or itemized deductions taken, about 23 percent of self-prepared individual returns had errors, compared with about 31 percent of returns done by paid preparers.
Paid preparer and self-prepared error rates did not differ from each other in a statistically significant way for the business income and education credits line items, two other line items for which we had found problems in our visits.

Our limited review and the problems we found do not permit observations about the quality of the work of paid tax preparers in general. Undoubtedly, many paid preparers do their best to provide their clients with tax returns that are both fully compliant with the tax law and cause them to neither overpay nor underpay their federal income taxes. Furthermore, as we observed in 2003, it is easy to understand how the complexity of the tax code brings many taxpayers to conclude that they should turn to a paid preparer. As we also observed in 2003, however, our tax system depends on taxpayers accurately completing and filing their returns. With their important role in helping taxpayers meet their obligations, paid preparers become a critical quality-control checkpoint for the tax system. Where we saw serious problems in our few visits, these same preparers may make similar mistakes on the genuine tax returns they complete this year. Their mistakes and misstatements may also ripple even further through the system as the taxpayers they serve may come to believe that, for example, non-W-2 business income does not have to be reported, and they may even spread that misinformation among their friends and neighbors.

In light of the importance of paid preparers in our tax system today, knowing whether what we found is the exception or the rule in the paid tax preparation services industry is critical. With better information about the extent of problems, IRS can better target its limited enforcement and education resources. Finally, our observation in 2003 that taxpayers who choose to use paid preparers need to be wise consumers is even more important today in light of our most recent findings.
As IRS notes on its Web site under “Tips for Choosing a Tax Preparer,” no matter who prepares a tax return, the taxpayer is legally responsible for all of the information on that tax return. We discussed our findings and observations with senior IRS officials, and they generally agreed with our message.

We recommend that the Commissioner of Internal Revenue conduct necessary research to determine the extent to which paid preparers live up to their responsibility to file accurate and complete tax returns based on information they obtain from their customers. In conducting this research, the Commissioner should consider whether the methodology we used would provide IRS with a more complete understanding of paid preparers’ performance.

Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time.

For further information on this testimony, please contact Michael Brostek at (202) 512-9110 or brostekm@gao.gov. David Lewis, Assistant Director; Mario Artesiano; Paul Desaulniers; Danielle Free; Leon Green; George Guttman; Christine Hodakievic; Lindsey Houston; Shirley Jones; Jason Kelly; Lawrence Korb; Barbara Lewis; John Mingus; Karen O’Conor; and Cheryl Peterson made key contributions to this testimony.

None of our 19 visits to paid preparers were problem-free, but some had relatively minor issues while others had more serious problems. The following are descriptions of selected visits we made to paid preparers. For each scenario, we provide one example of a visit that had fewer compliance issues than most of our visits under the same scenario, and one example that had more serious problems than most.

During this site visit, the paid preparer asked various questions and prepared a return with few problems. For example, presumably to determine the taxability of a state income tax refund, the preparer asked about the previous year’s itemized deductions and their amount.
The preparer also asked which year of school the college-age child was in and whether the tuition in question had been paid in 2005, questions needed to determine the applicability of the Hope education credit. While the preparer did not ask about side income, when the taxpayer volunteered that he had non-W-2 income, the preparer included it on the return without any discussion of changing it or leaving it off. The preparer also probed for expenses to offset it. The refund on the completed tax return was only $4 below the correct amount. The difference was due to the preparer (1) overclaiming the amount of personal property tax paid by including nondeductible fees and (2) not taking the credit for foreign taxes paid. The preparer also listed noncash charitable donations as cash donations, though this did not affect the amount of the refund. The cost of the visit to the paid preparer was about $100 more than the amount originally quoted. However, at the start of the visit, the preparer had said that the actual amount would depend on the number of forms used. One of the forms used was the Schedule B, Interest and Ordinary Dividends. While this form might have been used to capture information the taxpayer provided, it did not need to be filed with IRS, since the income amounts were less than the minimums requiring the form. The paid preparer did not offer other services such as a Refund Anticipation Loan (RAL) to the taxpayer. Costly issues for the taxpayer during this site visit were the paid preparer's failure to itemize deductions and the preparer's decision to claim the tuition and fees deduction instead of the Hope education credit. The preparer did not itemize the deductions even though the taxpayer showed the preparer the documents supporting itemization. The preparer even asked questions about medical expenses and charitable contributions. 
The preparer also asked about whether there were any nonreimbursed employee expenses and about whether the college-age child was a full-time student. On another issue, when discussing the taxpayer’s side income, the preparer wondered if the taxpayer had reported it the previous year, which he had. The preparer suggested also reporting it this time so as not to arouse suspicion, but at a much lower amount than the taxpayer identified. The taxpayer declined the offer, and the preparer ultimately included the correct amount. The preparer did not provide the taxpayer with a completed Schedule C-EZ or a Schedule SE, although information from both was reported on the form 1040. In addition, the preparer did not include the state tax refund as income. When asked about the tax return’s price at the beginning of the session, the preparer could not give an exact estimate but instead provided a range. However, the preparer ended up not charging the taxpayer at all since the refund involved was so small. In fact, the refund was about $1,700 smaller than the correct amount. This example is 1 of the 2 retail sales worker returns in which the refund computed by the paid preparer was the same amount we computed. The preparer reported the correct number of children for EIC purposes and asked most of the due diligence EIC questions. Although the preparer claimed the wrong number of children as exemptions, that did not affect the final refund amount. Although the preparer did not ask directly about side income, the preparer included it when we offered the information. The price charged was the same as the price quoted, and the preparer pointed out that a RAL was in fact a loan. The preparer did not, however, sign the tax return or provide any other preparer information on it. In this example, the paid preparer’s return resulted in the tax return showing a refund of almost $2,000 more than the correct amount. 
The return did not include the side income even though the preparer asked about anything else that should be considered and the taxpayer mentioned it. The preparer said the taxpayer would need records of income and expense to be able to report the income. The return included two children as qualifying for the EIC and the additional child tax credit even though only one lived with the taxpayer. The preparer appeared to go through an on-screen EIC checklist but did not ask the taxpayer the questions. The papers taken away from the preparer included an EIC worksheet with the answers completed by the preparer, some of them incorrect. There were also other issues with the return prepared. First, it did not include child-care expenses as the taxpayer was told the expenses would have to exceed $7,300 to be claimed. Second, it incorrectly included the state tax refund as income because the preparer said the amount was for unemployment compensation. Third, the return did not include the preparer’s social security number although it did show his name. The preparer offered a RAL that would have been available in an hour at a cost of about $400. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Despite the importance of paid tax return preparers in helping taxpayers fulfill their obligations, little data exist on the quality of services they provide. Paid preparers include, for example, enrolled agents, who are approved by the Internal Revenue Service (IRS) once they pass an examination on tax matters or demonstrate past IRS employment experience, and unenrolled preparers, who include self-employed individuals and people employed by commercial tax preparation chains. GAO was asked to determine (1) what the characteristics were of tax returns done by paid preparers, (2) what government regulation exists for paid preparers, and (3) what specific issues taxpayers might encounter in using paid preparers. To do its work, GAO analyzed IRS data, reviewed paid preparer regulatory requirements, and had tax returns prepared at 19 outlets of several tax preparation chains. Many taxpayers choose to pay others to prepare their tax returns rather than prepare their own returns. According to the most recent reliable data, about 56 percent of all the individual tax returns filed for tax year 2002 used a paid preparer, with higher paid preparer usage among taxpayers with more complicated returns such as those claiming the earned income credit (EIC). All paid preparers are subject to some IRS regulations and may be penalized if they fail to follow them. For example, all paid preparers must identify themselves on the returns they prepare and must not deliberately understate a taxpayer's tax liability. When the EIC is involved, paid preparers must also ask specific questions to determine a taxpayer's eligibility for the credit. In GAO visits to commercial preparers, paid preparers often prepared returns that were incorrect, with tax consequences that were sometimes significant. Their work resulted in unwarranted extra refunds of up to almost $2,000 in 5 instances, while in 2 cases they cost the taxpayer over $1,500. 
Some of the most serious problems involved preparers not reporting business income in 10 of 19 cases; not asking about where a child lived or ignoring GAO's answer to the question and, therefore, claiming an ineligible child for the EIC in 5 out of the 10 applicable cases; failing to take the most advantageous postsecondary education tax benefit in 3 out of the 9 applicable cases; and failing to itemize deductions at all or failing to claim all available deductions in 7 out of the 9 applicable cases. GAO discussed these findings with IRS and referred to it problems that were found. Had these problems been discovered by IRS on real returns, IRS officials said that many of the preparers would have been subject to penalties for such things as negligence and willful or reckless disregard of tax rules.
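The residency question at the heart of the EIC problems above can be illustrated with a minimal sketch. The function name and the months-based simplification are assumptions for illustration only; actual EIC eligibility also involves relationship, age, and income tests not modeled here.

```python
def meets_eic_residency_test(months_lived_with_taxpayer: int) -> bool:
    """Simplified screen (illustrative only): a qualifying child generally
    must live with the taxpayer in the United States for more than half
    of the tax year."""
    return months_lived_with_taxpayer > 6

# The due-diligence question some preparers skipped or ignored:
print(meets_eic_residency_test(12))  # True: child may qualify
print(meets_eic_residency_test(0))   # False: child does not qualify
```

A preparer who asks where and how long the child lived with the taxpayer, and records the answer, is applying exactly this kind of screen before claiming the credit.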
The transfer of defense items to friendly nations and allies is an integral component in both U.S. national security and foreign policy. The U.S. government authorizes the sale or transfer of military equipment, including spare parts, to foreign nations either through government-to-government agreements or through direct sales from U.S. manufacturers. The Arms Export Control Act and Foreign Assistance Act of 1961, as amended, authorize the DOD foreign military sales program. The Department of State sets the overall policy concerning which countries are eligible to participate in the foreign military sales program. DOD, through the military services, enters into foreign military sales agreements with individual countries. The Air Force Security Assistance Center, which is an activity of the Air Force Materiel Command, is responsible for the administration of the Air Force’s foreign military sales program. The center’s responsibilities start with the initial negotiation of the foreign military sale and end with the delivery of parts and completion of all financial aspects of the agreements. The center uses an automated management information system, the Security Assistance Management Information System, to support its management of the program with accurate and timely information. For blanket order cases, the system uses criteria such as an item’s national item identification number, a federal supply class, or a federal supply group to restrict the parts available to foreign military sales customers. Once the system has verified a country’s eligibility and approved a requisition, the requisition is sent to a supply center to be filled and shipped. The overall foreign military sales process, as it applies to the Air Force, is shown in figure 1. This report addresses the portion of the process relating to the Air Force’s approval or disapproval of foreign countries’ requisitions for classified and controlled spare parts under blanket order cases. 
Blanket orders are for a specific dollar value and generally cover classes of parts that a country may need rather than a specific item within a class. Under blanket orders, the Air Force restricts classes of items, such as munitions and nuclear spare parts, from being requisitioned. The Air Force's internal controls for foreign military sales using blanket orders are not adequate to prevent countries from ordering and receiving classified and controlled spare parts that they are not eligible to receive. We found that (1) controls based on supply class restrictions were ineffective and resulted in erroneously approved requisitions for shipment, and written policies for recovering the erroneously shipped items did not exist; (2) the Air Force did not validate modifications to its Security Assistance Management Information System related to blanket orders or test the system's logic for restricting requisitions; and (3) command country managers did not always document reasons for overriding either the Security Assistance Management Information System or foreign military sales case manager recommendations not to ship classified spare parts. As a result of these inadequate internal controls, classified and controlled spare parts were shipped to countries not authorized to receive them. The Air Force Security Assistance Center has taken or plans to take actions to correct these issues. The Air Force's Security Assistance Management Information System erroneously validated foreign country requisitions for classified and controlled spare parts when those requisitions cited an incorrect federal supply class. The Air Force attempts to prevent countries from obtaining classified and controlled spare parts by restricting them from receiving spare parts that belong to selected federal supply classes. Included in the national stock number is a four-digit federal supply class (see fig. 2), which may be shared by thousands of items. 
The national stock number also contains a nine-digit national item identification number that is unique for each item in the supply system. A country can obtain a classified or controlled spare part by using an incorrect, but unrestricted, supply class with an item's correct national item identification number. We found that because the Security Assistance Management Information System was not properly programmed, it erroneously validated 35 blanket order requisitions (of the 123 in our review) because the countries used incorrect supply classes that were not restricted. For example, in one case, the Air Force restricted countries from requisitioning parts belonging to the 1377 federal supply class (cartridge- and propellant-actuated devices and components) on blanket orders. The restriction included an outline sequencer (national stock number 1377010539320) used on ejection seats for various aircraft. The country ordered the sequencer using national stock number 1680010539320. Because supply class 1680 (miscellaneous aircraft accessories and components) was not restricted and the Security Assistance Management Information System did not verify that 1680 was the correct supply class for national item identification number 010539320, the system approved the requisition. Had the system validated the entire 13-digit national stock number, it would have found that the number was incorrect and would not have approved the requisition. Subsequently, the item manager recognized that 1680 was not the correct federal supply class and corrected the supply class to 1377 before the part was shipped. This example is summarized in figure 3. Air Force officials were unaware of this situation until our review identified the problem. In another case, involving the restricted 1377 federal supply class, a country ordered a restricted battery power supply for the F-16 aircraft using national stock number 6130013123511. 
Because supply class 6130 (nonrotating electrical converters) was not restricted and the Security Assistance Management Information System did not verify the entire 13-digit national stock number, the requisition was approved. The Air Force shipped the restricted battery power supply to the country. Neither the Air Force nor the center had written policies or procedures in place for recovering the items erroneously shipped. Without these types of policies and procedures, the Air Force cannot be assured that appropriate steps will be taken to recover the parts. Air Force Security Assistance Center officials agreed that the supply class restrictions alone were ineffective and could be bypassed by use of inaccurate supply class information. The Air Force has not validated modifications to the Security Assistance Management Information System that restrict parts that countries can requisition, and has not tested the system's logic for restricting requisitions since 1998 to ensure that it is working properly. As a result, modifications that were not properly made went undetected, and foreign countries were able to requisition and obtain controlled spare parts that the Air Force was trying to restrict. For example, the Air Force instructed programmers to modify a table of restrictions in the Security Assistance Management Information System to prevent certain countries from using blanket orders to requisition controlled bushings in the 5365 supply class. Although Air Force Security Assistance Center officials subsequently told us that the bushings had been improperly restricted, we found that, for 18 of the 123 requisitions we reviewed, countries had ordered and received the bushings, because the Security Assistance Management Information System was incorrectly programmed and did not identify the requisitions as requiring a review by command country managers. 
After we brought the transactions to the attention of Air Force Security Assistance Center officials, they investigated and found that programmers had entered the restrictions in the wrong area of the system. Because the Air Force had not validated that system modifications were properly made, the system had approved the requisitions. Although the Air Force later determined that the bushings should not have been restricted, this example nevertheless demonstrates the need to validate system changes. The Air Force does not periodically test the Security Assistance Management Information System to ensure that it accurately reviews requisitions for compliance with restrictions. For example, when the system is working correctly, it will identify restrictions relating to parts, such as ammunition or nuclear spare parts, and will disapprove requisitions from countries that are ineligible to order these parts. Air Force Security Assistance Command officials said that the system had not been tested since 1998 to ensure that it accurately reviews requisitions for compliance with restrictions. When we tested the system’s ability to restrict items based on their federal supply class, we found that the system did not always perform as intended. As discussed earlier, the system did not perform as intended because countries could requisition and obtain classified and controlled spare parts using an incorrect, but unrestricted, federal supply class with an item’s correct national item identification number. In the Federal Information System Controls Audit Manual, which lists internal control activities for information systems, one of the control activities listed involves the testing of new and revised software to ensure that it is working correctly. 
In addition, federal rules on the management of information resources require agencies to establish information system management oversight mechanisms that provide for periodic reviews to determine how mission requirements might have changed and whether the information system continues to fulfill ongoing and anticipated mission requirements. Further, DOD's ADP Internal Control Guideline at the time stated that periodic reviews of systems should be conducted to determine if they operate as intended. According to Air Force Security Assistance Center officials, there have been few changes to the table of restrictions in the system. However, they did agree that existing changes need to be validated and were working to accomplish this. Based on our observations, the Air Force's failure to validate modifications and test the system's logic is in part due to an unquestioning confidence in the Security Assistance Management Information System's ability to correctly restrict the requisitioning of classified and controlled spare parts. Command country managers did not always document reasons for overriding Security Assistance Management Information System or foreign military sales case manager recommendations not to ship classified spare parts. According to the Standards for Internal Control in the Federal Government, all transactions and other significant events need to be clearly documented. The standards state that such documentation should be properly managed and maintained and should be readily available for examination. Of the 123 requisitions we reviewed, the Security Assistance Management Information System identified 36 requisitions for command country manager review. For 19 of the requisitions, command country managers overrode the system recommendations and shipped classified and controlled spare parts without documenting the reasons for overriding the system. 
For example, the command country manager overrode the system and shipped four classified target-detecting devices, but the case file did not contain any documentation explaining why the command country manager did so, and managers we queried could not provide an explanation for the override. Similarly, a command country manager authorized the shipment of a controlled communications security part that the Security Assistance Management Information System and foreign military sales case manager recommended not be shipped. The case file contained no documentation explaining why the spare part was shipped. According to Air Force officials, there were no written policies or procedures for documenting decisions to override the system or foreign military sales case manager recommendations. The Air Force Security Assistance Center plans to issue guidance to command country managers to document system bypass authorizations. The Air Force has not established nor does it maintain effective internal controls over foreign military sales sold under blanket orders. Specifically, internal controls involving use of the federal supply class to restrict requisitions, the modification of tables restricting the access to classified and controlled spare parts in the Air Force’s system, testing of the system, and documentation of system overrides were inadequate. Without adequate internal controls, classified and controlled spare parts may be released to countries that are ineligible to receive them, thereby providing military technology to countries that might use it against U.S. national interests. Further, without written policies detailing the steps to be taken when the Air Force becomes aware of an erroneous shipment, the Air Force’s ability to recover erroneously shipped classified or controlled parts is lessened. 
To improve internal controls over the Air Force's foreign military sales program and to minimize countries' abilities to obtain classified or controlled spare parts under blanket orders for which they are not eligible, we are recommending that the Secretary of Defense instruct the Secretary of the Air Force to require the appropriate officials to take the following steps: Modify the Security Assistance Management Information System so that it validates country requisitions based on the requisitioned item's complete national stock number. Establish policies and procedures for recovering classified or controlled items that are erroneously shipped. Establish policies and procedures for validating modifications made to the Security Assistance Management Information System to ensure that the changes were properly made. Periodically test the Security Assistance Management Information System to ensure that the system's logic for restricting requisitions is working correctly. Establish a policy for command country managers to document the basis for their decisions to override Security Assistance Management Information System or foreign military sales case manager recommendations. In commenting on a draft of this report, DOD fully concurred with four of our recommendations and cited corrective actions that had been taken or were planned, and it partially concurred with another recommendation. Specifically, with regard to our recommendation to modify the Security Assistance Management Information System to validate country requisitions based on the requisitioned item's national stock number, the department said that it has had a change in place since January 2003 to validate requisitions based on an item's national stock number. We believe that the department's change is responsive to findings that we brought to the Air Force's attention in December 2002. 
However, because our audit work was completed when the Air Force brought this change to our attention, we did not have an opportunity to validate the change. The department also stated that the Air Force (1) will write a policy memorandum on procedures for recovering classified or controlled items that are erroneously shipped, (2) will issue a policy memorandum directing that all modifications to the system be validated in accordance with existing policies and procedures, and (3) has issued a policy memorandum specifying those staff who can input transactions for overriding restrictions and requiring that waiver approvals for using the bypasses be documented. With regard to our recommendation to periodically test the system to ensure that its logic for restricting requisitions is working correctly, DOD partially concurred. The department said that a program is being implemented to test new modifications placed in the system and that the testing of old modifications would be an ongoing effort. Testing the modifications placed in the system will ensure that they were made correctly. However, just testing the modifications will not ensure that the system is correctly applying its logic to the modifications in order to restrict requisitions for items that countries are not eligible to receive. For example, testing modifications may not identify logic problems, such as the one we identified involving the approval of requisitions based on an item’s federal supply class. Thus, we continue to believe that the system’s logic for restricting requisitions should be periodically tested to ensure that it is working correctly. Otherwise, classified and controlled spare parts that are requisitioned may continue to be erroneously released. DOD’s comments appear in appendix I. 
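GAO's point that testing modifications alone will not catch logic flaws can be illustrated with a hedged sketch; all names, data, and the simplified approval logic below are hypothetical stand-ins, not the actual system. A check confirming that a restriction entry was entered correctly can pass even while the approval logic itself still lets a bypass through.

```python
RESTRICTED_FSC = {"1377"}  # hypothetical restriction-table entry

def approve_requisition(nsn: str) -> bool:
    """Simplified stand-in for approval logic that screens only on the
    federal supply class submitted with the requisition (first 4 digits)."""
    return nsn[:4] not in RESTRICTED_FSC

def modification_test() -> bool:
    """Verifies only that the restriction entry was entered correctly."""
    return "1377" in RESTRICTED_FSC

def logic_test() -> bool:
    """End to end: a restricted item must be blocked even when ordered
    under an incorrect but unrestricted supply class."""
    blocked_correct = not approve_requisition("1377010539320")
    blocked_bypass = not approve_requisition("1680010539320")
    return blocked_correct and blocked_bypass

print(modification_test())  # True:  the table entry itself is correct
print(logic_test())         # False: the supply-class bypass still works
```

This is why periodic testing of the system's restriction logic, not just of individual table changes, is needed to catch flaws like the one described in this report.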
To determine the adequacy of the Department of the Air Force’s key internal control activities aimed at preventing countries from requisitioning and receiving classified and controlled spare parts that they are ineligible to receive, we held discussions with officials from the Under Secretary of Defense (Policy Support) International Security Program Directorate; Deputy Under Secretary of the Air Force (International Affairs); and the Air Force Materiel Command’s Security Assistance Center, Dayton, Ohio. We discussed the officials’ roles and responsibilities, the criteria and guidance they used in performing their duties, and the controls used to restrict countries from receiving parts that they are not eligible to requisition. At the Air Force Security Assistance Center and Air Logistics Centers at Warner Robins Air Force Base, Macon, Georgia, and Tinker Air Force Base, Oklahoma City, Oklahoma, we interviewed military and civilian officials to obtain an overview of the requisitioning and approval processes applicable to classified and controlled spare parts. To test the adequacy of the internal controls, we obtained records from the Air Force Security Assistance Center on all classified and controlled spare parts that were purchased under blanket orders and approved for shipment to foreign countries for the period October 1, 1997, through July 31, 2002. We limited our study to blanket orders because defined orders and Cooperative Logistics Supply Support Agreements specified the parts that countries were entitled to requisition by national stock number. In contrast, only Security Assistance Management Information System restrictions limited the parts that countries were entitled to order under blanket orders. The records covered 444 blanket orders that resulted in 72,057 requisitions for classified and controlled spare parts. 
Specifically, we took the following steps: We tested the Security Assistance Management Information System by applying the system’s restrictions that applied to classified and controlled spare parts that were shipped under blanket orders, and identified 525 requisitions that appeared to violate the restrictions. We obtained satisfactory explanations from the Air Force Security Assistance Command for all except 200 of the requisitions, which were shipped despite restrictions. We reviewed case files for 123 requisitions, including 87 requisitions for which the Security Assistance Management Information System had approved the shipment of classified and controlled spare parts without referring the requisitions to command country managers to determine if the requisitions should be approved. We followed up on these requisitions by consulting with command country managers. The case files that we reviewed included 36 requisitions that the Security Assistance Management Information System had referred to command country managers for review to determine if they had documented their decisions to override the system’s decisions. We followed up on these reviews through discussions with command country managers. We conducted our review from May 2002 through May 2003 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the Secretary of Defense; the Secretary of the Air Force; the Director, Office of Management and Budget; and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me on (202) 512-8365, if you or your staff have any questions concerning this report. 
Key contributors to this report were Lawson (Rick) Gist, Jr.; Jennifer Thomas; Arthur James, Jr.; Lou Modliszewski; Susan Woodward; John Lee; and Kristy Lehmann.
From 1990 through 2001, the Department of Defense delivered over $138 billion in services and defense articles--including classified and controlled parts--to foreign governments through its foreign military sales programs. Classified spare parts are restricted for national security reasons, while controlled parts contain technology that the military does not want to release. GAO was asked to review the Air Force's internal controls aimed at preventing countries from requisitioning and receiving classified or controlled spare parts that they are ineligible to receive. The Air Force's internal controls for its foreign military sales program using blanket orders are not adequate, placing classified and controlled spare parts at risk of being shipped to countries not authorized to receive them. The Air Force's system has erroneously approved foreign country requisitions for classified and controlled spare parts based on incorrect federal supply classes. The system approves items for shipment based in part on an item's federal supply class--not the item's entire national stock number, which is a combination of the supply class number and a part number unique to the item. GAO found that because the system was not properly programmed and countries used unrestricted supply class numbers, the system erroneously approved 35 of 123 selected requisitions reviewed. For example, one country ordered a controlled outline sequencer used on various aircraft by using a supply class that was unrestricted, but incorrect for the part it requisitioned. Because supply class 1680 was not restricted and the system did not verify that 1680 was the correct supply class for national item identification number 010539320, the system approved the requisition. Had the system validated the entire 13-digit national stock number, it would have found that the number was incorrect and would not have approved the requisition. 
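The validation gap in the example above can be sketched as follows. The catalog, restriction list, and function names are hypothetical; the stock numbers come from the example in this report.

```python
# A national stock number (NSN) is a 4-digit federal supply class (FSC)
# followed by a 9-digit national item identification number (NIIN).

# Hypothetical catalog mapping each NIIN to its authoritative FSC.
CATALOG_FSC_BY_NIIN = {"010539320": "1377"}  # the outline sequencer

RESTRICTED_FSC = {"1377"}  # hypothetical blanket-order restriction

def approve_by_fsc_only(nsn: str) -> bool:
    """Flawed check: trusts whatever FSC the requisition submits."""
    return nsn[:4] not in RESTRICTED_FSC

def approve_by_full_nsn(nsn: str) -> bool:
    """Corrected check: validates the full 13-digit NSN first."""
    fsc, niin = nsn[:4], nsn[4:]
    if CATALOG_FSC_BY_NIIN.get(niin) != fsc:
        return False  # mismatched or unknown NSN: do not approve
    return fsc not in RESTRICTED_FSC

# The restricted sequencer ordered under an incorrect, unrestricted FSC:
print(approve_by_fsc_only("1680010539320"))  # True:  erroneously approved
print(approve_by_full_nsn("1680010539320"))  # False: rejected
```

Validating the full stock number rejects the requisition outright, because the submitted supply class does not match the catalog's supply class for that item.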
In addition, the Air Force has no written policies or procedures in place for recovering items that have been shipped in error. The Air Force has not validated modifications to the Security Assistance Management Information System that restrict parts available to foreign countries and has not tested the system since 1998 to ensure that it is working properly. Because modifications were not validated, the Air Force did not detect improperly made modifications to the system, and foreign countries were able to requisition and obtain controlled spare parts that, at the time, the Air Force was trying to restrict. GAO identified 18 instances in which countries requisitioned and received a controlled part for which they were not eligible because programmers had entered the restrictions in the wrong area of the system. Although Air Force officials subsequently told us that the part was improperly restricted, this example nevertheless demonstrates the need to validate system changes. Air Force command country managers did not always document reasons for overriding the recommendations of the system or the foreign military sales case manager. For 19 of the 123 requisitions GAO reviewed, command country managers overrode the system recommendations and shipped classified and controlled spare parts without documenting the reasons for overriding the system. For example, a command country manager overrode the system and shipped four classified target-detecting devices without documenting the reasons for overriding the system.
Transit agencies provide transportation services in a variety of ways. For purposes of this report, we used the following descriptions of transportation modes:

Fixed-route bus service: rubber-tired passenger vehicles operate on fixed routes and schedules over roadways. Diesel, gasoline, battery, or alternative fuel engines power the vehicles. This category also includes bus rapid transit, commuter bus, and trolley bus.

ADA paratransit: vehicles operate in response to calls or requests from passengers. This service uses buses, vans, or taxis to provide complementary ADA paratransit service for individuals with disabilities who are unable to use a fixed-route system. These services are associated with or attributed to ADA compliance requirements.

Demand response (also referred to as dial-a-ride): vehicles operate in response to calls or requests from passengers. Demand response uses small buses, vans, or taxis to provide transportation service that is not on a fixed route or schedule. For example, transportation may be provided for individuals whose access may be limited, or whose disability or health condition prevents them from using the regular fixed-route bus service. For purposes of this report, we have defined these services as unrelated to ADA requirements.

Commuter rail: vehicles operate along electric- or diesel-propelled railways and provide train service for local, short-distance trips between a central city and adjacent suburbs.

Heavy rail: vehicles operate on electric railways with high-volume traffic capacity. This mode has separated rights-of-way, sophisticated signaling, high-platform loading, and high-speed, rapid-acceleration rail cars operating singly or in multi-car trains on fixed rails.

Light rail: vehicles operate on electric railways with light-volume traffic capacity. The mode may have either shared or exclusive rights-of-way, low or high platform loading, and single- or double-car trains.
The transit contracting industry in the United States is characterized by a few large providers that operate nationwide, some mid-size regional providers, and numerous small, local providers that primarily operate bus, demand response, and ADA paratransit service. Transit agencies can contract out various aspects of their operations, such as service operation, vehicles, maintenance, security, and administrative services. Contracting arrangements can range from the transit agency’s contracting out all aspects of its operations, as is the case for a delegated management contract, to contracting out only one component of operations, such as maintenance. The federal government has a limited role in overseeing transit contracting. FTA tracks transit agencies’ contracting practices through reports submitted by transit agencies to the National Transit Database. Additionally, FTA oversees transit contracting, along with other aspects of transit agencies’ operations, through procurement reviews and triennial reviews that focus on whether transit agencies have followed federal regulations and have appropriate systems in place for contracting, among other things. The Federal Railroad Administration oversees commuter rail operations but does not conduct any reviews of contracting practices. The Department of Labor is responsible for issuing what is commonly referred to as “Section 13(c)” certifications, which certify that fair and equitable labor protection arrangements are in place for employees who may be affected by certain grants of federal financial assistance. When existing transit service is contracted out, Section 13(c) protections may be triggered, including assurances of employment and priority of reemployment.
According to officials at the Department of Labor, after a search of their records and to the best of their knowledge, there has never been an instance where a transit agency has been unable to contract out public transit operations and other services because doing so would jeopardize Section 13(c) certification from the Department of Labor. Contracting is a prevalent means of providing transit services, with about 61 percent of the 463 transit agencies that responded to our survey reporting they contract out some aspect of their operations. By size of agency, 52 percent of small agencies, 69 percent of medium agencies, and 92 percent of large agencies had at least one service that they contracted out. According to our survey, among the agencies providing such services, paratransit, demand response, and commuter rail are more likely to be contracted out, and fixed-route bus, heavy rail, and light rail are most often operated by the transit agency. Among the approximately 61 percent of surveyed agencies that reported contracting, more agencies reported contracting for ADA paratransit than for any other service. Results from our survey show that of 359 respondents that provide ADA paratransit services, 204, or 57 percent, contract out this service. Seven of the 10 transit agencies we interviewed also contract out ADA paratransit services. (See fig. 1.) Interviewees report that contracting ADA paratransit occurs for various reasons, including the following:

ADA paratransit requires specialized training and equipment that can be difficult to provide because agencies may lack the staff, expertise, or resources needed to train workers, according to a transit agency official we interviewed.

Contracting for this service can be more cost effective than providing the service in-house. According to an industry group we spoke with, ADA paratransit operations are very expensive, and contracting this service is viewed by agencies as a way to potentially save money.
Contracting ADA paratransit allows agencies to remove themselves from the day-to-day operations and reduces the risk and liability associated with operational responsibility, according to another transit agency official we interviewed.

We found that the extent to which surveyed transit agencies contract varies by type of service, but among transit agencies that contract out, operations are most often contracted across all modes, followed by maintenance services. (See table 1.) As shown in table 1, in each of the modes, fewer than half of the agencies that contract out include vehicles in their contracts. In our interviews with transit agencies and contractors, officials told us that transit agencies generally provide their own vehicles for several reasons:

Owning vehicles gives the agency more flexibility to terminate a contract if needed, because it can be very difficult for an agency to quickly find another contractor with vehicles to provide continuous service.

Purchasing and owning the vehicles used in the transit service can attract bidders who would otherwise be hesitant to buy expensive vehicles without the assurance that they would be used beyond the initial length of the contract.

Owning the vehicles gives the agency more control in making decisions about vehicle replacement or major repairs, such as replacing engines or transmissions. This can lower costs because a contractor responsible for such repairs may, to minimize its own risk, budget for these costs in the contract price without knowing for certain whether they will be needed.

According to our survey, the factors that transit agencies considered when deciding to contract a particular mode of service vary based on the mode and the needs of individual transit agencies. (See table 2.) For fixed-route bus, demand response, and paratransit service, the factors that were considered most often were reducing costs and improving efficiency.
For commuter rail, the factors that were considered most often were starting new service and improving efficiency. For heavy and light rail, the factors that were considered most often were starting new service and being directed to contract by the Board of Directors. Our literature review indicated these factors vary because needs and costs vary by mode, as well as by the individual needs of transit agencies. Thus, for one transit agency, the cost of procuring vehicles and building the facilities and expertise needed to operate the service may be a paramount concern, whereas it may be less of a concern for others. For example, commuter, heavy, and light rail services have high start-up costs because of the infrastructure and vehicles needed to operate the service, and, as supported by our survey results, starting new service is a primary factor in the decision to contract out these services. For fixed-route bus, demand response, and paratransit services, the start-up costs are typically much lower. As a result, transit agencies may place more importance on reducing costs or providing more efficient service when making the decision to contract out for service.

Reducing costs. Although the factors that affect the decision to contract vary across modes, reducing costs is consistently taken into account, according to our survey, interviews, and literature review. The previous table shows that among our survey respondents that contract out, reducing cost was the most often cited factor. This finding was supported by our interviews with transit agency officials and the literature, which indicated that wage rates are lower for contracted drivers and operators, in part because:

Contractors can “reset” wage rates to the market rate by hiring new operators at entry-level wage rates, according to some contractors we interviewed.

Contractors may not always provide pensions and other benefits for contract workers.
Contractors may have lower health insurance rates for their employees because of the large number of employees under their coverage.

As we will describe later in this report, unions have concerns regarding the lower wages and benefits for contract workers. Some agencies, such as the Washington Metropolitan Area Transit Authority (Washington, D.C.) and the Metropolitan Rail Authority (Metra) (Chicago, Illinois), have found it more cost effective to contract out a portion of their services. Officials from the Washington Metropolitan Area Transit Authority told us that it is more cost effective for contractors to provide ADA paratransit service because of their reduced labor costs. Officials at Metra said they would not achieve cost savings by directly operating two of their commuter rail services currently contracted out to the Burlington Northern/Santa Fe and Union Pacific Railroads. Officials stated that, first, the freight railroads own the track, and Metra could not negotiate new track agreements with the railroads that would produce any more cost savings than its present agreements. Second, Metra gains efficiencies from sharing certain overhead, such as management personnel and facilities; creating separate standalone facilities and staffing would be more expensive. Third, if Metra directly operated the service, then both Burlington Northern/Santa Fe and Union Pacific Railroad employees would be brought under Metra’s collective bargaining agreements, which pay a higher wage rate than the freight railroads.

Starting new service. According to survey respondents as well as transit agency officials and contractors that we interviewed, transit agencies may contract out in order to avoid high start-up costs, including the costs of starting new services, procuring new vehicles, hiring staff, and obtaining facilities.
For example, officials from the Nashville Regional Transportation Authority (Nashville, Tennessee) told us that they contract out their commuter rail service because they lacked facilities to house or maintain their vehicles. Also, contracting out services can enable agencies to offer services that would otherwise not be cost effective for them to provide, such as service located away from their main service area. For example, while New Jersey Transit directly operates some of its services, including fixed-route bus service, officials said that when they create new services or expand other services it makes more sense for them to contract out, particularly in areas that lack a service garage or where there would be long travel times to where drivers store their vehicles at the end of the day. In addition, from our interviews, we found that some transit agencies contract out when starting new service because they do not have the capability to perform transit services in-house. For example:

New Orleans Regional Transit Authority (New Orleans, Louisiana) entered into a 10-year delegated management contract with a contractor that covers all planning, operations, and maintenance to quickly restore and rebuild the transit services and infrastructure that Hurricane Katrina destroyed.

Yuma County Intergovernmental Public Transportation Authority (Yuma, Arizona) contracts out all operations and maintenance for both its fixed-route bus and ADA paratransit services because it has only been in existence a short time and has not developed the capability to perform these services in-house.

Improved efficiency and flexibility. According to our survey, improved efficiency and flexibility are two other primary considerations for contracting out service.
Contractors and transit agency officials that we interviewed said that in some cases, contractors can operate more efficiently by having operators split their time between transit during the peak hours and other services, such as charter services—which are not typically provided by transit agencies—during other times of the day. Also, contractors may be able to provide service at a lower cost because their workforce is more flexible, with a greater number of employees working in part-time positions, resulting in decreased wage and benefit costs. According to one contractor we interviewed, offering part-time employment or flexible schedules may also be preferable for some employees.

Other factors. Legislative requirements that mandate contracting or limit the amount of contracting a transit agency may use can also influence certain agencies’ decisions to contract out service. However, state laws were not a leading factor in contracting decision-making, according to our survey respondents and seven transit agency officials that we interviewed. For example, the state of Colorado limits the amount of contracting used by the Denver Regional Transit District (Denver, Colorado) to 58 percent. According to agency officials whom we interviewed, the cap on contracting has not had much impact on the agency’s contracted services, which are currently about 56 percent of its bus and ADA paratransit operations. At the federal level, as described previously, when existing transit service is contracted out, Section 13(c) protections may be triggered, including assurances of employment and priority of reemployment. At all nine of the transit agencies we interviewed that use contracting, transit agency officials said that provisions of Section 13(c) have not been a deterrent to contracting; however, some transit agencies that responded to our survey reported that challenges presented by Section 13(c) are a reason for not contracting out service.
Agencies that do not contract out any transit services, or that contract out some but not all aspects of their operations, also do so for reasons that vary by mode. As shown in table 3, for all modes except commuter rail, the top three reasons not to contract are that the agency desired to maintain control over operations, found no reason to change from the transit agency providing service, or found contracting was not cost effective. For commuter rail services, cost effectiveness was not among the primary reasons transit agencies reported for not contracting out service. Of the transit agencies that we interviewed, one transit agency does not contract out for any service and five transit agencies contract out only some modes, and they cited reasons similar to those of our survey respondents for their decisions not to contract out service. Officials at the one transit agency we interviewed that does not contract out any service—Western Maine Transportation Service (Auburn, Maine)—said that they had not pursued contracting. This was due in part to difficulty in finding a contractor willing to implement a costly drug and alcohol program that met FTA standards, and also because contracting out maintenance services would have been more expensive than operating them directly, according to a comparative analysis the transit agency performed. In addition, one transit agency contracted out service in the past and decided to bring the service back in-house. Dallas Area Rapid Transit (Dallas, Texas) used to use a contractor for its fixed-route bus service and later decided to provide the service using transit agency staff because, according to officials, in addition to local economic conditions and declining sales tax revenues, the contractor was not meeting service requirements and key performance indicators for maintenance of transit agency-owned vehicles.
More recently, beginning in fiscal year 2012, the agency decided to keep operation of certain bus routes in-house after an analysis determined that the agency’s costs to operate the service were lower than the privately contracted options. According to our survey responses, transit agencies use several methods to select contractors. (See fig. 2.) The most common method (used by 200 transit agencies that responded to our question) is competition through a request for proposals, wherein the transit agency solicits offers for the service to be provided. Fewer than 50 transit agencies that responded to our survey that use contracting use each of the following methods: orders under pre-existing contracts, sole source or preferred vendors, exercising a contract option, and selection from a list of preferred vendors. Officials we interviewed at eight of the nine transit agencies that use contracting said that they had at least three offers in response to their most recent solicitations for each mode operated, except when obtaining offers for the operation of their commuter rail services. Officials noted that it might not be cost effective for other contractors to make an offer on some commuter rail contracts because of specific circumstances, such as one contractor owning the tracks. In selecting a contractor, transit agencies may be required to consider potential conflicts of interest. Nearly all (99 percent) of the agencies that we surveyed that use contracting have an ethics policy or standards in place that prohibit conflicts of interest. Furthermore, nearly all (99 percent) consider federal law, regulations, and guidance prohibiting conflicts of interest for contractor employees and businesses when contracting out. Once the transit agency makes the decision to contract and selects a contractor, the two parties enter into a contract. Among other things, the contract specifies compensation, which can be structured in several ways.
It may specify fixed-price compensation, which is based on a set price. For example, the payment may be a fixed amount per month. Compensation can also be hourly, so that the transit agency pays based on the number of hours that the service is provided, which can be the number of hours during which the transit service is collecting fares or the time from when the vehicles leave the facility until they return. Finally, the contractor can be compensated on a per-trip or per-mile basis, wherein the transit agency pays based on the number of trips provided or miles travelled. According to our survey, transit agencies structure compensation in their contracts in various ways, including fixed price, price per revenue service hour or mile, price per vehicle mile or hour, and number of passenger trips provided. Contracts also specify the terms of service. For the transit agencies that we interviewed, the most common contract term—used by six of the nine transit agencies that use contracting—was a 5-year initial contract period, sometimes including an option to extend into additional years. With respect to public access to contracts, federal regulations require transit agencies that enter into contracts using FTA grants to use their own procurement procedures that reflect applicable state and local laws and regulations. Of the transit agencies that responded to our survey that contract out, 92 percent allow public access to all of their contract documents, 6 percent allow public access to some of their contract documents, and 1 percent do not allow public access to any of their documents. Transit agencies reported undertaking a variety of activities to assess the quality of contracted services. Among our survey respondents that contract out services, the most commonly used methods are periodic reports or meetings, on-site inspections, the use of performance metrics, and real-time monitoring. (See fig. 3.)
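The three broad compensation structures just described can be sketched as simple functions. All rates and quantities below are hypothetical, chosen only for illustration; none come from the survey.

```python
# Minimal sketch of the three compensation structures described above.
# All rates and quantities are hypothetical, for illustration only.

def fixed_price(monthly_fee: float) -> float:
    """Set price regardless of hours or trips delivered."""
    return monthly_fee

def hourly(rate: float, service_hours: float) -> float:
    """Payment per hour of service; a contract may count revenue hours
    (fares being collected) or pull-out-to-pull-in hours."""
    return rate * service_hours

def per_trip(rate: float, trips: int) -> float:
    """Payment per passenger trip provided (per-mile works the same way)."""
    return rate * trips

# One month of service under each structure:
print(fixed_price(150_000.0))  # 150000.0
print(hourly(75.0, 2_000))     # 150000.0
print(per_trip(30.0, 5_000))   # 150000.0
```

The structures shift risk differently: under a fixed price the contractor bears the risk of higher-than-expected service levels, while per-hour and per-trip payment shifts that risk back to the agency.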
About 84 percent of the surveyed transit agencies that contract out services reported having a specific unit or department to conduct oversight. Transit agency officials whom we interviewed described the various methods, arrangements, and metrics they use to oversee contractor performance. For example:

Officials at all nine of the agencies that use contractors told us that they oversee contractors’ performance through activities such as routinely communicating with their contractors, either through periodic meetings or on an as-needed basis; inspecting contractors’ facilities or vehicles; and/or using real-time monitoring devices installed on vehicles.

Seven of the nine agencies use in-house staff to monitor the contractor’s performance, while two use third-party contractors to perform this function. For example, in March 2013, the Washington Metropolitan Area Transit Authority signed a contract with a company to oversee the performance of the three transit contractors that operate its ADA paratransit services. Officials told us that using contractors to oversee the day-to-day performance of the operating contractors frees up in-house staff for more high-level oversight and management.

Seven of the nine agencies that contract out for some or all of their transit services use metrics to establish performance incentives and/or penalties in contracts. For example, the Denver Regional Transit District uses a set of performance metrics, such as on-time performance and the number of complaints received from customers, to measure the contractor’s performance for its fixed-route bus service. The performance metrics are used to determine performance incentives or penalties. Likewise, officials from the New Orleans Regional Transit Authority said that they include performance incentives and penalties in their delegated management contract.
Among the contract’s provisions is a requirement for the contractor to reduce costs by 25 percent (adjusted for inflation) within 5 years in order to receive an automatic extension. The contractor said that it has already met this goal by focusing on safety throughout the organization, a focus that has reduced claim costs and related expenditures. Also, the contractor has changed the maintenance procedures and fleet operations and better managed the inventory of parts, which have also enhanced cost efficiencies. According to the contractor, during this 5-year period it has experienced increased service and ridership levels while meeting its cost-reduction goals. Conversely, officials from Yuma County Intergovernmental Public Transportation Authority do not include performance incentives in their contracts because they expect the contractor to always perform at a high level of service; however, they do have penalties for certain violations such as accidents. Transit agency officials whom we interviewed and the literature we reviewed cite potential benefits to contracting for transit agencies, which may vary based on the needs and circumstances of individual transit agencies. Contracting can be used to start or expand service. According to our literature review, transit agencies view contracting as advantageous when new services need to be established quickly, based on the assumption that private firms can mobilize faster than a public agency. Also, we have previously mentioned that starting new service is a primary consideration for contracting out service, according to our survey respondents (see table 2). Transit agencies contract out for new services in order to avoid the high start-up costs, including the cost of new services, procuring new vehicles, hiring staff, and obtaining facilities, according to transit agency officials that we interviewed. 
In addition, a 2005 study found that transit agencies use contracting to try out new service, as one agency did to provide new lines in outlying areas, because managers suspected that those new lines would have very low ridership and would not be cost effective. That study indicated that, according to the manager, contracting was more efficient because the contractor used smaller vehicles and had lower labor costs than the transit agency.

Contracting can be used to maintain service levels. According to our survey respondents’ write-in answers on how transit service contracting has met their expectations, three transit agencies reported that contracting for certain services has allowed the transit agency to maintain service that would have been discontinued due to budget reductions. For example, one transit agency that contracts out its paratransit service, with a ridership of about 52,000 per month, reported that this service would have been cut due to cost constraints were it not for contracting.

Provides access to contractor’s expertise and resources. Officials at 2 of the 10 transit agencies we interviewed said that they contract to gain the expertise and the resources that a contractor can bring to their transit operation. For example, according to New Orleans Regional Transit Authority officials, the agency used a contractor because the officials felt that the contractor had the expertise and experience to lead the agency back to full recovery after Hurricane Katrina.

Shifts risk of providing service to contractor. According to a 2007 study, when contracting for operation of services requiring buses, the transit industry has moved toward providing the vehicles and even the maintenance facility to a contractor. As a result, the contractor assumes greater financial risk in terms of providing the insurance that is required for the vehicles. Transit agency officials we spoke with cited this transfer of risk as a benefit.
For example, an official at Yuma County Intergovernmental Public Transportation Authority told us that contracting reduced the transit agency’s insurance costs by 45 percent or more. Additionally, according to this official, insurance costs are typically higher for new transit authorities than for contractors, because new authorities often do not have the same degree of experience operating transit services. In addition to these reported benefits and their associated cost savings, transit agencies and the literature cite the following challenges to contracting out transit services, which may, in some cases, outweigh the benefits or cost savings:

Diminishes an agency’s direct control over operations. Based on studies of contracting, some transit agencies have not accepted contracting because it does not provide an economic benefit equal to the risks associated with delegating service control to a contractor. Specifically, according to one 2008 study, absent a compelling economic return that includes discounted future savings, in-house delivery of transit services is preferable because it provides managers with a direct line of authority to adjust services to meet a community’s demand for services or deal with unforeseen service events.

Requires a complex request for proposal and contract-monitoring process. Officials at two transit agencies told us that the contracting process is complex, long, and arduous. For example, officials from the Denver Regional Transportation District told us that they start the contract solicitation process approximately one year prior to the expiration of the existing contract. The process includes writing the scope of work, updating requirements (which includes getting input from various departments within the agency), issuing the request for proposal, evaluating the responses, negotiating with the selected contractor, and monitoring any start-up activities, thus costing the agency time and money.
In addition, studies suggest that the costs of monitoring the contractor’s performance may, in some cases, outweigh the benefits. In particular, a recent study found that transit agencies may need to keep in-house staff to evaluate and monitor contracts, which can reduce efficiency gains and related cost savings. Another study suggests that the transaction costs that transit agencies incur when they draw up requests for proposals, evaluate offers, negotiate contracts, and monitor contracts with private providers could offset or even exceed cost savings from contracting transit service operation and management functions. Requires transit agency to address labor issues. According to one study, transit agencies that are unionized must consider how organized labor would react to a contracting decision. This study suggests that transit agencies face opposition to contracting from unions representing their employees. While a union may concede to contracting out new services, it tends to show much stronger opposition to contracting out existing services, which threatens union members’ current jobs. However, according to the study, while most agencies with some in-house service are sensitive to union resistance to contracting, they may also face financial distress and must find ways to increase cost efficiency. Under such conditions, agencies need to maintain a good relationship and open communication with the union. This situation enables both parties to work together to simultaneously increase the cost efficiency of the in-house service while avoiding significant job losses due to contracting. Even when transit managers are aware of other strategies for increasing cost efficiency, they need cooperation and concessions from the union to implement them. Contractors we interviewed cited the following benefits to contracting. Provides cost savings. Contractors told us that they are able to increase efficiencies while reducing costs to transit agencies. 
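The trade-off the studies above describe, between gross savings from a lower contract price and the transaction and monitoring costs of contracting, reduces to simple arithmetic. A sketch with hypothetical figures (none from the sources cited):

```python
# Hypothetical figures only: net effect of contracting once transaction
# costs (writing the RFP, evaluating offers, negotiating) and ongoing
# contract-monitoring costs are charged against the gross savings.

def net_savings(in_house_cost: float, contract_price: float,
                procurement_cost: float, monitoring_cost: float) -> float:
    gross = in_house_cost - contract_price
    return gross - (procurement_cost + monitoring_cost)

# Gross savings of $1.5M outweigh $1.0M of transaction/monitoring costs:
print(net_savings(10_000_000, 8_500_000, 400_000, 600_000))   # 500000
# A thinner margin is wiped out entirely:
print(net_savings(10_000_000, 9_600_000, 400_000, 600_000))   # -600000
```

As the second case shows, the same transaction and monitoring costs can turn an apparent saving into a net loss when the contract price is close to the in-house cost.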
For example, one contractor told us that its contracts with transit agencies include proprietary routing and scheduling technology that is not available to transit agencies outside of a contract. Another contractor told us that its contracts provide access to specialized routing technology, which transit agencies would otherwise have to spend a great deal of money to purchase. Also, according to another contractor, providing the insurance on the vehicles that it uses can be a significant savings, depending on the size of the transit agency. As previously mentioned in this report, contracting can also be a way for transit agencies to potentially lower their health insurance costs. For example, contractors with large numbers of employees may have lower health insurance rates and be able to offer lower rates to their employees. As a result of these and other efficiencies, one contractor told us that if it were to exclude the cost of capital assets (vehicles and facilities), the operating costs for the contractor would be significantly lower, usually in the range of 15 percent to 35 percent. However, expected cost savings do not always come to fruition. For example, based on anecdotal evidence from our literature review, one transit agency brought transit services back in-house after it found that its arrangement with its contractor was too expensive. The contractor had a 5-year contract that was terminated in less than 3 years.

Provides access to contractor’s expertise and resources. Contractors also cited the expertise and resources that they can bring to a transit agency as a benefit to contracting. One contractor we interviewed told us that it brings specific knowledge and expertise in areas such as training, as well as resources such as customer call centers, thus allowing the transit agency to focus on its own strengths.
Additionally, another contractor told us that contracting allows a private company to provide resources that the transit agency does not have. For example, transit agencies might receive access to expertise for technical issues and labor negotiations, as well as discounted purchasing rates for fuel, vehicle parts, and other equipment, because of the large amount purchased by the contractor to cover several transit agencies' operations. Increases labor flexibility. Contractors cited labor flexibility as a benefit to contracting. For example, according to one contractor, contracting offers the transit agency more labor flexibility, in that, if additional staff is needed to perform a particular service, the contractor generally has greater flexibility to quickly bring in the needed staff, because it has the resources of the entire company, whereas the transit agency may be limited in that regard. Another contractor said that its company's labor agreements are not very restrictive in terms of how the workforce is deployed or scheduled. For example, the contractor can cross-train staff, and if a dispatcher is needed to drive or a driver is needed to perform dispatching functions, its labor agreements generally allow those things to happen, which increases efficiencies. The contractors we interviewed cited few challenges to contracting. Three of the six contractors we spoke to said that the capital investment that is required for a contract might prevent them from bidding. Also, according to one contractor, the biggest barrier to contracting is transit agency funding: transit agencies are sometimes forced into a contracting arrangement based on price rather than value because of funding constraints. Officials at national and local unions we spoke with said that whereas contracting may provide some short-term cost savings to transit agencies, the savings are almost entirely from lower wages and benefits paid by the private companies to their employees. 
This statement is consistent with what we heard from some transit agencies regarding the source of cost savings associated with contracting. Recent studies we reviewed also suggest that this is the case. For example, one study of 12 transit agencies found that cost savings accrue primarily as a result of private transit labor consistently earning lower wages and fewer benefits compared to similar public sector employees. Moreover, one local union official we spoke with told us that the wages for the transit-agency bus operators it represents are generally higher than the contractors' bus operators' wages. A new bus driver starts at about $12-13 per hour for the contractors and $15 per hour for that transit agency, according to that union official. Additionally, this union official told us that while wages for contractor employees and transit agency employees tend to be at the same level at the top of the pay scale, it takes contractor employees longer to reach the top levels. However, according to national union officials, commuter rail employees covered under the Railway Labor Act receive comparable wages and benefits whether employed by transit agencies or contractors. The national and local union officials we interviewed stated that contracting might lead to a decreased level of safety, poor service quality, and hidden costs. Decreased level of safety. Local union officials we spoke with said that contracting decreases the level of safety, possibly because, in their view, contracted employees receive less training than transit agency employees. For example, one local union official told us that the local transit agency provides 8 weeks of classroom and on-the-road training, whereas the contractors provide 5 weeks of training. Another local union official told us that the contractor reduced its training course to 2 weeks from 3 (the amount provided by the transit agency). 
In addition, one union official told us that privatized buses are not as safe as agency buses. For example, the official said that he had seen private buses on the road with side view mirrors held in place with duct tape. Two studies published since 2002 discussed the quality of contracted services; one noted that contracted service had a 70 percent higher rate of vehicle collisions, and the other reported that service quality may be lower among low-cost contracted operators. However, as previously mentioned in this report, officials at all nine agencies we interviewed that use contractors told us that they oversee contractors' performance through various activities, including inspecting contractors' facilities or vehicles, and none of the officials that we interviewed raised concerns about safety. Additionally, officials at one transit agency told us that they inspect the contractors' buses on a daily basis to determine their condition or whether preventive maintenance or repairs have been performed. The agency also reviews performance data related to customer complaints, on-time performance, accidents, and maintenance, which it compiles in a monthly performance report. Poor service quality. Union officials we spoke with generally agreed that because contractors are profit driven, they may not have incentives to provide the same level and quality of service as the transit agencies. According to one union official, the contractor will only provide the level of effort mandated by the contract, whereas the agency will go above and beyond to ensure high-quality service. Recent literature has discussed the quality of contracted services, with one study finding that contracted service had 36 percent more vehicle breakdowns. Anecdotal evidence from the literature shows that some contractors are having difficulties in providing quality service. 
For example, a contractor took over paratransit operations in a Florida county in the summer of 2012, and by May 2013, due to performance failures such as vehicle breakdowns, accidents, maintenance requirements, and other problems, the county had fined the contractor $2.2 million. The county has since directed the transit agency to find a second service provider by November 2013 to help provide paratransit services. Also, as discussed earlier, one transit agency that we interviewed ceased using a contractor for its fixed-route bus service because of concerns about service quality, among other issues. However, one union representative we spoke with thought that the quality of service may actually be better under a contractor, because contractors are penalized for not meeting performance measures, such as on-time performance, as discussed earlier in this report. Hidden costs. Union officials cited hidden costs incurred by transit agencies related to activities such as proposal evaluation and contract monitoring as a disadvantage to contracting. As previously mentioned in this report, transaction costs that public agencies incur when they draw up requests for proposals, evaluate offers, negotiate contracts, and monitor contracts with private providers could offset or even exceed cost savings from contracting transit service operation and management functions. Moreover, one local union official we spoke with told us that privatization adds a level of management, which can create inefficiencies and duplication of effort. For example, he told us that one transit agency that he represents has two sets of street supervisors—one for the contractor and another for the transit agency—each doing the same work. Also, if a private contractor fails to provide service on a route, the transit agency is ultimately responsible and must find other means to operate the route. 
According to our interviews with five of the six citizens' advisory groups that are affiliated with transit agencies, the quality of service was generally viewed as being comparable whether provided by the transit agency or a contractor. The Denver Regional Transit District conducted a customer satisfaction survey and found no measurable difference in customer satisfaction between in-house and contracted services. In addition, the Dallas Area Rapid Transit citizens' advisory group said that the level of service has been good with contracting. Lastly, the advisory board for the Metropolitan Rail Authority told us that the public generally does not know whether a contractor or the transit agency operates services. Contracting is not a one-size-fits-all approach for providing transit services. For some transit agencies, contracting may be the most cost-effective way to provide service, because transit agencies can benefit from access to certain technologies or reduced labor, fuel, and insurance costs. For other transit agencies, contracting may be impractical because of additional costs incurred from the bidding process and contractor oversight. Given our challenging economy, it is important that transit agencies are able to make decisions that allow them to use federal funds in the most efficient manner while also considering factors such as providing high-quality service, regardless of whether these services are provided by transit agency employees or contractors. We provided a copy of this report to the Department of Transportation and the Department of Labor for review. The agencies had no comment on the report. We are sending copies of this report to interested congressional committees, the Secretary of the Department of Transportation, and the Secretary of the Department of Labor. In addition, this report will be available at no charge on GAO's web site at http://www.gao.gov. 
If you or your staff have any questions or would like to discuss this work, please contact me at (202) 512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report are listed in appendix II. To comply with the Moving Ahead for Progress in the 21st Century Act (MAP-21) mandate, we addressed the following questions: (1) To what extent do transit agencies contract out public transit operations and services, and what are their reasons for doing so? (2) What methods do transit agencies use to select and oversee contracted services? (3) What are the potential benefits, challenges, and disadvantages of contracting out public transit operations and other services? To address our questions, we conducted a web-based survey of all 637 transit agencies that reported to the Federal Transit Administration's (FTA) National Transit Database in 2011 and operate fixed-route bus; demand response; Americans with Disabilities Act (ADA) paratransit; and heavy, light, or commuter rail services and asked about their contracting practices in 2011. We excluded transit agencies that received a reporting waiver. The survey was conducted from March 4, 2013, to April 23, 2013. To prepare the questionnaire, we pretested potential questions with transit agencies of different sizes that operate all of the modes to ensure that (1) the questions and possible responses were clear and thorough, (2) terminology was used correctly, (3) questions did not place an undue burden on the respondents, (4) the information was feasible to obtain, and (5) the questionnaire was comprehensive and unbiased. On the basis of feedback from the four pretests we conducted, we made changes to the content and format of some survey questions. The results of our survey can be found at GAO-13-824SP. 
To identify transit agencies to survey, we conducted interviews with the appropriate FTA officials responsible for the National Transit Database to learn about information collected from transit agencies regarding transit contracting and obtain contact information. We contacted all of the transit agencies in advance, by e-mail, to ensure that we had identified the correct respondents and to request their completion of the questionnaire. After the survey had been available for 1 week, and again after 2 and 4 weeks, we used e-mail and telephone calls to contact transit agencies that had not completed their questionnaires. Using these procedures, we received responses from 463 transit agencies for a response rate of 73 percent. The results of our survey are not generalizable to all transit agencies. Estimates and responses to survey questions in this report refer only to the views of the respondents. The survey was a census survey, and we did not try to extrapolate the findings to the agencies that chose not to respond. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or analyzed can introduce unwanted variability into the survey results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. For instance, a survey specialist designed the questionnaire in collaboration with GAO staff who have subject-matter expertise. Further, the draft questionnaire was pretested with four transit agencies to ensure that the questions were relevant, clearly stated, and easy to comprehend. When the data were analyzed, a second, independent analyst checked all computer programs. 
Finally, we analyzed nonresponding transit agencies for evidence of bias. We found that transit agencies that provide heavy rail were less likely to respond to our survey than other transit agencies. To obtain in-depth information and contracting experiences from local jurisdictions, we interviewed transit agencies at ten sites across the country. (See table 4.) At each location, we attempted to interview private transit contractors, citizens’ advisory groups, and a local union. We judgmentally selected these locations based on geographic location, population served, transit modes, agency sizes, and contracting practices. The interviews from these locations are not generalizable to all transit agencies. We also interviewed the American Public Transportation Association, Community Transportation Association of America, and national labor unions representing operators and maintenance workers including the American Federation of Labor and Congress of Industrial Organizations, the Amalgamated Transit Union and the Transport Workers Union, International Brotherhood of Teamsters, and representatives of the following unions for commuter rail operators and maintenance workers— International Brotherhood of Electrical Workers, Brotherhood of Railroad Signalmen, National Conference of Firemen & Oilers, Brotherhood of Maintenance of Way Employees Division, Sheet Metal Air, Rail and Transportation - Sheet Metal Workers’ International Association, Transport Workers Union, Transportation Trades Department, International Association of Machinists, Sheet Metal Air, Rail and Transportation-United Transportation Union, and the Brotherhood of Locomotive Engineers and Trainmen. We also interviewed Federal Railroad Administration officials regarding contracting of commuter services and Department of Labor officials to understand their role when transit agencies decide to contract out services. 
We also reviewed and synthesized information from our body of work and relevant literature on contracting out transit services in the United States. We reviewed citations identified through a search of databases containing peer-reviewed articles, government reports, and “gray literature,” including Transport Research International Documentation, Social SciSearch, and PROQUEST. Publications were limited to the years after 2001. After an initial review of citations, 37 articles were selected for further review. To collect information on the articles, we developed a data collection instrument to gather information on the articles’ scope and purpose, methods, findings and their limitations, and additional areas for follow-up, including a review of the bibliography to determine the completeness of our literature search. To apply this data collection instrument, one analyst reviewed each article and recorded information in the data collection instrument. A second analyst then reviewed each completed data collection instrument to verify the accuracy of the information recorded. We summarized the findings and limitations of the articles based on the completed data collection instruments, as well as areas for additional research identified in the articles. We conducted this performance audit from October 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
In addition to the contact named above, Teresa Spisak (Assistant Director), Stephanie Purcell, Carl Barden, Dwayne Curry, Leia Dickerson, Lorraine Ettaro, Kathy Gilhooly, Cathy Hurley, Stu Kaufman, Alex Lawrence, and Amy Rosewarne made key contributions to this report.
Some transit agencies have found that they can save money by contracting out some or all of their services to private providers, while others have found it more beneficial to use their own staff to provide services. The Moving Ahead for Progress in the 21st Century Act mandated that GAO review issues related to transit contracting. In this report, GAO identified: (1) the extent to which public transit agencies contract out operations and the reasons why agencies decide to do so, (2) the methods used to select and oversee contracted services, and (3) the potential benefits, challenges, and disadvantages of contracting out public transit operations and other services. GAO conducted a web-based survey of 637 transit agencies that submit reports to the Department of Transportation (DOT) and obtained 463 responses for a 73 percent response rate. The survey and results can be found at GAO-13-824SP. In addition, GAO interviewed federal officials, representatives from industry organizations, and national union officials. GAO also interviewed officials from 10 transit agencies, chosen based on a variety of characteristics, including geographic diversity, population served, use of contracting, and modes operated. At each transit location, GAO interviewed private transit providers, citizens' advisory groups, and local unions. The results of the survey and interviews are not generalizable to all transit agencies. GAO also reviewed relevant studies and literature on transit contracting. GAO is not making recommendations in this report. DOT and the Department of Labor reviewed a draft of this report and had no comments. Contracting is a prevalent means of providing transit services. About 61 percent of the 463 transit agencies responding to GAO's survey reported they contract out some or all operations and services, while the rest reported that they do not contract out at all. 
According to GAO's survey, paratransit (services for the disabled), demand response (also known as dial-a-ride), and commuter rail service are most often contracted out, and fixed-route bus, heavy rail, and light rail service are most often operated by the transit agency. Operations are most frequently contracted out, followed by maintenance services. Transit agencies most consistently cite reducing costs as a factor influencing their decision to contract. Contracting can reduce costs because contractors' workforces are more flexible, with more employees working in part-time positions, and because contractors have lower insurance costs, among other things. Transit agencies also frequently cited starting new service, improving efficiency, and allowing for more flexible service as reasons for contracting. State laws are generally not a reason for contracting, according to GAO's survey. Transit agencies that do not contract most often cited one of these three reasons: a desire to maintain control over operations, no reason to change from having the transit agency provide service, or a determination that contracting was not cost-effective. Transit agencies GAO surveyed use various methods to select contractors and oversee contractor performance. To select contractors, most agencies used competition with a request for proposals. For oversight, transit agencies most commonly used periodic reports or meetings, on-site inspections, performance metrics, and real-time monitoring, according to GAO's survey and interviews. About 84 percent of surveyed transit agencies that contract out services reported having a specific oversight unit. Of the nine transit agencies GAO interviewed that use contracting, seven used transit agency staff for monitoring, while two used contractors to perform this function. Seven of these agencies used performance metrics to establish incentives and/or penalties in contracts. 
Transit agencies and contractors cited benefits and challenges to contracting, while labor unions primarily noted disadvantages--most notably, reduced wages and benefits and a potential decline in safety and service, among other issues. Specifically, transit agencies GAO interviewed and the literature cited benefits to contracting, which vary based on the individual needs and circumstances of transit agencies. For example, transit agencies that use contractors view contracting as advantageous when starting or expanding services in order to avoid start-up costs--such as the large capital cost of acquiring new vehicles and hiring new staff. Contractors reported they could improve transit agencies' operational efficiency by providing the latest technologies, such as routing systems, and lower costs by providing more affordable insurance on vehicles. Transit agencies also cited some challenges to contracting, such as the agency's loss of direct control over operations. Officials from national and local unions GAO spoke with said that while contracting may provide some short-term cost savings to transit agencies, in their view the savings are almost entirely from lower wages and benefits paid by the private companies to employees.
Enacted in 1988, the Exon-Florio amendment to the Defense Production Act authorized the President to investigate the effects of foreign acquisitions of U.S. companies on national security and to suspend or prohibit acquisitions that might threaten national security. The President delegated investigative authority to the Committee on Foreign Investment in the United States, an interagency group responsible for monitoring and coordinating U.S. policy on foreign investment in the United States. Since the Committee’s establishment in 1975, membership has doubled, with the Department of Homeland Security being the most recently added member. In addition to the Committee’s 12 standing members, other agencies may be called on when their particular expertise is needed. In 1991, the Treasury Department, as Chair of the Committee, issued regulations to implement Exon-Florio. The law and regulations establish a four-step process for reviewing foreign acquisitions of U.S. companies: (1) voluntary notice by the companies; (2) a 30-day review to identify whether there are any national security concerns; (3) a 45-day investigation period to determine whether those concerns require a recommendation to the President for possible action; and (4) a presidential decision to permit, suspend, or prohibit the acquisition (see fig. 1). In most cases, the Committee completes its review within the initial 30 days because there are no national security concerns or concerns have been addressed, or the companies and the government agree on measures to mitigate identified security concerns. In cases where the Committee is unable to complete its review within 30 days, it may initiate a 45-day investigation or allow companies to withdraw their notifications. The Committee generally grants requests to withdraw. When the Committee concludes a 45-day investigation, it is required to submit a report with recommendations to the President. 
If Committee members cannot agree on a recommendation, the regulations require that the report to the President include the differing views of all Committee members. The President has 15 days after the investigation is completed to decide whether to prohibit or suspend the proposed acquisition, order divestiture of a completed acquisition, or take no action. Table 1 provides a breakdown of notifications and committee actions taken from 1997 through 2004 (the latest date for which data were available at the time of our 2005 review). Over the past decade, GAO has conducted several reviews of the Committee’s process and actions and has found areas where improvements were needed. In 2000, we found that gaps in the notification process raised concerns about the Committee’s ability to ensure transactions are notified. Our 2002 review, prompted by a lack of congressional insight into the process, again found weaknesses in the process. Specifically, we reported that member agencies could improve the agreements they negotiated with companies under Exon-Florio to mitigate national security concerns. We also questioned the use of withdrawals to provide additional time for reviews. While our most recent work indicated that member agencies had begun to take action to respond to some of our recommendations, concerns remained about the extent to which the Committee’s implementation of Exon-Florio had provided the safety net envisioned by the law. In 2005, we reported that a lack of agreement among Committee members on what defines a threat to national security and what criteria should be used to initiate an investigation may have limited the Committee’s analyses of proposed and completed foreign acquisitions. From 1997 through 2004, the Committee received a total of 470 notices of proposed or completed acquisitions, yet it initiated only 8 investigations. 
While neither the statute nor the implementing regulation defines “national security,” the statute provides a number of factors that may be considered in determining a threat to national security (see fig. 2). Some Committee member agencies argued for a more traditional and narrow definition of what constitutes a threat to national security—that is, (1) the U.S. company possesses export-controlled technologies or items; (2) the company has classified contracts and critical technologies; or (3) there is specific derogatory intelligence on the foreign company. Other members, including the Departments of Defense and Justice, argued that acquisitions should be analyzed in broader terms. According to officials from these departments, vulnerabilities could result from foreign control of critical infrastructure, such as control of or access to information traveling on networks. Vulnerabilities can also result from foreign control of critical inputs to defense systems, such as weapons system software development or a decrease in the number of innovative small businesses researching and developing new defense-related technologies. While these vulnerabilities may not pose an immediate threat to national security, they may create the potential for longer term harm to U.S. national security interests by reducing U.S. technological leadership in defense systems. For example, in reviewing a 2001 acquisition of a U.S. company, the Departments of Defense and Commerce raised several concerns about foreign ownership of sensitive but unclassified technology, including the possibility of this sensitive technology being transferred to countries of concern or losing U.S. government access to the technology. However, Treasury argued that these concerns were not national security concerns because they did not involve classified contracts, the foreign company’s country of origin was a U.S. 
ally, and there was no specific negative intelligence about the company's actions in the United States. In one proposed acquisition, disagreement over the definition of national security resulted in an enforcement provision being removed from a mitigation agreement between the foreign company and the Departments of Defense and Homeland Security. Defense had raised concerns about the security of its supply of specialized integrated circuits, which are used in a variety of defense technologies that the Defense Science Board had identified as essential to our national defense—technologies found in unmanned aerial vehicles, the Joint Tactical Radio System, and cryptography and other communications protection devices. However, Treasury and other Committee members argued that the security of supply issue was an industrial policy concern and, therefore, was outside the scope of Exon-Florio's authority. As a result of removing the provision, the President's authority to require divestiture under Exon-Florio was eliminated as a remedy in the event of non-compliance. Committee members also disagreed on the criteria that should be applied to determine whether a proposed or completed acquisition should be investigated. While Exon-Florio provides that the "President or the President's designee may make an investigation to determine the effects on national security" of acquisitions that could result in foreign control of a U.S. company, it does not provide specific guidance for the appropriate criteria for initiating an investigation of an acquisition. 
At the time of our work, Treasury, as Committee Chair, applied essentially the same criteria established in the law for the President to suspend or prohibit a transaction, or order divestiture: (1) there is credible evidence that the foreign controlling interest may take action to threaten national security and (2) no laws other than Exon-Florio and the International Emergency Economic Powers Act are adequate and appropriate to protect national security. However, the Defense, Justice, and Homeland Security Departments argued that applying these criteria at this point in the process is inappropriate because the purpose of an investigation is to determine whether or not credible evidence of a threat exists. Notes from a policy-level discussion of one particular case further corroborated these differing views. Committee guidelines required member agencies to inform the Committee of national security concerns by the 23rd day of a 30-day review—further compressing the limited time allowed by legislation to determine whether a proposed or completed foreign acquisition posed a threat to national security. According to one Treasury official, the information is needed a week early to meet the legislated 30-day requirement. While most reviews are completed in the required 30 days, some Committee members have found that completing a review within such short time frames can be difficult—particularly in complex cases. One Defense official said that without advance notice of the acquisition, time frames are too short to complete analyses and provide input for the Defense Department's position. Another official said that to meet the 23-day deadline, analysts have only 3 to 10 days to analyze the acquisition. In one instance, Homeland Security was unable to provide input within the 23-day time frame. 
If a review cannot be completed within 30 days and more time is needed to determine whether a problem exists or identify actions that would mitigate concerns, the Committee can initiate a 45-day investigation of the acquisition or allow companies to withdraw their notifications and refile at a later date. According to Treasury officials, the Committee’s interest is to ensure that the implementation of Exon-Florio does not undermine U.S. open investment policy. Concerned that public knowledge of investigations could devalue companies’ stock, erode confidence of foreign investors, and ultimately chill foreign investment in the United States, the Committee has generally allowed and often encouraged companies to withdraw their notifications rather than initiate an investigation. While an acquisition is pending, companies that have withdrawn their notification have an incentive to resolve any outstanding issues and refile as soon as possible. However, if an acquisition has been concluded, there is less incentive to resolve issues and refile, extending the time during which any concerns remain unresolved. Between 1997 and 2004, companies involved in 18 acquisitions withdrew their notification and refiled 19 times. In four cases, the companies had already concluded the acquisition before filing a notification. One did not refile until 9 months later and another did not refile for 1 year. Consequently, concerns raised by Defense and Commerce about potential export control issues in these two cases remained unresolved for as much as a year—further increasing the risk that a foreign acquisition of a U.S. company would pose a threat to national security. For the other two cases, neither company had refiled at the time we completed our work. In one case, the company had previously withdrawn and refiled more than a year after completing the acquisition. 
The Committee allowed it to withdraw the notification to provide more time to answer the Committee’s questions and provide assurances concerning export control matters. The company refiled, and was permitted to withdraw a second time because there were still unresolved issues. When we issued our report in 2005, 4 years had passed since the second withdrawal without a refiling. In the second case, the company—which filed with the Committee more than 6 months after completing its acquisition—was also allowed to withdraw its notification. At the time we issued our report, 2 years had passed without a refiling. In response to concerns about the lack of transparency in the Committee’s process, the Congress passed the Byrd Amendment to Exon-Florio in 1992, requiring a report to the Congress if the President made any decision regarding a proposed foreign acquisition. In 1992, another amendment also directed the President to report every 4 years on whether there was credible evidence of a coordinated strategy by one or more countries to acquire U.S. companies involved in research, development, or production of critical technologies for which the United States is a leading producer, and whether there were industrial espionage activities directed or assisted by foreign governments against private U.S. companies aimed at obtaining commercial secrets related to critical technologies. While the Byrd Amendment expanded required reporting on Committee actions, few reports have been submitted to the Congress because withdrawing and refiling notices to restart the clock has limited the number of cases that result in a presidential decision. Between 1997 and 2004, only two cases—both involving telecommunications systems—resulted in a presidential decision and a subsequent report to the Congress. 
Infrequent reporting of Committee deliberations on specific cases provides little insight into the Committee’s process to identify concerns raised during investigations and determine the extent to which the Committee has reached consensus on a case. Further, despite the 1992 requirement for a report on foreign acquisition strategies every 4 years, at the time of our work there had been only one report—in 1994. However, another report, in response to this requirement, was recently delivered to the Congress. In conclusion, the effectiveness of Exon-Florio as a safety net depends on how the broad scope of its authority is implemented in today’s globalized world—where identifying threats to national security has become increasingly complex. While Exon-Florio provides the Committee on Foreign Investment in the United States the latitude to define what constitutes a threat to national security, the more traditional interpretation fails to fully consider factors currently embodied in the law. Further, the Committee guidance requiring reviews to be completed within 23 days to meet the 30-day legislative requirement, along with the reluctance to proceed to an investigation, limits agencies’ ability to complete in-depth analyses. However, the alternative—allowing companies to withdraw and refile their notifications—increases the risk that the Committee, and the Congress, could lose visibility over foreign acquisitions of U.S. companies. The criterion for reporting specific cases to the Congress only after a presidential decision contributes to the opaque nature of the Committee’s process. Our 2005 report laid out several matters for congressional consideration to (1) help resolve the differing views as to the extent of coverage of Exon-Florio, (2) address the need for additional time, and (3) increase insight and oversight of the process. 
Further, we suggested that, when withdrawal is allowed for a transaction that has been completed, the Committee establish interim protections where specific concerns have been raised, specific time frames for refiling, and a process for tracking any actions being taken during a withdrawal period. We have been told that some of these steps are now being taken. Madam Chairwoman, this concludes my prepared statement. I will be happy to answer any questions you or other Members of the Subcommittee may have. For information about this testimony, please contact Ann M. Calvaresi-Barr, Director, Acquisition and Sourcing Management, at (202) 512-4841 or calvaresibarra@gao.gov. Other individuals making key contributions to this product include Thomas J. Denomme, Gregory K. Harmon, Paula J. Haurilesko, John J. Marzullo, Russell Reiter, Karen Sloan, and Marie Ahearn. Our understanding of the Committee’s process is based on our 2005 work, which built on our review of the process and our discussions with agency officials for our 2002 report. For our 2005 review, and to expand our understanding of the Committee’s process for reviewing foreign acquisitions of U.S. companies, we met with officials from the Departments of Commerce, Defense, Homeland Security, Justice, and the Treasury—the agencies that are most active in the review of acquisitions—and discussed their involvement in the process. Further, we conducted case studies of nine acquisitions that were filed with the Committee between June 28, 1995, and December 31, 2004. We selected acquisitions based on recommendations by Committee member agencies and the following criteria: (1) the Committee permitted the companies to withdraw the notification; (2) the Committee or member agencies concluded agreements to mitigate national security concerns; (3) the foreign company had been involved in a prior acquisition notified to the Committee; or (4) GAO had reviewed the acquisition for its 2002 report. 
We did not attempt to validate the conclusions reached by the Committee on any of the cases we reviewed. To determine whether the weaknesses in provisions to assist agencies in monitoring agreements that GAO had identified in its 2002 report had been addressed, we analyzed agreements concluded under the Committee’s authority between 2003 and 2005. We conducted our review from April 2004 through July 2005 in accordance with generally accepted government auditing standards. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Exon-Florio amendment to the Defense Production Act of 1950, enacted in 1988, authorized the President to suspend or prohibit foreign acquisitions of U.S. companies that pose a threat to national security. The Committee on Foreign Investment in the United States--chaired by the Department of Treasury with 11 other members, including the Departments of Commerce, Defense, and Homeland Security--implements Exon-Florio through a four-step review process: (1) voluntary notice by the companies of pending or completed acquisitions; (2) a 30-day review to determine whether the acquisition could pose a threat to national security; (3) a 45-day investigation period to determine whether concerns require possible action by the President; and (4) a presidential decision to permit, suspend, or prohibit the acquisition. Over the past decade, GAO has conducted several reviews of the Committee's process and has found areas where improvements were needed. GAO's most recent work, conducted in 2005, indicated concerns remained. Exon-Florio reviews are meant to serve as a safety net when other laws may be inadequate to protect national security. GAO found that several aspects of the review process may have weakened the law's effectiveness. First, member disagreement about what defines a threat to national security may have limited the Committee's analyses. Some argued that reviews should be limited to concerns about export-controlled technologies or items, classified contracts, or specific derogatory intelligence concerning the company. Others argued for a broader scope, one that considers potential threats to U.S. critical infrastructure, defense supply, and technology superiority. Committee members also differed on the criteria that should be used to determine when an investigation is warranted. 
Some applied essentially the criteria in the law for a presidential decision--that is, there is credible evidence that the foreign controlling interest may take action that threatens national security and that no laws other than Exon-Florio and the International Emergency Economic Powers Act are adequate to protect national security. Others argued that these criteria are inappropriate because the purpose of an investigation is to determine if credible evidence of a threat exists. While most cases can be completed within the 30-day review period, complex acquisitions may require more time. Concerned that an investigation could discourage foreign investment, the Committee allowed companies to withdraw notifications rather than proceed to investigation. While this practice can provide additional review time without chilling foreign investment, it may also heighten the risk to national security in transactions where there are concerns and the acquisition has been completed or is likely to be completed during the withdrawal period. Finally, because few cases are investigated, few require a presidential decision, giving Congress little insight into the Committee's process.
H5N1 has spread to infect poultry and wild birds over a wide geographic area. After appearing in southeastern China and Hong Kong in 1996 and 1997, the virus reappeared in late 2003 and early 2004 in a number of other Southeast Asian countries. In 2005 and 2006, it spread rapidly to countries in other parts of Asia and to Europe and Africa. Through December 2006, H5N1 had been detected in poultry and wild birds in nearly 60 countries. Figure 1 shows the progression of the disease across countries and also notes which of those countries have experienced human cases. H5N1 has infected increasing numbers of humans. WHO confirmed only 4 cases of H5N1 infection among humans in 2003, and 3 of these occurred in one country, Vietnam. In contrast, WHO confirmed 115 human cases in 2006, in nine different countries. Table 1 shows how the number and distribution of human cases grew from 2003 through 2006. The largest numbers of human cases occurred in Southeast Asian countries where the virus is well established in wild and domestic birds. Pandemics can occur when influenza strains emerge that have never circulated among humans but can cause serious illness in them and can pass easily from one person to the next. H5N1 has shown that it can cause serious illness in humans, and could spark a pandemic if it evolves into a strain that has the ability to pass easily from one human to the next. H5N1 may evolve into such a strain gradually, through accumulation of a number of small mutations, or suddenly, through the introduction of genetic material from another influenza virus. Influenza A viruses, which cause both avian influenza outbreaks and human influenza pandemics, occur naturally in wild birds and can also infect pigs, humans, and other mammals. The various subtypes, including H5N1, mutate as they reproduce in their avian or mammal hosts. These small mutations continually produce new strains with slightly different characteristics. 
More rarely, when an animal or human is infected with two different subtypes, an entirely new subtype can emerge. Scientists believe that the 1957 and 1968 pandemics began when subtypes circulating in birds and humans simultaneously infected and combined into new subtypes in other host animals, most likely pigs. Disease experts caution that there are significant gaps in our understanding of the H5N1 virus in wild and domestic birds and in humans, and it is not possible to quantify the pandemic risk presented by this strain. However, they generally agree that the level of risk that H5N1 will spark a pandemic varies with (1) environmental factors, defined as the extent to which a country or region has already become infected with the virus—or may become infected from a neighboring country—and provides conditions in which the virus can spread in poultry and infect humans, and (2) preparedness factors, defined as the extent to which the country or region is prepared to detect the virus in poultry and humans and respond appropriately. Taking both environmental and preparedness factors into consideration, the risk of a pandemic emerging from the current H5N1 epidemic in poultry is considered higher in countries or regions where

- the virus is well-established among domestic poultry;
- there is substantial risk that wild birds or unregulated trade in poultry and other birds will introduce the virus from neighboring infected countries;
- large numbers of poultry are raised in heavily populated areas;
- high-risk agricultural practices (such as allowing poultry unrestricted access to family homes and selling them in “wet markets”) are common;
- local authorities have little ability to detect, diagnose, and report H5N1 cases or outbreaks in either poultry or humans; or
- local authorities have little ability to respond (apply control measures) and contain outbreaks when they occur. 
In such conditions, outbreaks among humans or poultry are more likely to occur and to persist for prolonged periods before they are detected or investigated. This increases the potential for mutations, and thus the emergence of a pandemic strain. The global community maintains separate systems for addressing influenza and other infectious diseases in animals and humans. At the country level, agricultural agencies are responsible for addressing disease threats to animals, while public health agencies are responsible for addressing disease threats to humans. International organizations support and coordinate these national efforts. In particular, OIE and FAO share lead responsibility for addressing infectious disease threats to animal health, while WHO leads efforts to safeguard humans. National agencies with technical expertise, such as USDA and HHS, assist in these efforts. The animal and human health systems have traditionally approached influenza in different ways. The animal health system has emphasized measures to protect flocks from exposure to influenza—for example, by reducing contact with wild birds—and, when outbreaks nonetheless occur, taking action to contain them and eradicate threatening strains. Outbreak control measures include (1) identifying and isolating infected zones, (2) “stamping out” the virus by culling (killing) all poultry within these zones, and (3) cleaning and disinfecting facilities before reintroducing poultry. Vaccines that prevent clinical illness in poultry—and decrease the risk of transmission to both other poultry and humans—are available. However, these vaccines do not completely prevent influenza viruses from infecting and replicating in apparently healthy poultry, and veterinary authorities recommend their use only in conjunction with other disease control measures. No effective antiviral drugs are available for poultry, and thus animal health agencies do not recommend their use. 
The human health system’s approach to both seasonal and pandemic influenza has traditionally emphasized development and application of vaccines to limit spread and protect individuals. However, while vaccines are likely to play a key role in mitigating the impact of the next pandemic, they are likely to play little role in forestalling its onset, barring major changes in technology. Prior to a strain being identified, the pharmaceutical industry cannot currently produce vaccines that are certain to be effective against it. Rather, when a new strain is identified, 6 months or more are required to develop and reach full production capacity for new vaccines. Therefore, a pandemic will likely be well under way before a vaccine that is specifically formulated to counteract the pandemic strain becomes available. Antiviral drugs are also used to treat and prevent seasonal influenza in humans and could be used in the event of a pandemic to contain or slow the spread of the virus. In contrast to the approach used with poultry, the human public health community has not generally attempted to contain an initial outbreak of a pandemic-potential strain or to eradicate it while it is still confined to a limited area. The U.S. government has developed a national strategy for addressing the threats presented by H5N1, and has also worked with its international partners to develop an overall global strategy that is compatible with the U.S. approach. In November 2005 the Homeland Security Council published an interagency National Strategy for Pandemic Influenza, followed in May 2006 by an Implementation Plan that assigns responsibilities to specific U.S. agencies. The U.S. strategy, in addition to outlining U.S. plans for coping with a pandemic within its own territory, states that the United States will work to “stop, slow, or otherwise limit” a pandemic beginning outside its own territory. 
The strategy has three pillars that provide a framework for its implementation: (1) preparedness and communications, (2) surveillance and detection, and (3) response and containment. The United States has also worked with UN agencies, OIE, and other governments to develop an overall international strategy. Figure 2 shows key steps in the development of this international strategy in relation to the spread of the H5N1 virus. These steps included the appointment of a UN System Influenza Coordinator and periodic global conferences to review progress and refine the strategy. The most recent global conference was held in Bamako, Mali, in early December 2006. At the global level, according to the UN coordinator, the overall strategic goal of avian and pandemic influenza-related efforts is to create conditions that enable all countries to (1) control avian influenza in poultry, and thus reduce the risk that it poses for humans; (2) watch for sustained human-to-human transmission of the disease (through improved surveillance) and be ready to contain it; and (3) if containment is not successful, mitigate the impact of a pandemic. To guide efforts to improve capacity for performing these tasks, the UN System Influenza Coordinator has identified seven broad objectives. Four of these focus in large measure on improving capacity to forestall a pandemic:

- Improve animal health practices and the performance of veterinary services.
- Sustain livelihoods of poorer farmers whose animals may be affected by illness or by control measures, including culling programs.
- Strengthen public health services in their ability to protect against newly emerging infections.
- Provide public information to encourage behavioral changes that will reduce pandemic risks.

Although U.S. 
and international assessments have identified serious and widespread environmental and preparedness-related risks in many countries, gaps in the available information on both types of risk have hindered comprehensive, well-informed comparisons of risk levels by country. Assessment efforts that we examined, carried out by U.S. and international agencies from late 2005 through late 2006, illustrate these gaps. Efforts to assemble more comprehensive information are under way, but will take time to produce results. Despite these limitations, the Homeland Security Council has used available information to designate about 20 priority countries for U.S. assistance, and U.S. officials have determined that the United States should focus, in particular, on certain of these countries where pandemic risk levels appear comparatively high, including Indonesia, Nigeria, and Egypt. A global analysis based on environmental factors that USAID originally conducted during 2005 identified areas at greater risk for outbreaks but revealed gaps in available information. USAID considered two factors in its analysis: (1) the extent to which H5N1 was already present in animals and (2) the likelihood that the virus will be introduced from another country through factors such as trade in poultry and other birds and bird migration. USAID undertook this assessment to inform its decisions about spending priorities in the initial phase of heightened concern about human pandemic risk from H5N1, when very little risk information was available, according to USAID officials. USAID used OIE data on reported animal cases. For countries that had not yet reported cases, USAID estimated the risk of introduction based on proximity to affected countries and available information on poultry trade and bird migration patterns. 
USAID concluded that the countries at highest risk for new or recurring H5N1 outbreaks, or both, were those in Southeast Asia where the disease was well-established, with widespread and recurring infections in animals since 2003 (see fig. 3). Countries that were comparatively distant from those that had already reported cases were deemed at lowest risk. We identified three constraints on the reliability of these USAID categorizations. First, global surveillance of the disease among domestic animals has serious shortfalls. While OIE and FAO collaborate to obtain and confirm information on suspected H5N1 cases, surveillance capacity remains weak in many countries. Second, estimates of risk for disease transmission from one country to another, as well as among regions within countries, are difficult to make because of uncertainties about how factors such as trade in poultry and other birds and wild bird migration affect the movement of the disease. Specifically, illegal trade in birds is largely undocumented and movement of the virus through the wild bird population is poorly understood. Finally, these categorizations did not take other elements of environmental risk, such as high-risk agricultural practices, into account. USAID, the State Department, and the UN System Influenza Coordinator have each administered questionnaires to assess country-by-country avian and pandemic influenza preparedness. These efforts identified widespread preparedness weaknesses and provided information for planning improvement efforts in individual countries. However, the results did not provide information that was sufficiently detailed or complete to permit clear categorization of countries by level of preparedness. 
During 2005, USAID and the State Department collected country-level data that indicated widespread weaknesses in countries’ ability to detect and respond to avian and pandemic influenza, but did not provide enough information to place the examined countries in preparedness categories. USAID and the State Department sent separate questionnaires to their respective missions around the world to obtain a quick overview of avian and pandemic influenza preparedness by country. The two agencies requested information on key areas of concern, including surveillance, response, and communications capacity, and stockpiles of drugs and other supplies. These efforts identified widespread preparedness shortfalls. Our analysis of a selection of the USAID and State Department results found, for example, that many of the countries had not prepared stockpiles of antiviral drugs or did not have plans for compensating farmers in the event that culling becomes necessary. Missions in African countries reported the greatest overall shortfalls. (See app. V for our analysis of the USAID and State Department preparedness responses.) USAID disease experts used this information to rate each country according to a numerical “preparedness index,” but decided against using the results of the exercise to help establish U.S. assistance priorities. According to USAID headquarters officials, the information submitted by its missions provided insights on preparedness strengths and weaknesses in the examined countries but was not sufficiently complete or detailed to allow them to rate countries on a numerical scale. The officials noted that they had difficulty interpreting the largely qualitative information provided by their field missions and, in some instances, found that the responses did not match their experience in the relevant countries. In addition, the USAID exercise did not include developed countries or developing countries where the agency does not maintain a presence. 
The State Department did not use the information it had collected to categorize countries by preparedness level. The UN System Influenza Coordinator, in collaboration with the World Bank, has completed two data collection and analysis efforts that provided useful information on country preparedness. However, this information was not sufficiently complete or comprehensive to allow clear country comparisons. These efforts, which surveyed UN mission staff in countries, were conducted before the June and December 2006 global conferences on avian and pandemic influenza preparedness, to inform discussion at the conferences. In collaboration with the World Bank, UN staff have used the information, in addition to information from government officials and the public domain, to summarize each country’s status with regard to seven “success factors.” The staff also analyzed the aggregate results for all countries and for specific regions. Similar to the USAID effort, this exercise identified widespread shortcomings in country-level preparedness. For example, the UN found that about one-third of the countries lacked the capacity to diagnose avian influenza in humans. Figure 4 presents the UN’s summary for a representative country, Bangladesh. The information indicates, for example, that programs were in place to strengthen Bangladesh’s surveillance and reporting for avian influenza in both animals and humans, but capacity to detect outbreaks was still constrained. Like USAID, the UN data-gathering effort encountered obstacles that preclude placing countries in preparedness categories. As shown in figure 4, for example, the UN mission in Bangladesh could not provide a clear response concerning the country’s planning for farmer compensation in the event that poultry culling becomes necessary. In addition, the UN sought information from its mission staff in about 200 countries, but obtained information on 141 of these in its first round of data gathering and 80 in its second. 
The UN cautioned that there had been no independent validation of the information obtained on individual countries, and that the information could not be used to compare countries to one another or to make a comprehensive evaluation of preparedness levels. The World Bank has conducted more in-depth assessments of both environmental and preparedness-related risk factors in some countries (those that have expressed interest in World Bank assistance), but they do not provide a basis for making complete or comprehensive global comparisons. The World Bank has developed guidance for its staff to apply in generating the information needed to design avian and pandemic influenza preparedness improvement projects in individual countries. The guidance instructs bank staff charged with preparing assistance projects to examine and take into account both environmental and preparedness-related risk factors. In preparing their projects, bank staff often work with officials from other organizations with technical expertise, including U.S. agencies, WHO, and FAO, and conduct fieldwork in the countries requesting bank assistance. As of December 2006, the World Bank reported that it had completed or was conducting assessments of national needs in more than 30 countries. The following are examples of preparedness shortfalls in the human and animal sectors identified by World Bank teams:

- District-level staff responsible for human disease surveillance typically are not qualified in epidemiology and lack the equipment needed to report health events in a timely manner.
- Public health laboratories are not capable of diagnosing influenza in humans.
- The human health care system has insufficient professional staff and lacks essential drugs and needed equipment.
- Veterinary services are inadequately equipped and trained to deal with large-scale outbreaks.
- Most available laboratory facilities are outdated, with laboratory staff needing substantial training. 
Although the World Bank’s assessment efforts generate information that is useful in designing country-specific programs, they do not provide a basis for making complete or comprehensive global comparisons of pandemic risk levels. The World Bank performs such studies only in countries that request bank assistance, and incorporates its findings into project documents as needed. That is, bank staff members cite assessment findings to support particular points in individual project plans. The World Bank does not assess risk in countries that have not requested bank assistance, nor does it publish its assessment results in independent documents that employ a common format, and thus could be readily employed to make country-by-country comparisons. U.S. government and international agencies have initiated several data-gathering and analysis efforts to provide more complete information on country preparedness levels. However, these efforts will take time to produce substantial results. First, HHS’s Centers for Disease Control and Prevention (CDC) is developing an assessment protocol or “scorecard” that the United States could employ to obtain systematic, and therefore comparable, information on pandemic preparedness levels by country. CDC officials explained that no such assessment tool currently exists. CDC officials are developing indicators that could be applied to rate core capabilities in key areas, such as differentiating among influenza strains and identifying clusters of human illness that may signal emergence of a pandemic strain. According to CDC officials, creating such a system would provide the United States with a basis for comparing preparedness in different countries, identifying response capabilities within countries that are particularly weak, and—over time—gauging the impact of U.S. efforts to address these shortcomings. CDC officials said that they hoped to begin testing these indicators before the end of 2007. 
They stated that their efforts have so far been limited to human public health functions, but they have discussed with USDA and USAID opportunities to incorporate animal health functions into this format once the prototype has been worked out for human health capabilities. Second, the UN System Influenza Coordinator’s staff has indicated that it is working with the World Bank to improve the quality of the UN’s country preparedness questionnaire and increase the response rate. The goal is for their periodic efforts to assess global and country-level preparedness to generate more useful information. The impact of these efforts will not be clear until the staff publishes the results of its third survey prior to the next major global conference on avian and pandemic influenza, which is scheduled to take place in New Delhi in December 2007. Third, in 2006 OIE published an evaluation tool that can be used to assess the capacity of national veterinary services. While it has established standards for national veterinary services, the organization had not previously developed a tool that could be used to determine the extent to which national systems meet these standards. With assistance from the United States and other donors, OIE reports that it has trained over 70 people in how to apply its evaluation tool and has initiated assessments of veterinary services in 15 countries. A senior OIE official indicated that the organization intends to complete assessments of over 100 countries over the next 3 years. Finally, under the terms of a 2005 revision of the International Health Regulations, WHO member countries have agreed to establish international standards for “core capacity” in disease surveillance and response systems and to assess the extent to which their national systems meet these standards. However, guidance on how to conduct such assessments is still being developed. 
Such assessments would provide consistent information on preparedness in all participating countries. WHO is required to support implementation of these regulations in several ways, including supporting assessments of national capacity. The UN System Influenza Coordinator has identified development of national systems that comply with the new international standards as a key objective of global efforts to improve pandemic preparedness, and WHO has begun developing assessment tools. However, while the regulations enter into force in June 2007, member states are not required to assess their national capacities until 2009 and are not required to come into compliance with the revised regulations until 2012. The United States has prioritized countries for U.S. assistance, with the Homeland Security Council identifying about 20 “priority countries,” and agency officials have determined that the United States should focus in particular on certain of these countries where pandemic risk levels appear comparatively high. In May 2006, the Homeland Security Council categorized countries, using the limited information available on environmental and preparedness-related risks from U.S. and international agencies, and also taking U.S. foreign policy concerns into account. The council differentiated among countries primarily according to available information on H5N1’s presence in these countries or their proximity to countries that have reported the disease. According to agency officials and planning documents, more detailed information on environmental risk factors and country preparedness would have provided a more satisfactory basis for differentiating among countries, but such information was not available. In May 2006 the council grouped 131 countries into four risk categories: At-risk countries: Unaffected countries with insufficient medical, public health, or veterinary capacity to prevent, detect, or contain influenza with pandemic potential. 
High-risk countries: At-risk countries located in proximity to affected countries, or in which a wildlife case of influenza with pandemic potential has been detected. Affected countries: At-risk countries experiencing widespread and recurring or isolated cases in humans or domestic animals of influenza with human pandemic potential. Priority countries: High-risk or affected countries meriting special attention because of the severity of their outbreaks, their strategic importance, their regional role, or foreign policy priorities. Through this process, the Homeland Security Council initially identified 19 U.S. priority countries. They include countries in Southeast Asia where H5N1 has become well-established (such as Indonesia) as well as countries that have experienced severe outbreaks (such as Egypt); have not yet experienced major outbreaks, but U.S. foreign policy considerations mandate their identification as a priority (such as Afghanistan); or are playing an important regional role in responding to the H5N1 threat (such as Thailand). The council has updated the country categorizations, according to State Department officials, and there have been slight changes since the original list was completed. According to these officials, the council had designated 21 countries as priority countries as of March 2007. In addition, U.S. agency officials stated that certain of these priority countries have emerged as being of especially high concern, and the State Department is coordinating preparation of interagency operating plans for U.S. assistance to these countries. Based on ongoing evaluation of both environmental and preparedness-related factors, agency officials stated that Indonesia, Egypt, Nigeria, and a small number of Southeast Asian countries present comparatively high levels of pandemic risk and thus merit greatest attention. 
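The council’s four-tier scheme described above amounts to a nested classification: each successive category builds on the one before it. A minimal sketch follows; the field names and decision logic are an illustrative reconstruction, not the council’s actual Country Prioritization Matrix.

```python
# Hypothetical sketch of the Homeland Security Council's four-tier risk
# categorization described above. Field names and rules are illustrative
# reconstructions, not the council's actual prioritization matrix.

def categorize(country: dict) -> str:
    """Return the highest applicable risk tier for a country record."""
    # At-risk: insufficient medical, public health, or veterinary capacity.
    at_risk = country["insufficient_capacity"]
    # Affected: an at-risk country with human or domestic-animal cases.
    affected = at_risk and country["cases_reported"]
    # High-risk: an at-risk country near affected countries or with a wildlife case.
    high_risk = at_risk and (country["near_affected"] or country["wildlife_case"])
    # Priority: a high-risk or affected country meriting special attention.
    priority = (affected or high_risk) and (
        country["severe_outbreaks"]
        or country["strategic_importance"]
        or country["regional_role"]
    )
    if priority:
        return "priority"
    if affected:
        return "affected"
    if high_risk:
        return "high-risk"
    if at_risk:
        return "at-risk"
    return "uncategorized"

example = {
    "insufficient_capacity": True,
    "cases_reported": True,
    "near_affected": True,
    "wildlife_case": False,
    "severe_outbreaks": True,
    "strategic_importance": False,
    "regional_role": False,
}
print(categorize(example))  # -> priority
```

The nesting explains why the council could place all 131 countries in exactly one tier: a country experiencing severe outbreaks is also high-risk and at-risk, but only its highest tier is reported.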
According to the State Department, a plan for Indonesia has been completed and plans are being prepared for Egypt, Nigeria, and three additional Southeast Asian countries, as well as for U.S. assistance to international organizations such as WHO. According to State Department officials, each plan will provide information on a country’s avian and pandemic influenza preparedness strengths and weaknesses and lay out a U.S. interagency strategy for addressing them, taking into account the actions of the host governments and other donors. The country plans are to be laid out according to the three pillars of the U.S. National Strategy for Pandemic Influenza: preparedness and communications, surveillance and detection, and response and containment. The United States has played a prominent role in global efforts to improve avian and pandemic influenza preparedness, committing more funds than any other donor country and creating a framework for monitoring its efforts. According to data assembled by the World Bank, U.S. commitments amounted to about 27 percent of overall donor assistance as of December 2006. U.S. agencies and other donors are supporting efforts to improve preparedness at the country-specific, regional, and global levels, and the bulk of the country-specific assistance has gone to U.S. priority countries. USAID and HHS have provided most of the U.S. funds, while the State Department coordinates the United States’ international efforts. The U.S. National Strategy for Pandemic Influenza Implementation Plan establishes a framework for U.S. efforts to improve international (and domestic) preparedness, listing specific action items, assigning agencies responsibility for completing them, and specifying performance measures and time frames for determining whether they have been completed. The Homeland Security Council is responsible for monitoring the plan’s implementation. 
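The 27 percent U.S. share cited above follows directly from the World Bank totals reported through December 2006 (roughly $377 million of about $1.4 billion committed by all donors); a trivial arithmetic check:

```python
# Back-of-the-envelope check of the donor-share figure cited above, using
# the approximate World Bank totals reported through December 2006.
US_COMMITTED = 377_000_000       # U.S. commitments (approximate)
ALL_COMMITTED = 1_400_000_000    # all donors combined (approximate)

us_share = US_COMMITTED / ALL_COMMITTED
print(f"U.S. share of total commitments: {us_share:.0%}")  # -> 27%
```

Because the underlying pledges and commitments data are self-reported by donors (see the data-reliability discussion later in this report), such shares are best read as approximations rather than precise comparisons.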
The council reported in December 2006 that all action items due to be completed by November had been completed, and provided evidence of timely completion for the majority of the items. As shown in figure 5, the United States has been a leader in financing efforts to improve preparedness for pandemic influenza around the world. Through December 2006, the United States had committed about $377 million to improve global preparedness for avian and pandemic influenza. This amounted to about 27 percent of the $1.4 billion committed by all donors combined; exceeded the amounts other individual donors, including the World Bank, the Asian Development Bank, and Japan, had committed; and was also greater than combined commitments by the European Commission and European Union member countries. In terms of pledged amounts, the United States has pledged $434 million, behind the World Bank and the Asian Development Bank, which offer loans and grant assistance. The United States and other donors are supporting efforts to improve preparedness at the country-specific, regional, and global levels (see fig. 6). According to the World Bank, more than one-third of U.S. and total global commitments have gone to assist individual countries. Substantial shares of U.S. and global commitments also have been directed to regionally focused programs, with primary emphasis on the Asia-Pacific region, and to relevant global organizations, with primary emphasis on WHO and FAO (see app. VI for additional detail). More than half of U.S. funding in the “other” category has been used to stockpile nonpharmaceutical equipment, such as protective suits for workers involved in addressing outbreaks in birds or humans. The other category also includes support for research, wild bird surveillance, and a variety of other purposes. The bulk of U.S. 
and other donors’ country-specific commitments have been to countries that the United States has designated as priorities, with funding concentrated among certain of these countries (see fig. 7). Of the top 15 recipients of committed international funds, 11 are U.S. priority countries. According to data compiled by the World Bank, about 72 percent of U.S. country-specific commitments and about 76 percent of overall donor country-specific commitments through December 2006 were to U.S. priority countries. As figure 7 shows, Vietnam and Indonesia have been the leading recipients of country-specific commitments from the United States and from other donors. Indonesia, which U.S. officials have indicated is their highest- priority country, has received the largest share of U.S. country-specific commitments (about 18 percent), followed by Vietnam and Cambodia. USAID, HHS, USDA, DOD, and the State Department carry out U.S. international avian and pandemic influenza assistance programs, with USAID and HHS playing the largest roles. According to funding data provided by these agencies, USAID accounts for 51 percent of U.S. planned spending, with funds going to provide technical assistance, equipment, and financing for both animal and human health-related activities. HHS accounts for about 40 percent of the total, with the focus on technical assistance and financing to improve human disease detection and response capacity. USDA provides technical assistance and conducts training and research programs, and DOD stockpiles protective equipment. The State Department leads the federal government’s international engagement on avian and pandemic influenza and coordinates U.S. international assistance activities through an interagency working group. Figure 8 shows planned funding levels by agency. The U.S. National Strategy for Pandemic Influenza Implementation Plan, adopted in May 2006, provides a framework for monitoring U.S. 
efforts to improve both domestic and international preparedness. The plan assigns agencies responsibility for completing specific action items under the three pillars of the overall U.S. strategy (preparedness and communications, surveillance and detection, and response and containment) and, in most cases, specifies performance measures and time frames for determining whether they have been completed. The Homeland Security Council is responsible for monitoring the plan’s implementation. In its international component, the Implementation Plan identifies 84 action items. It designates HHS as the lead or co-lead agency for 34 of these, the State Department for 25, USAID for 19, USDA for 19, and DOD for 11. Table 2 shows the distribution of planned funding by agency within each of the three pillars in the strategy. Appendix VII provides information on obligations by agency and pillar. Within the preparedness and communications pillar, the Implementation Plan assigns U.S. agencies responsibility for action items that focus on (1) planning for a pandemic; (2) communicating expectations and responsibilities; (3) producing and stockpiling vaccines, antiviral drugs, and other medical material; (4) establishing distribution plans for such supplies; and (5) advancing scientific knowledge about influenza viruses. For example, action item 4.1.5.2 assigns HHS and USAID lead responsibility for setting up stockpiles of protective equipment and essential commodities (other than vaccines and antiviral drugs) with action to be completed within 9 months—that is, by February 2007 (see fig. 9). Through fiscal year 2006, USAID reported spending about $56 million to create a stockpile of personal protective equipment (PPE) kits and other nonmedical commodities to facilitate outbreak investigation and response. The USAID stockpile consisted of 1.5 million PPE kits to be used by personnel investigating or responding to outbreaks, 100 laboratory kits, and 15,000 decontamination kits. 
As of October 2006, USAID reported having deployed approximately 193,000 PPE kits for immediate or near-term use in more than 60 countries (see app. VIII). To improve global surveillance and detection capacity, the Implementation Plan assigns U.S. agencies responsibility for action items that focus on (1) ensuring rapid reporting of outbreaks and (2) using surveillance to limit their spread. For example, action item 4.2.2.4 assigns HHS lead responsibility for training foreign health professionals to detect and respond to infectious diseases such as avian influenza, with action to be completed within 12 months—that is, by May 2007 (see fig. 10). In 2006, HHS established or augmented five regional global disease detection and response centers located in Egypt ($4.4 million), Guatemala ($2 million), Kenya ($4.5 million), Thailand ($6.5 million), and China ($3.9 million) to enhance global disease surveillance and response capacity. Among other things, these centers provide training in field epidemiology and laboratory applications. For example, in July 2006, the Thailand center conducted a workshop aimed at teaching public health officials what to do when investigating a respiratory disease outbreak that may signal the start of a pandemic. More than 100 officials from 14 countries participated in this workshop, which was cosponsored by WHO and Thai authorities. To improve global response and containment capacity, the Implementation Plan assigns U.S. agencies responsibility for action items that focus on (1) containing outbreaks; (2) leveraging international medical and health surge capacity; (3) sustaining infrastructure, essential services, and the economy; and (4) ensuring effective risk communication. Action item 4.3.1.5, for example, assigns USDA and USAID lead responsibility for supporting operational deployment of response teams when outbreaks occur in poultry (see fig. 11). 
In 2006, USDA and USAID supported the creation of a crisis management center at FAO to coordinate and respond to avian influenza outbreaks globally. According to FAO, the center is able to dispatch its experts to any location in the world in under 48 hours. USAID and USDA have provided approximately $5 million in support to the center. USDA detailed three veterinary specialists to the center for headquarters operations as well as an official to serve as its deputy director. USDA is also providing experts to respond to outbreaks. USAID has directed its support toward enhancing coordination with WHO on rapid deployment of joint animal health/human health teams and facilitating operations in underresourced African countries. The Homeland Security Council’s first progress report on U.S. pandemic influenza-related efforts reported that agencies had completed all of the 22 international action items scheduled for completion by November 2006. In December 2006, the council issued a compendium of the action items in the Implementation Plan, with updates on the corresponding performance measures. The council reported that all 22 of the international action items in the Implementation Plan that agencies were to complete by November 2006 had been completed. (The 84 action items in the international section of the Implementation Plan have time frames for completion that range from 3 months to 2 years.) The Homeland Security Council’s report did not clearly indicate the basis for determining completion in a number of cases, generally because the report did not fully reflect agency efforts or the wording of the performance measure made it difficult for agency staff to respond. Our review of the progress report found that for 14 of the 22 action items, the report directly addressed the specified performance measures and indicated that these measures had been addressed within the specified time frames. 
However, for 8 of the action items, the information in the progress report did not directly address the performance measure or did not indicate that the completion deadline had been met. Based on interviews and information we obtained from the responsible agencies, we determined that the lack of clarity in these cases was primarily because of omission of key facts on agency activities or agency difficulties in reporting on poorly worded performance measures. For example, one action item directed DOD to prepare to limit the spread of a pandemic-potential strain by controlling official military travel between affected areas and the United States. The performance measure was designation of military facilities that could serve as points of entry from affected areas. The council’s report described the department’s preparedness for controlling travelers’ movements but did not state that DOD had identified facilities that could serve as points of entry. Our review of DOD documents indicated that the department had designated such facilities. A second action item assigned the State Department lead responsibility for developing plans to communicate U.S. avian and pandemic influenza objectives to key stakeholders. The performance measure was the “number and range of target audiences reached” and the impact of relevant efforts on the public. The council’s report provided a rough estimate of the number of people reached through U.S. government communication efforts to date. However, State Department officials told us that the performance measure was difficult to address because they did not have the means to accurately estimate the effective reach or impact of their efforts. Difficulties in obtaining and applying accurate and complete information present an overarching challenge to U.S. efforts to identify countries at greatest risk and effectively target resources against the threat presented by the H5N1 virus. 
In particular, although country preparedness is a primary consideration in determining relative risk levels, U.S. determinations on priority countries have relied primarily on information about environmental risks, which is itself incomplete. While the United States, the UN, and the World Bank, as well as WHO and OIE, are refining and expanding their efforts to gather useful information, substantial gaps remain in our understanding of both environmental and preparedness-related risks in countries around the world. With strong leadership from the United States, the international community has launched diverse efforts to increase global preparedness to forestall an influenza pandemic. These efforts constitute a substantial response to the threat presented by H5N1. They reflect significant international cooperation, and the U.S. National Strategy for Pandemic Influenza Implementation Plan provides a useful framework for managing U.S. agencies’ participation in these efforts. The Homeland Security Council’s first update on U.S. efforts and UN reports on donor efforts in general suggest that U.S. and global efforts to improve preparedness are producing results, but challenges remain in accurately measuring their impact. Many countries remain relatively unprepared to recognize or respond to highly pathogenic influenza in poultry or humans, and sustained efforts will be required to overcome these challenges. USAID, HHS, and USDA provided written comments on a draft of this report. These comments are reproduced in appendixes II, III, and IV. In addition, Treasury provided oral comments. HHS and Treasury also provided technical comments, as did the Department of State, DOD, WHO, the World Bank, and the United Nations System Influenza Coordinator. The Coordinator’s comments included comments from FAO and OIE, and the latter organization also provided us with technical comments independently. 
These agencies generally concurred with our findings, and we incorporated their technical comments in the report as appropriate. USAID briefly reviewed progress in improving global preparedness, citing, for example, reductions in outbreaks among poultry and humans in Vietnam and Thailand. The agency observed, however, that the practices employed in small-scale “backyard farms” continue to present a major challenge to efforts to control the spread of H5N1. USAID will therefore be paying particular attention to this challenge in the coming months. While acknowledging the information gaps that limit capacity for comparing country-level risks, HHS emphasized its support for targeting resources according to the Homeland Security Council’s country prioritization decisions. In this context, HHS stressed the importance of improved information sharing among countries, as called for under the revised International Health Regulations, and noted the particular importance of sharing influenza virus samples and surveillance data. In addition, HHS commented that limited human-to-human transmission of H5N1 could not be ruled out in some clusters of cases in Indonesia, and explained certain differences in the roles played by HHS, USDA, and USAID under the response and containment pillar of the U.S. National Strategy for Pandemic Influenza. In response, we clarified the information in the background section of this report on human-to-human transmission and our presentation on the roles played by HHS, USDA, and USAID in responding to poultry and human outbreaks. In its technical comments, HHS elaborated upon our concluding observation regarding the need for sustained effort to overcome challenges in improving global preparedness. We added a footnote to our concluding observations to summarize the HHS comments in this area. 
USDA stated that the report provides a comprehensive evaluation of pandemic influenza and global efforts needed to improve avian and pandemic influenza preparedness. USDA also stated that it found the report accurate in its description of USDA’s role and involvement in global efforts to improve preparedness. In oral comments, Treasury stated that it has been actively engaged in the U.S. government’s efforts to respond to avian influenza and increase readiness to address a potential influenza pandemic, both internationally and within the United States. To coordinate the department’s activities, Treasury created an informal avian influenza working group that includes staff from its domestic and internationally focused offices. Among other things, the working group ensures that Treasury is fully engaged in all Homeland Security Council-led initiatives against avian and pandemic influenza. Treasury also stated that, in coordination with U.S. executive directors at the various international financial institutions (including the World Bank), it has encouraged and supported these institutions in their efforts to develop adequate responses to the threat of an influenza pandemic. However, Treasury stated that its efforts in this area have been constrained by U.S. legislation that requires the United States to vote against multilateral development bank programs in cases where Burma might receive support. According to Treasury, this has occurred twice with respect to regionally focused Asian Development Bank projects. While these matters were largely outside the scope of our report, we modified the text to acknowledge Treasury efforts to encourage and support international financial institution efforts against avian and pandemic influenza. 
Treasury also stated that, building on experiences drawn from the 2003 severe acute respiratory syndrome outbreak, the international financial institutions (including the World Bank) have responded to the H5N1 epidemic by providing financing, and also by helping countries develop national strategies, providing relevant technical assistance and training, serving as focal points for donor and regional coordination, tracking and reporting on donor commitments, preparing impact analyses, and hosting international conferences. Treasury further noted that in addition to providing financing for individual countries, the multilateral development banks have provided financial and technical support to international and regional technical organizations working in this area, including WHO and FAO. We are sending copies of this report to the Secretaries of Agriculture, Defense, Health and Human Services, State, and the Treasury; the Administrator of the U.S. Agency for International Development; appropriate congressional committees; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions, please contact David Gootnick at (202) 512-3149 or gootnickd@gao.gov or Marcia Crosse at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IX. We provided relevant background information on the spread of the H5N1 virus, factors that may affect the comparative risk that this virus presents in different countries, methods that health systems traditionally employ to respond to influenza in animals and humans, and the overall strategy that the United States and its international partners have developed to respond to the threats presented by H5N1. 
To describe how H5N1 has spread internationally, we used country-specific data on cases among humans assembled by the United Nations World Health Organization (WHO), and on cases and outbreaks in humans and in wild and domestic birds assembled by the United Nations (UN) World Food Program. World Food Program officials told us their data on human cases were provided by WHO, while their data on cases in birds were provided by the World Organization for Animal Health (OIE) and the UN Food and Agriculture Organization (FAO). WHO, OIE, and FAO have cautioned that global surveillance is imperfect, and some human and animal cases and outbreaks may go unrecorded. However, these organizations work with a wide variety of global partners, including national governments, to identify and verify outbreaks of this disease. We determined that these data on human and animal outbreaks were sufficiently reliable for the purposes of this report, which were to convey a general sense of the manner in which the disease has spread across international boundaries and the extent to which it has infected humans. However, these data should not be relied upon to precisely identify countries where the disease has occurred or to indicate with absolute certainty the number of human cases that have occurred. To identify and describe factors that affect the level of risk that H5N1 presents in different countries and the methods that animal and health systems generally employ against influenza, we interviewed officials and consulted documents produced by avian and human disease experts in relevant U.S. government agencies, international organizations, academic institutions, and nongovernmental organizations. To describe the overall strategy that the United States and its international partners have developed to respond to the H5N1 epidemic, we interviewed and examined relevant documents from U.S. and UN agencies, including the U.S. 
National Strategy for Pandemic Influenza and strategy statements and progress reports produced by the UN System Influenza Coordinator and the World Bank. To examine the extent to which U.S. and international agencies have been able to assess the pandemic risk that H5N1 presents in individual countries and prioritize them for international assistance, we reviewed and analyzed assessments of environmental risk and preparedness. Specifically, we reviewed assessments prepared by the U.S. Agency for International Development (USAID), the Department of State, the UN, and the World Bank and spoke with cognizant officials at these agencies and organizations about how they were conducted. These assessments evaluated country-level pandemic risk deriving from environmental conditions, country preparedness for responding to avian and pandemic influenza, or both. We analyzed a sample of 17 country-specific avian influenza preparedness assessments compiled by USAID and the State Department to provide summary information on capacity in several regions. (See app. V for a detailed description of the scope and methodology for our analysis of sampled USAID and State Department assessments.) We also reviewed the U.S. Homeland Security Council Country Prioritization Matrix as of May 3, 2006, which designates country priority levels for U.S. actions to address the avian and pandemic influenza threat. We discussed this priority ranking with officials from the State Department and USAID. We requested a meeting with officials from the council, but the council declined, stating that we could obtain needed information from other agencies and departments. In addition, we reviewed analyses of environmental risk factors prepared by U.S. intelligence community analysts during 2006 and early 2007 and discussed these analyses with U.S. agency officials. We also reviewed assessments of risks in particular countries prepared by a U.S. intelligence agency. To determine the actions U.S. 
agencies and their international partners took to address these risks, we examined funding, planning, and reporting documents and spoke with cognizant officials. To determine the overall level of financial support that the donor community is providing for efforts to improve global avian and pandemic influenza preparedness, we examined World Bank and UN documents detailing donor pledges and commitments resulting from the international pledging conferences on avian and pandemic influenza, including funding levels by donor, by recipient, and by purpose. We also reviewed World Bank and UN documents describing recipient countries, regions, and organizations. To describe the international activities of the U.S. government, we reviewed the National Strategy for Pandemic Influenza and the National Strategy for Pandemic Influenza Implementation Plan. We reviewed pertinent planning, reporting, and funding documents for U.S. international avian influenza control and pandemic preparedness assistance programs. We also consulted cognizant officials from USAID and from the Departments of Agriculture (USDA), Health and Human Services (HHS), Defense (DOD), and State about their efforts. We reviewed the international action items tasked to these U.S. agencies and assessed by the Homeland Security Council in its 6-month status report issued on December 18, 2006. We independently compared the performance measures associated with each action item with the agency responses to it. Finally, we visited the WHO, OIE, and FAO headquarters in Geneva, Paris, and Rome, respectively. To assess the reliability of the pledges and commitments data that national governments and other donors submitted to the World Bank, we spoke with World Bank officials responsible for maintaining these data and reviewed supporting documentation. The pledges and commitments data are self-reported by individual donor countries in response to a standard request template. 
The World Bank staff responsible for this data collection provided countries with standard definitions of key terms, such as pledges, commitments, and in-kind and cash payments. However, because countries’ data reporting systems vary substantially, World Bank staff conduct ongoing discussions with donor countries to establish the correspondence between those systems and the World Bank terms. World Bank staff also stated that the pledges and commitments totals provided by countries may include funding not strictly related to pandemic influenza and may therefore be somewhat overstated. Therefore, based on our review, we use these data to identify general levels of pledges and commitments made by particular countries or organizations; they should not be relied upon to support precise comparisons of funding by donor or recipient. Overall, we concluded that the World Bank pledges and commitments data were sufficiently reliable for the purposes of this report. To obtain data on U.S. agency funding for international avian and pandemic influenza preparedness by agency and by the three pillars of the overall U.S. pandemic strategy, we requested separate submissions from each of the five U.S. agencies, showing planned, obligated, and expended funds by pillar. Two of the five agencies (USAID and USDA) maintained funding data by pillar prior to our requesting these data. Two others (DOD and the State Department) found it relatively easy to comply with our request, since all of their reported activities fell within the preparedness and communications pillar. However, providing this information was comparatively complex for HHS. The various units within that agency (for example, the Centers for Disease Control and Prevention and the National Institutes of Health) support a wide variety of relevant programs, many of which involve more than one pillar. In addition, HHS can utilize other sources of funding in addition to influenza-specific appropriations for many of these programs. 
To respond to our request, the HHS Office of Global Health Affairs collected data from relevant HHS units. The Director of the Office of Global Health Affairs reviewed the final HHS submission for accuracy before reporting back to GAO. The pillar-specific totals HHS was able to provide were for planned funds and for obligated funds. Thus, the funding information by agency that we provide is for these two categories of funding data and not for expenditures. We identified a number of limitations in the data that the agencies provided. First, the data are not from consistent periods. USDA and USAID provided information on planned funding levels and obligations through December 2006. HHS, DOD, and the State Department provided data through September 2006. In addition, DOD and the State Department received funding for international avian and pandemic influenza activities through appropriations in 2006 only, whereas USAID, HHS, and USDA received funding through both 2005 and 2006 appropriations. Second, the distribution of funds among the pillars is somewhat imprecise. When programs addressed more than one pillar, agency officials employed their professional judgment to decide which pillar was most significant. This limitation was most pronounced in the HHS data. While HHS decided how to allocate most of its funds, the agency did not specify a pillar for about $15 million of its planned funds. This total included about $5 million to expand staffing levels in key global, regional, and country-level facilities, including the WHO regional offices for Africa and the Western Pacific and surveillance and response facilities in Thailand and Egypt, and about $10 million for HHS headquarters management of its influenza-related initiatives. Third, the total planned and obligated amounts are also somewhat imprecise. Some of the agency funds come from programs that are not dedicated specifically to avian or pandemic influenza. 
In such cases, agency officials used professional judgment to decide what portion of the funds should be designated as supporting avian or pandemic influenza preparedness. Despite these limitations, we determined that these data were sufficiently reliable for the purpose of this report, which was to provide information on general levels of agency planned and obligated funding by pillar. We did, however, round the funding information that the agencies provided to the nearest million dollars. We conducted our work from January 2006 through March 2007 in accordance with generally accepted government auditing standards.

The following are GAO's comments on the Department of Health and Human Services letter dated June 11, 2007. 1. HHS said that it is inaccurate to state, without qualification, that H5N1 has never circulated among humans; limited human-to-human transmission cannot be ruled out in a few clusters of cases in Indonesia. We agreed with the need to qualify this statement and revised the background section of this report to acknowledge that limited human-to-human transmission cannot be ruled out in these cases.

This appendix presents the results of our analysis of avian influenza preparedness information submitted by USAID and State Department field staff from 17 of the more than 100 countries surveyed by USAID and State Department headquarters during late 2005. These characterizations reflect our analysis of information gathered through assessment efforts at that time; for some countries, the assessments may not reflect current capabilities. As figure 12 shows, the field staff charged with providing information identified widespread shortcomings in national preparedness. However, the figure also shows that field staff often could not obtain sufficient information to provide clear or definitive answers on every topic. 
The preparedness and communications section of the figure suggests that most of the countries in our sample were aware of the need to position themselves for effective action: 16 of the 17 were reported to have made at least limited progress in preparing a national plan for responding to the threats presented by avian influenza, and 14 of the 15 countries for which data were available were reported to have established national task forces to address these threats. However, the remainder of the figure suggests that there were, at the time of the assessments, widespread weaknesses in the elements of preparedness. For example, only 9 of the 17 countries were reported to have made at least limited efforts to educate the public about avian influenza. Only 4 of the 12 countries for which data were available were reported to have made at least limited progress toward preparing stockpiles of both antiviral drugs and PPE kits that could be used by those responding to poultry or human outbreaks. Most of the countries were found to be conducting at least limited surveillance for avian influenza. However, many countries were found to have gaps in their capacity to carry out key outbreak response activities. For example, only 4 of the 15 countries for which data were available were reported to have plans for compensating farmers in the event that culling became necessary. The USAID and State Department officials who provided this information reported shortcomings in each of the 17 countries we reviewed. The officials identified multiple shortcomings in Cambodia, Indonesia, and Vietnam, where H5N1 is well established. In addition, the figure illustrates why there is particular concern about weak capacity in Africa. USAID and State Department officials recorded negative responses in most categories for 2 of the 3 African countries in the figure (Djibouti and Uganda). 
Additionally, officials recorded limited or negative responses for 11 of 15 categories for Nigeria—the remaining African country in our analysis. The figure also demonstrates the data-gathering and analysis difficulties that field and headquarters staff experienced in completing this exercise. The information provided by field staff was insufficient to allow us to arrive at definitive entries for about 15 percent (39 of 255) of the cells in the figure. Field staff had particular difficulty in providing clear information on response and containment measures, such as stockpile distribution and culling plans and quarantine capacity. Staff in some countries (for example, Vietnam) were able to provide comparatively clear information on all or nearly all issues, while others (for example, India) were unable to provide sufficient information on several matters. The study population for our analysis included rapid country avian influenza preparedness assessment reports prepared by USAID and State Department overseas missions from October to November 2005. USAID maintains country-specific missions in 80 developing countries and regional offices in 6 such countries, and these missions provided USAID headquarters with information on more than 100 countries. The State Department maintains diplomatic missions in about 180 countries and territories. From the population of USAID missions, we drew a nonprobability sample of 17 countries. Of these countries, 14 had reports from USAID and the State Department, 3 had USAID reports only, and 1 had a State Department report only. State Department assessments were missing from the following countries: India, Pakistan, and Indonesia. USAID did not perform a country assessment on Thailand. To select our sample, we took a variety of factors into account. To ensure geographic diversity, we included countries from four regions: Asia, Africa, Eurasia and the Near East, and the Americas. 
Based on influenza experts' opinions and congressional interest, we chose to oversample Asian countries and to exclude North America and Europe. We sought to include countries in a variety of situations with regard to the presence of H5N1 in animals or humans, concentrations of poultry and humans living in proximity to each other, exposure to migratory patterns that could allow wild birds to transmit H5N1 into the country, political stability, and strength of the public health infrastructure. We did not include China in our table of countries because the relevant reports were classified. USAID and the State Department conducted their assessments by sending out sets of questions to personnel at their respective missions. The questions asked in the two instruments differed in their wording, and as a consequence, our first step in developing our analysis was to identify a set of broader dimensions, or indicators, encompassing data from both sets of assessments. Through a review of these two sets of questions, as well as survey questions recently developed by WHO and the World Bank to assess country preparedness, we identified a set of 15 qualitative indicators covering a wide array of issues within the topic areas of preparedness and communications, surveillance and detection, and response and containment. These indicators then became the dimensions along which we analyzed the data contained in the USAID and State Department assessments. We reviewed USAID rapid country assessments and State Department cables assessing the level of country preparedness for avian influenza. The analysis of the 17 USAID and State Department assessments was performed by two GAO analysts, who reviewed the reports separately and recorded their answers, with justifications, in workpapers. 
To enhance inter-rater reliability in our analysis of the USAID and State Department assessments, we developed a code book to reflect the specific characteristics needed for a country to be classified in one of three categories for each indicator: yes, no, or limited. Subsequently, the two analysts compared their answers and justifications, reconciled their analyses when they diverged, and modified the code book as needed to ensure consistent coding across indicators and countries. A methodologist performed a final check on the consistency and accuracy of the analysis. The USAID and State Department instruments had a number of limitations. First, the information provided in these assessments is limited by the rapidly evolving dynamic of the H5N1 virus and ongoing efforts to improve capacity. As a consequence, the information provided in them is already dated and should be understood as a snapshot of the countries assessed at a particular point in time (fall 2005), rather than directly reflecting the current status of country capacities. Second, the purpose of these assessments was to rapidly assess country capacities in this evolving environment, and as a result, the instruments developed were limited in the design of the questions asked, which were restricted primarily to open-ended questions that could be interpreted and answered in multiple ways. Third, the instruments were limited in the manner in which they were implemented. In particular, the data reported reflect the individualized data-gathering and assessment efforts of the point of contact at USAID or the State Department rather than a standardized approach to data gathering and assessment. Fourth, because the questions were open-ended, there is inconsistency in the depth and coverage of responses, even where respondents addressed the indicators we identified for analysis. Furthermore, in some cases, the response to a question was simply "yes" or "no" without any details. 
When this occurred, we recorded the answer the respondent gave. Fifth, some indicators had only one source of information (they were addressed in one report but left blank in another), and we could not compare them for consistency. Sixth, in some instances, respondents did not answer questions sufficiently for us to make determinations or left them blank; in those cases, we could not determine the level of these indicators from the available data and rated them as missing. Despite these limitations, we determined that the data contained in these assessments were sufficient for the purpose of our report, which was to provide information broadly demonstrating the limited capacities of countries at a particular point in time, with implications for the challenges posed in subsequent periods.

According to data submitted to the World Bank by the United States and other donors, Asia-Pacific regional initiatives have received the largest share of regionally focused funding from international donors, including the United States (see table 3). Approximately 67 percent of committed funds have gone to programs in this region. For example, donors reported providing the Association of Southeast Asian Nations about $50 million in committed funds, including about $47 million from Japan to procure antiviral drugs, PPE kits, and influenza test kits. Examples of support in other regions include HHS's provision of $3.3 million in committed funds to support the Gorgas Institute, a laboratory network in Panama, and the European Commission's provision of about $28 million to the African Union. According to data submitted to the World Bank, WHO and FAO have received the greatest shares of overall funding committed to global organizations (see fig. 13). Of the $240 million in reported overall donor commitments for global organizations, the WHO and FAO shares constituted about 35 percent and 27 percent, respectively. U.S. 
agencies are supporting WHO and FAO with funds, staff, equipment, and technical assistance to improve these organizations' capacity to support countries. For example, HHS has provided funding to all six WHO regional offices. Some of this assistance is directed at improving collaboration on human and animal components of the response. OIE, the UN Children's Fund, and the UN System Influenza Coordinator (among others) share the remaining $91 million, with the Children's Fund accounting for more than half of this amount—about $49 million from Japan, provided primarily to enhance communications on avian and pandemic influenza risks. In response to our request, HHS, USAID, DOD, USDA, and the State Department reported having obligated about 64 percent of their planned funding for international avian and pandemic influenza-related assistance. However, the data are not from consistent time periods. HHS, DOD, and State Department data represent obligations through the end of fiscal year 2006 (that is, through the end of September 2006). USAID and USDA provided data on their obligations through December 2006. (See table 4.) Figure 14 shows USAID's distribution of PPE kits by country as of the end of fiscal year 2006. As the figure shows, Indonesia accounted for the majority of these kits. According to a USAID official, approximately 193,000 PPE kits were distributed for immediate use in surveillance and response activities in more than 60 countries. Additionally, USAID had begun to create long-term stockpiles of PPE, laboratory, and decontamination kits in 20 countries.

Key contributors to this report were Celia Thomas, Assistant Director; Thomas Conahan, Assistant Director; Michael McAtee; Robert Copeland; R. Gifford Howland; Syeda Uddin; David Fox; Jasleen Modi; David Dornisch; Etana Finkler; Debbie Chung; Monica Brym; and Jena Sinkfield.

Financial Market Preparedness: Significant Progress Has Been Made, but Pandemic Planning and Other Challenges Remain. GAO-07-399. 
Washington, D.C.: March 29, 2007.

Influenza Pandemic: DOD Has Taken Important Actions to Prepare, but Accountability, Funding, and Communications Need to Be Clearer and Focused Departmentwide. GAO-06-1042. Washington, D.C.: September 21, 2006.

Influenza Vaccine: Shortages in 2004–05 Season Underscore Need for Better Preparation. GAO-05-984. Washington, D.C.: September 30, 2005.

Influenza Pandemic: Challenges in Preparedness and Response. GAO-05-863T. Washington, D.C.: June 30, 2005.

Influenza Pandemic: Challenges Remain in Preparedness. GAO-05-760T. Washington, D.C.: May 26, 2005.

Flu Vaccine: Recent Supply Shortages Underscore Ongoing Challenges. GAO-05-177T. Washington, D.C.: November 18, 2004.

Infectious Disease Preparedness: Federal Challenges in Responding to Influenza Outbreaks. GAO-04-1100T. Washington, D.C.: September 28, 2004.

Public Health Preparedness: Response Capacity Improving, but Much Remains to Be Accomplished. GAO-04-458T. Washington, D.C.: February 12, 2004.

Global Health: Challenges in Improving Infectious Disease Surveillance Systems. GAO-01-722. Washington, D.C.: August 31, 2001.

Flu Vaccine: Steps Are Needed to Better Prepare for Possible Future Shortages. GAO-01-786T. Washington, D.C.: May 30, 2001.

Flu Vaccine: Supply Problems Heighten Need to Ensure Access for High-Risk People. GAO-01-624. Washington, D.C.: May 15, 2001.

Influenza Pandemic: Plan Needed for Federal and State Response. GAO-01-4. Washington, D.C.: October 27, 2000.
Since 2003, a global epidemic of avian influenza has raised concern about the risk of an influenza pandemic among humans, which could cause millions of deaths. The United States and its international partners have begun implementing a strategy to forestall (prevent or delay) a pandemic and prepare to cope should one occur. Disease experts generally agree that the risk of a pandemic strain emerging from avian influenza in a given country varies with (1) environmental factors, such as disease presence and certain high-risk farming practices, and (2) preparedness factors, such as a country's capacity to control outbreaks. This report describes (1) U.S. and international efforts to assess pandemic risk by country and prioritize countries for assistance and (2) steps that the United States and international partners have taken to improve the ability to forestall a pandemic. To address these objectives, we interviewed officials and analyzed data from U.S. agencies, international organizations, and nongovernmental experts. The U.S. and international agencies whose efforts we describe reviewed a draft of this report. In general, they concurred with our findings. Several provided technical comments, which we incorporated as appropriate. Assessments by U.S. agencies and international organizations have identified widespread risks of the emergence of pandemic influenza and the United States has identified priority countries for assistance, but information gaps limit the capacity for comprehensive comparisons of risk levels by country. Several assessments we examined, which have considered environmental or preparedness-related risks or both, illustrate these gaps. For example, a U.S. 
Agency for International Development (USAID) assessment categorized countries according to the level of environmental risk, considering factors such as disease presence and the likelihood of transmission from nearby countries; however, factors such as limited understanding of the role of poultry trade or wild birds constrain the reliability of the conclusions. Further, USAID, the State Department, and the United Nations have administered questionnaires to assess country preparedness, and World Bank-led missions have gathered detailed information in some countries, but these efforts do not provide a basis for making comprehensive global comparisons. Efforts to get better information are under way but will take time. The U.S. Homeland Security Council has designated priority countries for assistance, and agencies have further identified several countries as meriting the most extensive efforts, but officials acknowledge that these designations are based on limited information. The United States has played a prominent role in global efforts to improve avian and pandemic influenza preparedness, committing the greatest share of funds and creating a framework for managing its efforts. Through 2006, the United States had committed about $377 million, or 27 percent of the $1.4 billion committed by all donors. USAID and the Department of Health and Human Services have provided most of these funds for a range of efforts, including stockpiling protective equipment and training foreign health professionals in outbreak response. The State Department coordinates international efforts, and the Homeland Security Council monitors progress. More than a third of U.S. and overall donor commitments have gone to individual countries, with more than 70 percent of those going to U.S. priority countries. The U.S. National Strategy for Pandemic Influenza Implementation Plan provides a framework for U.S. 
international efforts, assigning agencies specific action items and specifying performance measures and time frames for completion. The Homeland Security Council reported in December 2006 that all international actions due to be completed by November had been completed, and provided evidence of timely completion for the majority of those items.
The federal-aid highway program provides nearly $30 billion annually to the states, most of which are formula grant funds that FHWA distributes through annual apportionments according to statutory formulas; once apportioned, these funds are generally available to each state for eligible projects. The responsibility for choosing which projects to fund generally rests with state departments of transportation and local planning organizations. The states have considerable discretion in selecting specific highway projects and in determining how to allocate available federal funds among the various projects they have selected. For example, section 145 of title 23 of the United States Code describes the federal-aid highway program as a federally assisted state program and provides that the authorization of the appropriation of federal funds or their availability for expenditure, “shall in no way infringe on the sovereign rights of the States to determine which projects shall be federally financed.” A major highway or bridge construction or repair project usually has four stages: (1) planning, (2) environmental review, (3) design and property acquisition, and (4) construction. While FHWA approves state transportation plans, environmental impact assessments, and the acquisition of property for highway projects, its role in approving the design and construction of projects varies. The state’s activities and FHWA’s corresponding approval actions are shown in figure 1. Given the size and significance of the federal-aid highway program’s funding and projects, a key challenge for this program is overseeing states’ expenditure of public funds to ensure that state projects are well managed and successfully financed. Our work—as well as work by the DOT Inspector General and by state audit and evaluation agencies—has documented cost growth on numerous major highway and bridge projects. Let me provide one example. 
In January 2001, Virginia’s Joint Legislative Audit and Review Commission found that final project costs on Virginia Department of Transportation projects were well above their cost estimates and estimated that the state’s 6-year, $9 billion transportation development plan understated the costs of projects by up to $3.5 billion. The commission attributed these problems to several factors, including, among other things, not adjusting estimates for inflation and expanding the scope of projects. Our work has identified weaknesses in FHWA’s oversight of projects, especially in controlling costs. In 1997, we reported that cost containment was not an explicit statutory or regulatory goal of FHWA’s oversight. While FHWA influenced the cost-effectiveness of projects when it reviewed and approved plans for their design and construction, we found it had done little to ensure that cost containment was an integral part of the states’ project management. According to FHWA officials, controlling costs was not a goal of their oversight, and FHWA had no mandate in law to encourage or require practices to contain the costs of major highway projects. More recently, an FHWA task force concluded that changes in the agency’s oversight role since 1991—when the states assumed greater responsibility for overseeing federal-aid projects—had resulted in conflicting interpretations of the agency’s role in overseeing projects, and that some of the field offices were taking a “hands off” approach to certain projects. In June 2001, FHWA issued a policy memorandum, in part to clarify that FHWA is ultimately accountable for all projects financed with federal funds. 
As recently as last month, a memorandum posted on FHWA's Web site discussed the laws establishing FHWA and the federal-aid highway program, along with congressional and public expectations that FHWA "ensure the validity of project cost estimates and schedules." The memorandum concluded, "These expectations may not be in full agreement with the role that has been established by these laws." In addition, we have found that FHWA's oversight process has not promoted reliable cost estimates. While there are many reasons for cost increases, we have found, on projects we have reviewed, that initial cost estimates were not reliable predictors of the total costs and financing needs of projects. Rather, these estimates were generally developed for the environmental review—whose purpose is to compare project alternatives, not to develop reliable cost estimates. In addition, FHWA had no standard requirements for preparing cost estimates, and each state used its own methods and included different types of costs in its estimates. We have also found that costs exceeded initial estimates on projects we have reviewed because (1) initial estimates were modified to reflect more detailed plans and specifications as projects were designed and (2) the projects' costs were affected by, among other things, inflation and changes in scope to accommodate economic development over time. We also found that highway projects take a long time to complete, and that the amount of time spent on them is of concern to the Congress, the federal government, and the states. Completing a major, new, federally funded highway project that has significant environmental impacts typically takes from 9 to 19 years and can entail as many as 200 major steps requiring actions, approvals, or input from a number of federal, state, and other stakeholders. 
Finally, we have noted that in many instances, states construct a major project as a series of smaller projects, and FHWA approves the estimated cost of each smaller project when it is ready for construction, rather than agreeing to the total cost of the major project at the outset. In some instances, by the time FHWA considers whether to approve the cost of a major project, a public investment decision may, in effect, already have been made because substantial funds have been spent on designing the project and acquiring property, and many of the increases in the project’s estimated costs have already occurred. Since 1998, FHWA has taken a number of steps to improve the management and oversight of major projects in order to better promote cost containment. For example, FHWA implemented TEA-21’s requirement that states develop an annual finance plan for any highway or bridge project estimated to cost $1 billion or more and established a major projects team that currently tracks and reports each month on 15 such projects. FHWA has also moved to incorporate greater risk-based management into its oversight in order to identify areas of weakness within state transportation programs, set priorities for improvement, and work with the states to meet those priorities. The administration’s May 2001 reauthorization measure contains additional proposed actions. It would introduce more structured FHWA oversight requirements, including mandatory annual reviews of state transportation agencies’ financial management and “project delivery” systems, as well as periodic reviews of states’ practices for estimating costs, awarding contracts, and reducing project costs. To improve the quality and reliability of cost estimates, it would introduce minimum federal standards for states to use in estimating project costs. The measure would also strengthen reporting requirements and take new actions to reduce fraud. 
Many elements of the administration's proposal are responsive to problems and options we have described in past reports and testimony. Should the Congress determine that enhancing federal oversight of major highway and bridge projects is needed and appropriate, options we have identified in prior work remain available to build on the administration's proposal during the reauthorization process. However, adopting any of these options would require balancing the states' right to select projects and desire for flexibility and more autonomy with the federal government's interest in ensuring that billions of federal dollars are spent efficiently and effectively. Furthermore, the additional costs of each of these options would need to be weighed against its potential benefits. Options include the following:

- Have FHWA develop and maintain a management information system on the cost performance of selected major highway and bridge projects, including changes in estimated costs over time and the reasons for such changes. Such information could help define the scope of the problem with major projects and provide insights needed to fashion appropriate solutions.

- Clarify uncertainties concerning FHWA's role and authority. As I mentioned earlier, the federal-aid highway program is by law a federally assisted state program, and FHWA continues to question its authority to encourage or require practices to contain the costs of major highway and bridge projects. Should uncertainties about FHWA's role and authority continue, another option would be to resolve the uncertainties through reauthorization language.

- Have the states track the progress of projects against their initial baseline cost estimates. The Office of Management and Budget requires federal agencies, for acquisitions of major capital assets, to prepare baseline cost and schedule estimates and to track and report the acquisitions' cost performance. These requirements apply to programs managed by and acquisitions made by federal agencies, but they do not apply to the federal-aid highway program, a federally assisted state program. Expanding the federal government's practice to the federally assisted highway program could improve the management of major projects by providing managers with information for identifying and addressing problems early.

- Establish performance goals and strategies for containing costs as projects move through their design and construction phases. Such performance goals could provide financial or other incentives to the states for meeting agreed-upon goals. Performance provisions such as these have been established in other federally assisted grant programs and have also been proposed for use in the federal-aid highway program. Requiring or encouraging the use of goals and strategies could also improve accountability and make cost containment an integral part of how states manage projects over time.

- Consider methods for improving the time it takes to plan and construct major federal-aid highway projects—a process that we reported can take up to 19 years to complete. Major stakeholders suggested several approaches to improving the timeliness of these projects, including (1) improving project management, (2) delegating environmental review and permitting authority, and (3) improving agency staffing and skills. We have recommended that FHWA consider the benefits of the most promising approaches and act to foster the adoption of the most cost-effective and feasible approaches.

- Reexamine the approval process for major highway and bridge projects. This option, which would require federal approval of a major project at the outset, including its cost estimate and finance plan, would be the most far-reaching and the most difficult option to implement. Potential models for such a process include the full funding grant agreement used by FTA for the New Starts program, and, as I testified last year, a DOT task force's December 2000 recommendation calling for the establishment of a separate funding category for initial design work and a new decision point for advancing highway projects.

Over the last 25 years, more than 1.2 million people have died as a result of traffic crashes in the United States—more than 42,000 in 2002. Since 1982, about 40 percent of traffic deaths were from alcohol-related crashes. In addition, traffic crashes are the leading cause of death for people aged 4 through 33. As figure 2 shows, the total number of traffic fatalities has not significantly decreased in recent years. To improve safety on the nation's highways, NHTSA administers a number of programs, including the core federally funded highway safety program, Section 402 State and Community Grants, and several other highway safety programs that were authorized in 1998 by TEA-21. The Section 402 program, established in 1966, makes grants available for each state, based on a population and road mileage formula, to carry out traffic safety programs designed to influence drivers' behavior, commonly called behavioral safety programs. The TEA-21 programs include seven incentive programs, which are designed to reduce traffic deaths and injuries by promoting seatbelt use and reducing alcohol-impaired driving, and two transfer programs, which penalize states that have not complied with federal requirements for enacting repeat-offender and open container laws to limit alcohol-impaired driving. Under these transfer programs, noncompliant states are required to shift certain funds from federal-aid highway programs to projects that concern or improve highway safety. 
In addition, subsequent to TEA-21, the Congress required that, starting later this year, states that do not meet federal requirements for establishing 0.08 blood alcohol content as the state’s legal limit for drunk driving will have a percentage of their federal-aid highway funds withheld. During fiscal years 1998 through 2002, over $2 billion was provided to the states for highway safety programs.

NHTSA, which oversees the states’ highway safety programs, adopted a performance-based approach to oversight in 1998. Under this approach, the states and the federal government are to work together to make the nation’s highways safer. Each state sets its own safety performance goals and develops an annual safety plan that describes projects designed to achieve the goals. NHTSA’s 10 regional offices review the states’ annual plans and provide technical assistance, advice, and comments. NHTSA has two tools available to strengthen its monitoring and oversight of the state programs: (1) improvement plans, which states that are not making progress toward their highway safety goals are to develop and which identify programs and activities that a state and NHTSA regional office will undertake to help the state meet its goals; and (2) management reviews, which generally involve sending a team to a state to review its highway safety operations, examine its projects, and determine that it is using funds in accordance with requirements. Among the key challenges in this area are (1) evaluating how well the federally funded state highway safety programs are meeting their goals and (2) determining how well the states are spending and controlling their federal highway safety funds. In April 2003, we issued a report on NHTSA’s oversight of state highway safety programs in which we identified weaknesses in NHTSA’s use of improvement plans and management reviews.
Evaluating how well state highway safety programs are meeting their goals is difficult because, under NHTSA’s performance-based oversight approach, NHTSA’s guidance does not establish a consistent means of measuring progress. Although the guidance states that NHTSA can require the development and implementation of an improvement plan when a state fails to make progress toward its highway safety performance goals, the guidance does not establish specific criteria for evaluating progress. Rather, the guidance simply states that an improvement plan should be developed when a state is making little or no progress toward its highway safety goals. As a result, NHTSA’s regional offices have made limited and inconsistent use of improvement plans, and some states do not have improvement plans, even though their alcohol-related fatality rates have increased or their seat-belt usage rates have declined. Without a consistent means of measuring progress, NHTSA and state officials lack common expectations about how to define progress, how long states should have to demonstrate progress, how to set and measure highway safety goals, and when improvement plans should be used to help states meet their highway safety goals. To determine how well the states are spending and controlling their federal highway safety funds, NHTSA’s regional offices can conduct management reviews of state highway safety programs. Management reviews completed in 2001 and 2002 identified weaknesses in states’ highway safety programs that needed correction; however, we found that the regional offices were inconsistent in conducting the reviews because NHTSA’s guidance does not specify when the reviews should be conducted. The identified weaknesses included problems with monitoring subgrantees, poor coordination of programs, financial control problems, and large unexpended fund balances. Such weaknesses, if not addressed, could lead to inefficient or unauthorized uses of federal funds. 
According to NHTSA officials, management reviews also foster productive relationships with the states that allow the agency’s regional offices to work with the states to correct vulnerabilities. The regional offices’ ongoing involvement with the states also creates opportunities for sharing and encouraging the implementation of best practices, which may then lead to more effective safety programs and projects. To encourage more consistent use of improvement plans and management reviews, we made recommendations to improve the guidance to NHTSA’s regional offices on when it is appropriate to use these oversight tools. In commenting on a draft of the report, NHTSA officials agreed with our recommendations and said they had begun taking action to develop criteria and guidance for using the tools.

The administration’s recent proposal to reauthorize TEA-21 would make some changes to the safety programs that could also have some impact on program efficiencies. For example, the proposal would somewhat simplify the current grant structure for NHTSA’s highway safety programs. The Section 402 program would have four components: core program formula grants, safety belt performance grants, general performance grants, and impaired driving discretionary grants. The safety belt performance grants would provide funds to states that had passed primary safety belt laws or achieved 90 percent safety belt usage. In addition, the general performance grant would provide funds based on overall reductions in (1) motor vehicle fatalities, (2) alcohol-related fatalities, and (3) motorcycle, bicycle, and pedestrian fatalities. Finally, the Section 402 program would have an impaired driving discretionary grant component, which would target funds to up to 10 states that had the highest impaired driving fatality numbers or fatality rates.
In addition to changing the Section 402 program, the proposal would expand grants for highway safety information systems and create new emergency medical service grants. The proposal would leave intact existing penalties related to open container, repeat offender, and 0.08 blood alcohol content laws and would establish a new transfer penalty for states that fail to pass a primary safety belt law and have safety belt use rates lower than 90 percent by 2005. The proposal would also give the states greater flexibility in using their highway safety funds. A state could move up to half its highway safety construction funds from the Highway Safety Improvement Program into the core Section 402 program. A state would also be able to use 100 percent of its safety belt performance grants for construction purposes if it had a primary safety belt law, or 50 percent if the grant was based on high safety belt use. States could also use up to 50 percent of their general performance grants for safety construction purposes.

The New Starts transit program identifies and funds fixed guideway projects, including rail, bus rapid transit, trolley, and ferry projects. The New Starts program provides much of the federal government’s investment in urban mass transportation. TEA-21 and subsequent amendments authorized approximately $10 billion for New Starts projects for fiscal years 1998 through 2003. The administration’s proposal for the surface transportation reauthorization, known as the Safe, Accountable, Flexible, and Efficient Transportation Equity Act of 2003 (SAFETEA), requests that about $9.5 billion be made available for the New Starts program for fiscal years 2004 through 2009. Unlike the federal highway program and certain transit programs, under which funds are automatically distributed to states on the basis of formulas, the New Starts program requires local transit agencies to compete for New Starts project funds on the basis of specific financial and project justification criteria.
To obtain New Starts funds, a project must progress through a regional review of alternatives, develop preliminary engineering plans, and meet FTA’s approval for final design. FTA assesses the technical merits of a project proposal and its finance plan and then notifies the Congress that it intends to commit New Starts funding to certain projects through full funding grant agreements. The agreement establishes the terms and conditions for federal participation in the project, including the maximum amount of federal funds—no more than 80 percent of the estimated net cost of the project. While the grant agreement commits the federal government to providing the federal contributions to the project over a number of years, these contributions are subject to the annual appropriations process. State or local sources provide the remaining funding. The grantee is responsible for all costs exceeding the federal share, unless the agreement is amended. To meet the nation’s transportation needs, many states and localities are planning or building large New Starts projects to replace aging infrastructure or build new capacity. They are often costly and require large commitments of public resources, which may take several years to obtain from federal, state, and local sources. The projects can also be technically challenging to construct and require their sponsors to resolve a wide range of social, environmental, land-use, and economic issues before and during construction. It is critical that federal and other transportation officials meet two particular challenges that stem from the costly and lengthy federal funding commitment associated with New Starts projects. First, they must have a sound basis for evaluating and selecting projects. 
Because many transit projects compete for limited federal transit dollars—there are currently 52 projects in the New Starts “pipeline”—and FTA awards relatively few full funding grant agreements each year, it is crucial that local governments choose the most promising projects as candidates for New Starts funds and that FTA uses a process that effectively selects those projects that most clearly meet the program’s goals. Second, FTA, like FHWA, has the challenge of overseeing the planning, development, and construction of selected projects to ensure they remain on schedule and within budget, and deliver their expected performance. In the early 1990s, we designated the transit grants management oversight program as high risk because it was vulnerable to fraud, waste, abuse, and mismanagement. While we have removed it from the high-risk designation because of improvements FTA has made to this program, we have found that major transit projects continue to experience cost and schedule problems. For example, in August 1999, we reported that 6 of the 14 transit projects with full funding grant agreements had experienced cost increases, and 3 of those projects had experienced cost increases that were more than 25 percent over the estimates approved by FTA in grant agreements. The key reasons for the increases included (1) higher than anticipated contract costs, (2) schedule delays, and (3) project scope changes and system enhancements. A recent testimony by the Department of Transportation’s Inspector General indicates that major transit projects continue to experience significant problems, including cost increases, financing problems, schedule delays, and technical or construction difficulties. FTA has developed strategies to address the twin challenges of selecting the right projects and monitoring their implementation costs, schedule, and performance.
First, in response to direction in TEA-21, FTA developed a systematic process for evaluating and rating potential New Starts projects competing for federal funding. Under this process, FTA assigns individual ratings for a variety of financial and project justification criteria and then assigns an overall rating of highly recommended, recommended, not recommended, or not rated. These criteria reflect a broad range of benefits and effects of the proposed projects, including capital and operating finance plans, mobility improvements, environmental benefits, operating efficiencies, cost-effectiveness, land use, and other factors. According to FTA’s New Starts regulations, a project must have an overall rating of at least “recommended” to receive a grant agreement. FTA also considers a number of other “readiness” factors before proposing funding for a project. For example, FTA proposes funding only for projects that are expected to enter the final design phase and be ready for grant agreements within the next fiscal year. Figure 3 illustrates the New Starts evaluation and ratings process. While FTA has made substantial progress in establishing a systematic process for evaluating and rating potential projects, our work has raised some concerns about the process. For example, to assist FTA in prioritizing projects to ensure that the relatively few full funding grant agreements go to the most important projects, we recommended in March 2000 that FTA further prioritize the projects that it rates as highly recommended or recommended and ready for New Starts funds. FTA has not implemented this recommendation. We believe that this recommendation is still valid because the funding requested for the many projects that are expected to compete for grant agreements over the next several years is likely to exceed the available federal dollars. 
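To make the decision rule concrete, the rating logic described above can be sketched in a few lines of code. This is an illustrative sketch only, not FTA's actual system; the function name, the numeric encoding of the rating categories, and the readiness flag are assumptions introduced here.

```python
# Illustrative sketch (assumed encoding; not FTA software) of the New Starts
# overall-rating rule: per FTA's regulations, a project must rate at least
# "recommended" to receive a full funding grant agreement, and FTA proposes
# funding only for projects expected to be ready for an agreement within the
# next fiscal year.

RATING_ORDER = {
    "not rated": 0,
    "not recommended": 1,
    "recommended": 2,
    "highly recommended": 3,
}

def eligible_for_grant_agreement(overall_rating: str,
                                 ready_next_fiscal_year: bool) -> bool:
    """Hypothetical helper: True only if the project clears both hurdles."""
    rated_high_enough = RATING_ORDER[overall_rating] >= RATING_ORDER["recommended"]
    return rated_high_enough and ready_next_fiscal_year

print(eligible_for_grant_agreement("highly recommended", True))   # True
print(eligible_for_grant_agreement("not rated", True))            # False
```

Under this encoding, the fiscal year 2004 decision discussed below—proposing a grant agreement for a project rated "not rated"—would fail the check, which is the inconsistency GAO flagged.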
A further concern about the ratings process stems from FTA’s decision during the fiscal year 2004 cycle to propose a project for a full funding grant agreement that had been assigned an overall project rating of “not rated,” even though FTA’s regulations require that projects have at least a “recommended” rating to receive a grant agreement. Finally, we found that FTA needs to provide clearer information and additional guidance about certain changes it made to the evaluation and ratings process for the fiscal year 2004 cycle. In work that addressed the challenge of overseeing ongoing projects once they are selected to receive a full funding grant agreement, we reported in March and September 2000 that FTA had improved the quality of the transit grants management oversight program through strategies that included upgrading its guidance and training of staff and grantees, developing standardized oversight procedures, and employing contractor staff to strengthen its oversight of grantees. FTA also expanded its oversight efforts to include a formal and rigorous assessment of a grantee’s financial capacity to build and operate a new project and of the financial impact of that project on the existing transit system. These assessments, performed by independent accounting firms, are completed before FTA commits funds for construction and are updated as needed until projects are completed. For projects that already have grant agreements, FTA focuses on the grantee’s ability to finish the project on time and within the budget established by the grant agreement. The administration’s fiscal year 2004 budget proposal contains three New Starts initiatives—reducing the maximum federal statutory share to 50 percent, allowing non-fixed-guideway projects to be funded through New Starts, and replacing the “exempt” classification with a streamlined ratings process for projects requesting less than $75 million in New Starts funding. 
These proposed initiatives have advantages and disadvantages, with implications for the cost-effectiveness and performance of proposed projects. First, the reduced federal funding would require local communities to increase their funding share, creating more incentive for them to propose the most cost-effective projects; however, localities might have difficulties generating the increased funding share, and this initiative could result in funding inequities for transit projects when compared with highway projects. Second, allowing non-fixed-guideway projects to be funded under New Starts would give local communities more flexibility in choosing among transit modes and might promote the use of bus rapid transit, whose costs compare favorably with those of light rail systems; however, this initiative would change the original fixed guideway emphasis of New Starts, which some project sponsors we interviewed believe might disadvantage traditional New Starts projects. Finally, replacing the “exempt” classification with a streamlined rating process for all projects requesting less than $75 million might promote greater performance-oriented evaluation since all projects would receive a rating. However, this initiative might reduce the number of smaller communities that would participate in the New Starts program.

The Congress established the Essential Air Service (EAS) program as part of the Airline Deregulation Act of 1978. The act guaranteed that communities served by air carriers before deregulation would continue to receive a certain level of scheduled air service. Special provisions guaranteed service to Alaskan communities. In general, the act guaranteed continued service by authorizing DOT to require carriers to continue providing service at these communities. If an air carrier could not continue that service without incurring a loss, DOT could then use EAS funds to award that carrier a subsidy.
Subsidies are to cover the difference between a carrier’s projected revenues and expenses and to provide a minimum amount of profit. Under the Airline Deregulation Act, the EAS program was intended to sunset, or end, after 10 years. In 1987, the Congress extended the program for another 10 years, and in 1998, it eliminated the sunset provision, thereby permanently authorizing EAS. To be eligible for subsidized service, a community must meet three general requirements: it must (1) have received scheduled commercial passenger service as of October 1978, (2) be no closer than 70 highway miles to a medium- or large-hub airport, and (3) require a subsidy of less than $200 per person (unless the community is more than 210 highway miles from the nearest medium- or large-hub airport, in which case no average per-passenger dollar limit applies).

Funding for the EAS program comes from a combination of permanent and annual appropriations. Part of its funding comes from the Federal Aviation Reauthorization Act of 1996 (P.L. 104-264), which authorized the collection of user fees for services provided by the Federal Aviation Administration (FAA) to aircraft that neither take off nor land in the United States, commonly known as overflight fees. The act also permanently appropriated the first $50 million of such fees for EAS and safety projects at rural airports. In fiscal year 2003, total EAS program appropriations were $113 million.

As the airline industry has evolved since the industry was deregulated in 1978, the EAS program has faced increasing challenges to remain viable. Since fiscal year 1995, the program’s costs have tripled, rising from $37 million to $113 million, and they are likely to continue escalating. Several factors are likely to affect future subsidy requirements. First, carriers’ operating costs have increased over time, in part because of the costs associated with meeting federal safety regulations for small aircraft beginning in 1996.
Second, carriers’ revenues have been limited because many individuals traveling to or from EAS-subsidized communities choose not to fly from the local airport, but rather to use other larger nearby airports, which generally offer more service at lower airfares. On average, in 2000, each EAS flight operated with just over 3 passengers. Finally, the number of communities eligible for EAS subsidies has increased over time, rising from a total of 106 in 1995 to 114 in July 2002 (79 in the continental United States and 35 in Alaska, Hawaii, and Puerto Rico) and again to 133 in April 2003 (96 in the continental United States and 37 in Alaska, Hawaii, and Puerto Rico). The number of subsidy-eligible communities may continue to grow in the near term. Figure 4 shows the increase in the number of communities eligible for EAS-subsidized service between 1995 and April 2003.

Over the past year, the Congress, the administration, and we have each identified a number of potential strategies generally aimed at enhancing the EAS program’s long-term sustainability. These strategies broadly address challenges related to the carriers’ cost of providing service and the passenger traffic and revenue that carriers can hope to accrue. In August 2002, in response to a congressional mandate, we identified and evaluated four major categories of options to enhance the long-term viability of the EAS program. In no particular order, the options we identified were as follows:

Better match capacity with community use by increasing the use of smaller (i.e., less costly) aircraft and restricting little-used flight frequencies.

Target subsidized service to more remote communities (i.e., those where passengers are less likely to drive to another airport) by changing eligibility criteria.

Consolidate service to multiple communities into regional airports.

Change the form of the federal assistance from carrier subsidies to local grants that would allow local communities to match their transportation needs with individually tailored transportation options.

Each of these options could have positive and negative effects, such as lowering the program’s costs but possibly adversely affecting the economies of the communities that would lose some or all of their direct scheduled airline service.

This year’s House-passed version of the FAA reauthorization bill, H.R. 2115, also includes various options to restructure air service to small communities now served by the EAS program. The bill proposes an alternative program (the “community and regional choice program”), which would allow communities to opt out of the EAS program and receive a grant that they could use to establish and pay for their own service, whether scheduled air service, air taxi service, surface transportation, or another alternative. The companion Senate FAA reauthorization bill (also H.R. 2115) also includes specific provisions designed to restructure the EAS program. This bill would set aside some funds for air service marketing to try to attract passengers and create a grant program under which up to 10 individual communities or a consortium of communities could opt out of the existing EAS program and try alternative approaches to improving air service. In addition, the bill would preclude DOT from terminating, before the end of 2004, a community’s eligibility for an EAS subsidy because of decreased passenger ridership and revenue.

The administration’s proposal would generally restrict appropriations to the $50 million from overflight fees and would require communities to help pay the costs of funding their service. The proposal would also allow communities to fund transportation options other than scheduled air service, such as on-demand “air taxis” or ground transportation.

Mr. Chairman, this concludes my prepared statement.
I would be pleased to answer any questions you or other members of the Committee may have. For future contacts regarding this testimony, please contact JayEtta Hecker at (202) 512-2834. Individuals making key contributions to this testimony included Robert Ciszewski, Steven Cohen, Elizabeth Eisenstadt, Rita Grieco, Steven Martin, Katherine Siggerud, Glen Trochelman, and Alwynne Wilbur.

Federal-Aid Highways: Cost and Oversight of Major Highway and Bridge Projects—Issues and Options. GAO-03-764T. Washington, D.C.: May 8, 2003.

Transportation Infrastructure: Cost and Oversight Issues on Major Highway and Bridge Projects. GAO-02-673. Washington, D.C.: May 1, 2002.

Surface Infrastructure: Costs, Financing, and Schedules for Large-Dollar Transportation Projects. GAO/RCED-98-64. Washington, D.C.: February 12, 1998.

DOT’s Budget: Management and Performance Issues Facing the Department in Fiscal Year 1999. GAO/T-RCED/AIMD-98-76. Washington, D.C.: February 12, 1998.

Transportation Infrastructure: Managing the Costs of Large-Dollar Highway Projects. GAO/RCED-97-27. Washington, D.C.: February 27, 1997.

Transportation Infrastructure: Progress on and Challenges to Central Artery/Tunnel Project’s Costs and Financing. GAO/RCED-97-170. Washington, D.C.: July 17, 1997.

Transportation Infrastructure: Central Artery/Tunnel Project Faces Financial Uncertainties. GAO/RCED-96-1313. Washington, D.C.: May 10, 1996.

Central Artery/Tunnel Project. GAO/RCED-95-213R. Washington, D.C.: June 2, 1995.

Highway Safety: Research Continues on a Variety of Factors That Contribute to Motor Vehicle Crashes. GAO-03-436. Washington, D.C.: March 31, 2003.

Highway Safety: Better Guidance Could Improve Oversight of State Highway Safety Programs. GAO-03-474. Washington, D.C.: April 21, 2003.

Highway Safety: Factors Contributing to Traffic Crashes and NHTSA’s Efforts to Address Them. GAO-03-730T. Washington, D.C.: May 22, 2003.
Federal Transit Administration: Bus Rapid Transit Offers Communities a Flexible Mass Transit Option. GAO-03-729T. Washington, D.C.: June 24, 2003.

Mass Transit: FTA Needs to Provide Clear Information and Additional Guidance on the New Starts Ratings Process. GAO-03-701. Washington, D.C.: June 23, 2003.

Mass Transit: FTA’s New Starts Commitments for Fiscal Year 2003. GAO-02-603. Washington, D.C.: April 30, 2002.

Mass Transit: FTA Could Relieve New Starts Program Funding Constraints. GAO-01-987. Washington, D.C.: August 15, 2001.

Mass Transit: Project Management Oversight Benefits and Future Funding Requirements. GAO/RCED-99-240. Washington, D.C.: August 19, 1999.

Mass Transit: Implementation of FTA’s New Starts Evaluation Process and FY 2001 Funding Proposals. GAO/RCED-00-149. Washington, D.C.: April 28, 2000.

Mass Transit: Challenges in Evaluating, Overseeing, and Funding Major Transit Projects. GAO/T-RCED-00-104. Washington, D.C.: March 8, 2000.

Mass Transit: Status of New Starts Transit Projects With Full Funding Grant Agreements. GAO/RCED-99-240. Washington, D.C.: August 19, 1999.

Mass Transit: FTA’s Progress in Developing and Implementing a New Starts Evaluation Process. GAO/RCED-99-113. Washington, D.C.: April 26, 1999.

Commercial Aviation: Issues Regarding Federal Assistance for Enhancing Air Service to Small Communities. GAO-03-540T. Washington, D.C.: March 11, 2003.

Commercial Aviation: Factors Affecting Efforts to Improve Air Service at Small Community Airports. GAO-03-330. Washington, D.C.: January 17, 2003.

Commercial Aviation: Financial Condition and Industry Responses Affect Competition. GAO-03-171T. Washington, D.C.: October 2, 2002.

Options to Enhance the Long-term Viability of the Essential Air Service Program. GAO-02-997R. Washington, D.C.: August 30, 2002.

Commercial Aviation: Air Service Trends at Small Communities Since October 2000. GAO-02-432. Washington, D.C.: August 30, 2002.
Essential Air Service: Changes in Passenger Traffic, Subsidy Levels, and Air Carrier Costs. GAO/T-RCED-00-185. Washington, D.C.: May 25, 2000.

Essential Air Service: Changes in Subsidy Levels, Air Carrier Costs, and Passenger Traffic. GAO/RCED-00-34. Washington, D.C.: April 14, 2000.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
It is important to ensure that long-term spending on transportation programs meets the goals of increasing mobility and improving transportation safety. In this testimony, GAO discusses what recently completed work on four transportation programs suggests about challenges and strategies for improving the oversight and use of taxpayer funds. These four programs are (1) the federal-aid highway program, administered by the Federal Highway Administration (FHWA); (2) highway safety programs, administered by the National Highway Traffic Safety Administration (NHTSA); (3) the New Starts program, administered by the Federal Transit Administration (FTA); and (4) the Essential Air Service (EAS) program, administered out of the Office of the Secretary of Transportation. Differences in the structure of these programs have contributed to the challenges they illustrate. The federal-aid highway program uses formulas to apportion funds to the states, the highway safety programs use formulas and grants, the New Starts program uses competitive grants, and the EAS program provides subsidies. For each program, GAO describes in general how the program illustrates a particular challenge in managing or overseeing long-term spending and in particular what challenges and strategies for addressing the challenges GAO and others have identified.

The federal-aid highway program illustrates the challenge of ensuring that federal funds (nearly $30 billion annually) are spent efficiently when projects are managed by the states. GAO has raised concerns about cost growth on, and FHWA’s oversight of, major highway and bridge projects. Recent proposals to strengthen FHWA’s oversight are responsive to issues and options GAO has raised. Options identified in previous GAO work provide the Congress with opportunities to build on recent proposals by, among other things, clarifying uncertainties about FHWA’s role and authority.
NHTSA’s highway safety programs illustrate the challenge of evaluating how well federally funded state programs are meeting their goals. Over 5 years, the Congress provided about $2 billion to the states for programs to reduce traffic fatalities, which numbered over 42,000 in 2002. GAO found that NHTSA was making limited use of oversight tools that could help states better implement their programs and recommended strategies for improving the tools’ use that NHTSA has begun to implement. The administration recently proposed performance-based grants in this area.

FTA’s New Starts program illustrates the challenge of developing effective processes for evaluating grant proposals. Under the New Starts program, which provided about $10 billion in mass transit funding in the past 6 years, local transit agencies compete for project funds through grant proposals. FTA has developed a systematic process for evaluating these proposals. GAO believes that FTA has made substantial progress by implementing this process, but GAO’s work has raised some concerns, including the extent to which the process is able to adequately prioritize the projects.

The Essential Air Service (EAS) program illustrates the challenge of considering modifications to statutorily defined programs in response to changing conditions. Under the EAS program, many small communities are guaranteed to continue receiving air service through subsidies to carriers. However, the program has faced increasing costs and decreasing average passenger levels. The Congress, the administration, and GAO have all proposed strategies to improve the program’s efficiency by better targeting available resources and offering alternatives for sustainable services.
U.S. taxpayers can hold offshore accounts for a number of non-tax reasons, including access to funds while living or working overseas, asset protection, investment portfolio diversification, enhanced investment opportunities, and to facilitate international business transactions. U.S. taxpayers must report whether they have offshore accounts on Schedule B of IRS Form 1040 and pay taxes on income from the offshore accounts at their individual tax rates. Some taxpayers with large offshore account balances are also required to report additional account information, such as the name and location of their bank, by filing a form TD F 90-22.1, Report of Foreign Bank and Financial Accounts (FBAR). Failure to report the existence of offshore accounts or pay taxes on these accounts can lead to civil and criminal penalties. U.S. financial institutions are required to submit to IRS information returns that report income earned by account holders. IRS uses the information to check whether taxpayers are reporting investment earnings and other income correctly. Unlike the reporting requirements for U.S. financial institutions, there has been no reporting regime for foreign financial institutions, and this lack of information has limited IRS’s ability to ensure taxpayers were reporting offshore income accurately (see fig. 1). IRS has begun implementing provisions of the Foreign Account Tax Compliance Act (FATCA), which requires, beginning in 2015, U.S. financial institutions to withhold a portion of certain payments made to foreign financial institutions that have not entered into a specific agreement with IRS to report information on their U.S. clients. It is expected that IRS will use this information to identify noncompliant taxpayers. 
While IRS officials do not anticipate that FATCA will replace the offshore programs, they do believe that future programs may shift in focus to identifying promoters of offshore tax schemes that are not associated with the financial institutions that will be subject to FATCA reporting requirements. IRS’s offshore programs were designed to encourage taxpayers with undisclosed income from offshore accounts to become current with their tax liabilities. Although the offshore programs differed in details, all four followed a cycle similar to the one illustrated in figure 2. The offshore programs fit into IRS’s larger compliance efforts, which are intended both to detect noncompliance and to encourage voluntary compliance, in part by minimizing the burden for taxpayers to understand their tax obligations and file tax returns every year. While open to, and intended to attract, all noncompliant taxpayers with offshore accounts, the four offshore programs to date all started with IRS identifying a particular group of taxpayers suspected of having unreported offshore accounts. The group might be account holders at a particular bank or in a particular country. Sometimes IRS obtains such information from whistleblowers. In 2007, a whistleblower provided details to the U.S. government about how his employer, Swiss bank UBS, was actively assisting and facilitating U.S. taxpayers’ concealment of taxable income. (See app. II for more information on the UBS whistleblower.) IRS may also use information gathered through prior offshore programs to identify other banks or countries where U.S. taxpayers may be hiding offshore income. The next step is to learn the identities of some of the taxpayers suspected of noncompliance. One technique is to use John Doe summonses. In 2008, prior to the announcement of the 2009 OVDP, a federal court granted IRS permission to serve a John Doe summons to UBS for information on its U.S. customers.
As a result of the summons, and subsequent government negotiation and agreement, UBS turned over information on approximately 4,450 accounts held in Switzerland by U.S. persons. This was only a partial list of U.S. UBS account holders with accounts in Switzerland. In other cases, IRS has been able to get client lists from promoters of offshore tax evasion schemes. In order to encourage program participation, IRS publicizes the fact that it knows, or soon will know, the names of some offshore account holders. IRS also publicizes the terms of its offshore programs, which offer incentives to taxpayers who voluntarily disclose their accounts before IRS learns about them. As described later, the offshore programs offer a reduced risk of criminal prosecution and lower penalties than taxpayers could receive if unreported offshore accounts were discovered in an audit. In this report we refer to the reduced penalty offered as part of an offshore program as the “offshore penalty.” In the 2009 OVDP the offshore penalty was typically 20 percent of the highest aggregate value of the unreported offshore accounts between 2003 and 2008. Provided that they meet certain criteria, taxpayers are accepted into one of IRS’s offshore programs by responding to IRS questions about the nature of their offshore noncompliance in an application letter and filing amended or late tax returns and FBARs. (See app. III for sample application letters.) Investigators from IRS’s Criminal Investigation division generally review applications to verify that taxpayers are not already under investigation, that the offshore income was from legal sources, and that the taxpayer has made a complete and truthful disclosure. Taxpayers’ amended or late returns that are submitted as part of an offshore program are reviewed and certified by IRS examiners, who calculate the delinquent taxes, interest, and penalties and who may request additional documents and information from taxpayers.
Taxpayers who did not participate in an offshore program but are known to IRS (perhaps because they were on the list of names IRS identified in Step 2) run the risk of being audited outside of an offshore program. These taxpayers could be subject to substantially greater penalties and increased risk of criminal prosecution. Since 2009, IRS and the Department of Justice (DOJ) have publicized more than 40 prosecutions of UBS clients and UBS bankers. By data mining, or analyzing, information from offshore program application letters, reviewing the case files of program participants, and auditing nonparticipants, IRS is able to identify new groups of taxpayers suspected of hiding income offshore. IRS can then choose to continue offering offshore programs and encourage these newly identified groups of taxpayers, as well as all taxpayers with unreported offshore accounts, to disclose their accounts voluntarily, repeating the cycle illustrated in figure 2. For example, taxpayers who participated in the 2009 OVDP named other Swiss banks and financial advisors who had assisted them with hiding offshore income. As a result, IRS and DOJ took actions to compel other Swiss banks to name their U.S. customers. To date, some Swiss banks have announced that they are cooperating with U.S. government investigations. One Swiss bank ceased operating after it pleaded guilty to helping U.S. taxpayers hide income offshore and agreed to pay approximately $74 million in fines, restitution, and civil forfeiture. IRS and DOJ are also pursuing other banks in Liechtenstein, Israel, and India, which had been named by 2009 OVDP participants. Each of IRS’s four offshore programs had a slightly different structure, including a higher standard offshore penalty rate for each subsequent program, as shown in table 1. In the 2009 OVDP, the standard offshore penalty was 20 percent.
The offshore programs offer participating taxpayers a lower penalty than they could have been subject to if IRS had discovered their offshore account outside of the program. According to IRS, the offshore penalty is in lieu of the other liabilities for tax, interest, and penalties that IRS could otherwise pursue. Taxpayers who do not participate in an offshore program could potentially face penalties that total more than 100 percent of the value of their unreported offshore accounts. These penalties could include FBAR, accuracy-related and/or delinquency, fraud, and foreign information return penalties. Most of the offshore programs also offered taxpayers mitigated penalties at lower rates, generally for taxpayers with small accounts or accounts that were not accessed, as also shown in table 1. Many offshore accounts were presumably open for decades, something that we confirmed in our review of 2009 OVDP cases, but practical reasons prevented IRS from auditing and collecting unpaid taxes from all of those years. The standard 2009 OVDP 20 percent offshore penalty was calculated based not on additional taxes assessed, but on the highest aggregate value of the offshore accounts. As a result, the penalty has been described by tax practitioners as “rough justice,” in part because the amount in an account might include decades of tax-free buildup. (See app. IV for hypothetical examples illustrating tax-free buildup and penalties for accounts of different ages.) Under the 2003, 2009, and 2011 programs, taxpayers had a specified period of time to join a program. The 2012 program is, at present, open-ended. In each program, delinquent taxes and interest were assessed and collected for a limited number of prior years, which varied from four to eight tax years. Taxpayers were typically assessed accuracy-related and/or delinquency penalties for the delinquent taxes assessed in an offshore program, in addition to the offshore penalty described earlier.
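To make the penalty arithmetic concrete, the following sketch uses hypothetical account balances (not figures from any actual case) to compare the standard 2009 OVDP offshore penalty, 20 percent of the highest aggregate account value over 2003 through 2008, with the potential exposure outside the program, which, as noted above, could exceed 100 percent of the account's value.

```python
# Illustrative sketch with hypothetical figures: the standard 2009 OVDP
# offshore penalty versus potential exposure outside the program.

def ovdp_offshore_penalty(yearly_balances, rate=0.20):
    """Standard 2009 OVDP offshore penalty: the penalty rate applied to the
    highest aggregate account value over the covered years (2003-2008)."""
    return rate * max(yearly_balances)

# Hypothetical aggregate balances for tax years 2003 through 2008.
balances = [400_000, 450_000, 500_000, 600_000, 700_000, 650_000]

print(f"Inside the program:  ${ovdp_offshore_penalty(balances):,.0f}")

# Outside the program, combined FBAR, fraud, accuracy-related, delinquency,
# and foreign information return penalties could exceed 100 percent of the
# account's value.
print(f"Outside the program: potentially over ${max(balances):,.0f}")
```

The mitigated rates described in table 1 would simply substitute a lower `rate` (for example, 0.125 or 0.05) in the same calculation.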
Despite the significant risks of not coming forward through one of IRS’s offshore programs, some taxpayers decide to do nothing and remain noncompliant. Other taxpayers have attempted to disclose their offshore accounts without paying all the delinquent taxes, interest, and penalties required by the programs. In a quiet disclosure, taxpayers file amended tax returns for all or some of the tax years covered by an offshore program, and report the income from the previously unreported accounts. The taxpayers would generally pay interest and either accuracy-related or delinquency penalties on the newly reported income, but would avoid the higher offshore penalty. At the same time, taxpayers attempting quiet disclosures would file late FBARs, if they had not previously filed FBARs, or amended FBARs, if they had, to disclose the offshore accounts that they had not previously reported. Taxpayers might also try to circumvent some of the taxes, interest, and penalties that would otherwise be owed in offshore programs by reporting the existence of any offshore accounts and any income from the accounts on their current year’s tax return, without amending prior years’ returns. These taxpayers would also likely disclose the existence of the accounts by filing FBARs for the current calendar year. This filing would appear similar to the opening of a new account. Such a taxpayer would avoid paying any delinquent taxes, interest, or penalties, unless audited. As described earlier, taxpayers who are caught disclosing offshore accounts outside of one of IRS’s offshore programs risk steeper penalties and criminal prosecution, based on the facts and circumstances of their cases. Participants in IRS’s 2009 OVDP had offshore accounts that varied considerably in size. 
Of the 10,439 closed 2009 OVDP cases, we estimate based on penalty data that the bottom 10 percent of the participants had account balances of less than $79,000 and the top 10 percent had balances over $4 million, as shown in table 2. The amount of offshore penalties also varied widely, which reflected the range of account balances. Some taxpayers were assessed an offshore penalty of a few thousand dollars while others were assessed several million dollars. The average offshore penalty assessed was about $376,000 while the median was approximately $108,000. Of the 10,439 closed cases, most were assessed offshore penalties, and 96 percent of those assessed penalties received the standard offshore penalty—20 percent of the highest aggregate value of the offshore accounts, which was also the maximum offshore penalty rate in the 2009 OVDP. The 20 percent penalty was generally levied when the total account value was greater than $75,000 and when taxpayers used the accounts (e.g., made deposits or withdrawals) during the period under review (2003 to 2008). See table 3. Fewer than 5 percent of 2009 OVDP participants received one of the mitigated offshore penalties, 12.5 percent or 5 percent, also shown in table 3. (See sidebars for representative examples of mitigated penalty cases.) Consistent with IRS’s enforcement efforts and the design of the 2009 OVDP, we found that the population of participants was more likely to report offshore accounts in Switzerland than the average foreign account holder who filed an FBAR (see fig. 3). Taxpayers with closed cases also had higher incomes than the average taxpayer, were older, and were more likely to use the married filing jointly status. (See app. VI.) About half of the revenues collected through the 2009 OVDP, as of March 30, 2012, came from 378 cases where taxpayers received offshore penalties of $1 million or greater, meaning they had account balances of $5 million or greater.
This group, which we refer to as “large penalty cases,” accounted for about 6 percent of the closed 2009 OVDP cases, but the penalties they received amounted to 49 percent of the total $1.9 billion in offshore penalties that had been assessed by IRS at that time. Given this group’s high share of penalties assessed, we selected a random sample of 30 of them for further examination and to obtain a better understanding of taxpayers’ noncompliance. For large penalty cases, we estimate that more than 50 percent of taxpayers had one or more bank accounts with Swiss bank UBS. (See app. VII for detailed information on the location of these taxpayers’ offshore accounts, including country and bank names.) Some of these taxpayers with UBS accounts transferred funds from Swiss bank UBS in 2008—the time when the U.S. government was actively trying to compel UBS to name its U.S. account holders. The funds were often transferred to other, smaller Swiss banks that generally did not operate in the United States. A few taxpayers claimed that they transferred funds at the recommendation of their UBS financial advisors. Taxpayers transferring funds to other banks may have been attempting to keep their offshore accounts hidden before deciding to participate in the 2009 OVDP. (The 95 percent confidence interval for the estimated 70 percent of taxpayers receiving large penalties with accounts at Swiss bank UBS is 51 percent to 85 percent. See appendix I for more information on our scope and methodology and appendix VII for more counts by case file.) Many taxpayers in the 30 large penalty cases that we reviewed had resided outside the United States for extended periods of time—either as U.S. citizens or prior to obtaining U.S. citizenship. Many taxpayers who disclosed extended periods of non-U.S. residency reported that they had opened their offshore accounts with income earned outside of the United States. A few of these taxpayers had been living and working overseas as U.S. citizens for decades.
Others within this group opened accounts before immigrating to the United States. Although some taxpayers in these cases became U.S. residents decades ago, they maintained their offshore accounts and did not disclose them on tax returns or FBARs. Some taxpayers reported opening bank accounts in Switzerland as a means of protecting family assets during periods of war or instability in their native country. Further, a few taxpayers who immigrated to the United States reported that they had been unaware of their FBAR reporting requirements, that they had to state that they had foreign accounts on the Form 1040, Schedule B, or that the United States taxes the worldwide income of its residents, including overseas investment income. (See sidebars for representative examples from our case file reviews.) Taxpayers in some of the cases that we reviewed disclosed that the original source of funds for their offshore accounts came from post-tax U.S. source income. A few of these taxpayers cited family histories or personal fears about the safety of U.S. banks as their reasons for moving savings offshore. Other reasons cited included the need to protect or shelter assets from possible U.S. lawsuits. We estimate that 47 percent of taxpayers receiving large penalties inherited offshore accounts from a parent, spouse, or other relative—some of whom were not U.S. citizens or residents. In many instances, taxpayers reported inherited accounts that were jointly owned or managed by extended family members, such as siblings and cousins, who also applied to the 2009 OVDP and sometimes split the penalties. Regardless of how taxpayers in the large penalty cases came to own offshore accounts, many maintained but did not disclose offshore account balances of several million dollars for many years. Some of these taxpayers did not pay U.S. taxes on income earned from these accounts for decades.
We estimate that 40 percent of 2009 OVDP participants receiving large penalties used complex arrangements to indirectly own or manage their offshore accounts. These arrangements involved the use of foreign corporations, foundations, trusts, and other entities in jurisdictions that have been designated as offshore tax havens and financial privacy jurisdictions, some of which were recommended by the taxpayers’ foreign financial advisors. In some cases, the entities were “sham” entities—i.e., entities created to conceal ownership from U.S. tax authorities—which participants in some of the case files that we reviewed used to hide the ownership of accounts or to disguise the repatriation of offshore funds back to the United States. Another complex arrangement present in several large penalty cases was the use of passive foreign investment companies (PFIC). A PFIC is a type of mutual fund or investment company held outside of the United States. Some foreign bank accounts disclosed through OVDP were in the form of simple interest-bearing accounts, but others were foreign mutual funds that would be treated as PFICs under the Internal Revenue Code. PFICs may, in some cases, receive less favorable tax treatment than U.S. entities holding similar assets or earning similar income. Taxpayers who did not disclose PFICs may not have paid the additional taxes on such investments. In many cases, a number of previously unreported investment entities were disclosed through the 2009 OVDP, and IRS decided to accept an alternative tax on all their associated PFIC gains—20 percent of the gain—potentially a much lower tax rate than would otherwise have applied to those taxpayers. As previously discussed, one of the intended purposes of the 2009 OVDP was to mine, or analyze, data collected from OVDP applications and audits of participants and nonparticipants to identify entities and individuals who promoted or otherwise helped U.S. citizens hide assets and income offshore.
We found that IRS collected the names of offshore financial institutions, financial advisors, bankers, attorneys, and other promoters from the 2009 OVDP that were involved in hiding U.S. taxpayers’ offshore income, and used the names to (1) identify patterns of noncompliance, (2) encourage banks and other promoters to cooperate with IRS and provide the names of U.S. taxpayers hiding income overseas, and (3) build cases for John Doe summonses. IRS officials from the Offshore Compliance Initiative office told us that publicity from the John Doe summonses has been the most effective tool to increase participation in its offshore programs. They based their conclusion on the correlation between country-specific or bank-specific John Doe summonses and the locations of 2009 OVDP participants’ accounts. Our case file analysis discussed previously in this report supports IRS’s conclusion. However, IRS officials also determined that data mining the 2009 OVDP applications would not provide IRS with all of the useful information it could get from participants. For taxpayers accepted into the program, responses on the 2009 OVDP applications varied widely in degree of detail, which we confirmed in our case file review. For example, some application letters included very detailed account information, such as the original source of funds, bank name, banker name, and country name, while other case files we reviewed did not contain any optional letter like the one suggested by IRS in its 2009 OVDP Questions & Answers. As a consequence, IRS sent surveys to 2009 OVDP participants to obtain more details about the offshore accounts. The survey included detailed questions about the involvement of the taxpayer’s financial institutions, bankers, advisors, attorneys, or other promoters in hiding offshore income.
IRS program officials stated that the additional information they received from the surveys was useful and that they were using it, along with various analyses of voluntary disclosures, to identify particular banks, promoters, professionals, and others who promote, facilitate, or enable U.S. taxpayers in avoiding or evading payment of required U.S. taxes through the use of offshore accounts. According to IRS, these analyses have also been used to identify the foreign countries where the offshore accounts were maintained as well as the schemes and offshore structures being used. Based on data that IRS collected from mining the 2009 OVDP case files and the survey, IRS obtained information on offshore accounts held by U.S. taxpayers at HSBC (India); continued investigations of additional foreign financial institutions in Switzerland, Asia, and the Caribbean; built cases for additional John Doe summonses, should they become necessary; expanded its investigations of non-bank entities, such as merchant accounts, which are a type of bank account that allows a business to accept payments by payment cards, such as credit or debit cards; and improved subsequent offshore programs. One lesson that IRS learned from the 2009 OVDP was that the applications sometimes did not contain enough information to allow IRS to understand the nature of the noncompliance. To obtain better information going forward, and as a condition of being accepted into the 2011 and 2012 programs, IRS required applicants to submit additional documents related to their offshore accounts, including account information about the original source of funds. In addition, applicants to the 2011 and 2012 programs that had offshore accounts with an aggregate balance of $1 million or more were required to submit a separate statement for each foreign financial institution. These applicants were also required to submit a separate statement for each foreign account or asset listed in their voluntary disclosure. (See app.
III for sample 2009 and 2012 application letters and the new required attachment to the 2012 application letter.) IRS officials from the Offshore Compliance Initiative office told us that they have begun to use data from these additional submissions to improve offshore compliance. Based in part on its experience with the 2009 OVDP, IRS introduced streamlined offshore program filing procedures. These were intended, in part, to provide a less burdensome process for taxpayers with small unreported offshore accounts. As shown earlier in table 2, for the 10,439 2009 OVDP cases for which we had data, the account value at the 10th percentile was about $78,000. According to IRS, some of these taxpayers with smaller accounts, and thus relatively low unpaid-tax obligations, were U.S. taxpayers residing overseas, including dual citizens, who most likely did not owe substantial amounts of unpaid taxes and who indicated to IRS that they did not understand their filing requirements. The standard offshore penalty for such taxpayers would likely be disproportionately high. The streamlined filing procedures that began in September 2012 allow taxpayers with “low compliance risk” to become current with their offshore tax obligations without facing offshore penalties or additional enforcement action. IRS defined “low compliance risk” as taxpayers with simple tax returns owing less than $1,500 in taxes for each of the years covered by the streamlined procedures. IRS efforts to publicize the 2009 OVDP included notices published in seven languages and outreach to professional tax practitioners. IRS officials from the Offshore Compliance Initiative office told us that they had not formally evaluated the success of these outreach efforts. We recently reported concerns about the complexity of foreign account reporting requirements and that tax practitioners and taxpayers are confused about what foreign account information should be reported and how.
The offshore programs are part of IRS’s larger compliance efforts, which are intended both to detect noncompliance and to encourage voluntary compliance, in part by minimizing the burden for taxpayers to understand their tax obligations and file tax returns every year. Obtaining information on how taxpayers found out about IRS’s offshore voluntary disclosure programs could help IRS better identify populations that could benefit from additional taxpayer education and outreach and potentially improve voluntary compliance by taxpayers with new offshore accounts. Such information could also help IRS evaluate the success of its current outreach efforts. IRS’s 2009 OVDP application, however, did not contain a question on how the taxpayer became aware of the program. IRS made changes to the applications for subsequent programs, as described earlier, but did not consider adding questions on how participants became aware of the program. IRS officials from the Offshore Compliance Initiative office told us that this information would be useful in terms of allocating future resources and that they would be open to considering a question on how taxpayers found out about the offshore programs. As of our review, however, IRS had not decided to include this question in the 2012 program application. In our case file review, we found examples of immigrants who stated in their 2009 OVDP applications that they were unaware of their FBAR filing requirements. We found they had often opened bank accounts in their home country prior to immigrating to the United States. IRS officials from the Offshore Compliance Initiative office stated that although there are several FBAR education programs, none are specifically targeted at new immigrants. Furthermore, these IRS officials were unaware of any IRS work with other federal agencies such as the State Department or the Department of Homeland Security to educate recent immigrants about their foreign account filing requirements.
These officials stated that one of the challenges that they face in their office, which is part of IRS’s Large Business and International Division, is that taxpayer education and outreach is the responsibility of IRS’s Wage and Investment Division and that issues concerning FBARs fall under IRS’s Small Business/Self-Employed Division. IRS officials from the Offshore Compliance Initiative office agree that more could be done to improve taxpayer education and outreach about offshore reporting requirements. They, like us, recognize that multiple outreach efforts could help to draw additional taxpayers into the offshore programs and that data mining information from the program applications can help identify these groups. Quiet disclosures matter because, if IRS does not identify them, they undermine the incentive to participate in the offshore programs. IRS’s offshore compliance enforcement efforts, including the offshore programs, deter noncompliance related to current offshore accounts or offshore accounts that might be opened in the future. If taxpayers are able to quietly disclose and pay fewer penalties than they would have in an offshore program, the incentive for other noncompliant taxpayers to participate in a program is reduced. When quiet disclosures remain undetected, they also result in lost revenue for the government. Further, if quiet disclosures remain undetected, then IRS will not have information on the characteristics of these taxpayers and their accounts—characteristics such as bank names, country names, and promoter names—used to build cases against others. We identified 10,595 potential quiet disclosures, a number much higher than the number of potential quiet disclosures identified by IRS. In a series of Questions & Answers that IRS first released on February 8, 2011 to announce the 2011 offshore program, IRS reported that it had identified, and will continue to identify, taxpayers attempting quiet disclosures.
In the Questions & Answers, IRS stated that it would be closely reviewing amended tax returns to determine whether enforcement action is appropriate. (See sidebar for one example of a quiet disclosure being detected.) IRS officials told us that the Offshore Compliance Initiative office tested several different methodologies to identify quiet disclosures. First, IRS looked at amended returns for tax year 2003 through tax year 2008, the period covered by the 2009 OVDP, and removed any non-offshore-related adjustments, such as filing status changes and additional exemptions. IRS also looked at amended returns with increased tax assessments over an established threshold for tax year 2003 through tax year 2010. A third effort, whose effectiveness IRS questioned, compared taxpayers with a history of filing FBARs in non-secrecy jurisdictions between tax year 2003 and tax year 2008 against those who filed delinquent FBARs, processed in 2009, involving a secrecy jurisdiction along with an amended return. A fourth effort, in 2012, was not designed to detect quiet disclosures but to reroute misaddressed amended returns sent in by participants in the 2011 offshore program; it nonetheless proved the most successful effort at finding them. Together, these four efforts led to the review of several thousand tax returns. Of those, several hundred returns were identified as quiet disclosures. An IRS official told us that the tax returns that were identified as part of a quiet disclosure will be examined and that cases already examined had penalties assessed. Because they were quiet disclosures, the official said, the taxpayers did not receive the reduced offshore penalty. Given the importance of IRS’s ability to detect quiet disclosures and evidence that they exist, we tested a different methodology to identify potential quiet disclosures, and found many more than IRS detected.
Unlike IRS, we looked at all taxpayers who, for the tax years covered by the 2009 OVDP, filed amended or late returns and filed amended or late FBARs. We then excluded 2009 OVDP participants from this population. While only an IRS examination can determine whether a potential quiet disclosure is an actual quiet disclosure, the 10,595 taxpayers that we identified have an unlikely combination of characteristics that could indicate that they are quietly disclosing. IRS agreed that our methodology was reasonable and appropriate. (See app. I for additional details about our methodology and app. VIII for a full breakout of our results.) Although any of the 10,595 potential quiet disclosures could be actual quiet disclosures, certain subpopulations raised more questions. First, we found 3,386 taxpayers who filed amended or late returns and amended or late FBARs for multiple years. Second, we found that 94 of these taxpayers met the same criteria for all six tax years covered by the 2009 OVDP. IRS officials from the Offshore Compliance Initiative office told us that they had no additional work planned to identify potential quiet disclosures and had not yet decided to broaden the methodologies that they had tested, but they expressed strong interest in researching our methodology to identify taxpayers attempting quiet disclosures. We recognize that there are additional costs to using a methodology such as the one we used, but IRS has already committed resources to identifying quiet disclosures. Moreover, without rigorously and systematically searching for potential quiet disclosures, IRS does not have reasonable assurance that it is controlling such disclosures and collecting the delinquent taxes, interest, and penalties due.
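The matching approach described above reduces to a simple set operation. The sketch below uses hypothetical taxpayer identifiers (not real data) to flag taxpayers who filed both amended or late returns and amended or late FBARs for the covered years, then excludes 2009 OVDP participants.

```python
# Minimal sketch of the quiet-disclosure matching logic, using hypothetical
# taxpayer identifiers: taxpayers who filed amended or late returns AND
# amended or late FBARs for the 2009 OVDP years, minus program participants.

amended_or_late_returns = {"tp01", "tp02", "tp03", "tp04"}  # hypothetical
amended_or_late_fbars = {"tp02", "tp03", "tp04", "tp05"}    # hypothetical
ovdp_2009_participants = {"tp04"}                           # hypothetical

# Intersection of the two filing populations, minus OVDP participants.
potential_quiet_disclosures = (
    amended_or_late_returns & amended_or_late_fbars
) - ovdp_2009_participants

print(sorted(potential_quiet_disclosures))  # ['tp02', 'tp03']
```

Only an examination can confirm that a flagged taxpayer actually made a quiet disclosure; the set operation simply narrows the population to those with the unlikely combination of filings.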
Exploring different methodologies that include a systematic evaluation of amended returns or late filed returns, along with amended or late filed FBARs, without too narrowly restricting either the amended return or the FBAR populations, and implementing the best option could provide this assurance. Data from IRS’s SOI division and from FinCEN show that the number of taxpayers reporting offshore accounts on Form 1040, Schedule B and the number of taxpayers filing FBARs has increased significantly in recent years. From tax year 2007 to tax year 2010 (the most recent data available), IRS estimated that the number of taxpayers reporting offshore accounts on Form 1040, Schedule B nearly doubled to 516,000, as shown in figure 4. From tax year 2003 through tax year 2007, only about 1 percent of all taxpayers filing Form 1040, Schedule B checked a “yes” box in response to the question asking if they owned or controlled a foreign financial account, but that share increased to more than 2.5 percent by tax year 2010. Furthermore, FinCEN has reported that the number of FBARs filed more than doubled, as shown in figure 4. Both the increase in the number of foreign accounts reported on Form 1040, Schedule B and the increase in FBAR filings are significantly larger than the approximately 39,000 taxpayers that came forward in one of IRS’s offshore programs. There could be legitimate reasons for these trends. For example, taxpayers could be reporting new offshore accounts or taxpayers who had always reported income from offshore accounts on their tax returns could be filing FBARs and reporting the accounts on Form 1040, Schedule B for the first time. This could be an indication of more taxpayers coming into compliance as a result of IRS’s efforts to combat offshore tax evasion. 
However, such a sharp increase in foreign account reporting amidst the global economic recession and the publicity surrounding IRS's offshore programs raises the question whether some of these taxpayers may have attempted to circumvent some of the taxes, interest, and penalties that would otherwise be owed in the offshore programs. Unlike taxpayers attempting a quiet disclosure, who would still pay taxes plus interest on previously unreported income covered by the programs, and possibly an accuracy-related or delinquency penalty, these taxpayers would only be paying taxes on the offshore income earned for the year reported. An IRS official from the Offshore Compliance Initiative office told us that although the office has coordinated with IRS's Planning, Analysis, Inventory, and Research (PAIR) office, they had not discussed Form 1040, Schedule B or FBAR filing trends, and that he was not aware of the sharp increase. As of January 2013, no projects were planned to research Form 1040, Schedule B filing trends. However, the Offshore Compliance Initiative office has asked PAIR to determine whether taxpayers who reported their offshore income properly, but had not filed FBARs, recently started filing delinquent FBARs, as directed by the 2009 OVDP instructions. This effort may not capture first-time FBAR filers who are reporting existing offshore accounts as new. Because the increase in recent years in Form 1040, Schedule B and FBAR reporting of foreign accounts is measured in the hundreds of thousands, we recognize that it may be too costly for IRS to audit all of those filings. A less costly approach could involve, for example, IRS drawing a random sample of those cases and auditing them to understand whether taxpayers are trying to circumvent some of the taxes, interest, and penalties that would otherwise be owed in the offshore programs. One of the things that IRS could look for in such an audit is the date that the offshore account was opened. 
Such a sample could provide an estimate of the magnitude of any problem. As was the case with quiet disclosures, without such information, it will be difficult for IRS to provide reasonable assurance that taxpayers are not reporting, for the first time, offshore accounts that had been open for years to avoid paying delinquent taxes, interest, and penalties. (See Internal Revenue Service, "Voluntary Disclosure: Questions and Answers," Q9, accessed February 8, 2013, http://www.irs.gov/uac/Voluntary-Disclosure:-Questions-and-Answers.) IRS's offshore programs offer taxpayers incentives to disclose unreported offshore income. Through these programs, IRS has collected more than $5.5 billion to date, brought tens of thousands of taxpayers into compliance, and gained increased information on offshore noncompliance. It is unclear how many additional U.S. taxpayers have undeclared foreign accounts and how much unreported income is associated with those accounts. However, the number of quiet disclosures IRS was able to find (some by accident), the number of potential quiet disclosures we identified, and the sharp upswing in Form 1040, Schedule B and FBAR filings all suggest that the amount of revenue to be collected from previously undisclosed offshore accounts could be significant. We found two key issues that, if addressed, could make IRS's offshore programs even more successful. IRS has not used program information to identify populations of taxpayers that would benefit from education and outreach regarding their offshore tax reporting obligations. Such information could promote voluntary compliance and reduce the need for enforcement actions. Additionally, IRS does not obtain information on how taxpayers learned about offshore programs. Without this information, IRS cannot fully evaluate its efforts to promote taxpayer participation in offshore programs. IRS may have missed taxpayers attempting to circumvent some of the taxes, interest, and penalties that would otherwise be owed in its offshore programs. 
Our methodology to identify potential quiet disclosures found many more potential disclosures than IRS detected. IRS may also have missed other attempts at circumvention by not researching the upward trends of taxpayers reporting offshore accounts for the first time. While there would be costs to such efforts, the amount already collected by the offshore programs suggests that considerable additional revenue gains might be possible. By identifying taxpayers attempting to circumvent some of the taxes, interest, and penalties that would otherwise be owed in its offshore programs, and taking appropriate action, IRS could potentially increase revenues, bolster the overall fairness of the program, and have a more informed basis for improving voluntary compliance. We recommend that the Acting Commissioner of Internal Revenue take the following four actions: Use data gained from offshore programs to identify and educate populations of taxpayers that might not be aware of their tax obligations related to offshore income and FBAR filing requirements. Obtain information that can help IRS test offshore program promotion strategies and identify new ones by adding a question to current and future programs to determine how participants found out about the program. Explore options for employing a methodology for identifying and pursuing potential quiet disclosures to provide more assurance that actual quiet disclosures are not being missed and then implement the best option. Conduct an analysis designed to measure the extent that taxpayers are reporting existing foreign accounts on the Form 1040, Schedule B or on FBARs for the first time and circumventing some of the taxes, interest, and penalties that would otherwise be owed, and take appropriate action based on the analysis. We provided a draft of this report to the Acting Commissioner of Internal Revenue for comment. In written comments, reproduced in appendix IX, IRS agreed with our four recommendations. 
IRS noted that it was pleased that we recognized the overall success of its offshore strategy and provided steps that it is taking to implement our recommendations and address any identified noncompliance, as warranted. IRS also provided technical comments on our draft report, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We are also sending copies to the Acting Commissioner of Internal Revenue, the Secretary of the Treasury, the Chairman of the IRS Oversight Board, and the Deputy Director for Management of the Office of Management and Budget. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9110 or whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X. The objectives of this report were to (1) describe the nature of the noncompliance of taxpayers participating in the 2009 Offshore Voluntary Disclosure Program (OVDP), (2) determine the extent to which the Internal Revenue Service (IRS) used data from the 2009 OVDP in order to better prevent and detect future noncompliance, and (3) assess IRS's efforts to identify taxpayers who may have attempted quiet disclosures or other ways of circumventing some of the taxes, interest, and penalties that would otherwise be owed in its offshore programs. 
To describe the characteristics of taxpayers participating in the 2009 OVDP, we relied on data for tax years 2003 through 2008 from four sources: (1) the Criminal Investigation Management Information System (CIMIS) managed by IRS's Criminal Investigation (CI) division; (2) the Currency and Banking Retrieval System managed by the Treasury Department's Financial Crimes Enforcement Network (FinCEN); (3) IRS's Individual Master File and Business Master File; and (4) IRS's Compliance Data Warehouse (CDW). We used data from four databases in CDW: Enforcement Revenue Information System, Individual Returns Transaction File, Audit Information Management System, and Business Returns Transaction File. To determine the reliability of IRS's taxpayer data, we reviewed relevant documentation, conducted interviews with IRS officials knowledgeable of the data, and conducted electronic testing of the data to identify obvious errors or outliers. We determined that these data were sufficiently reliable for our purposes. This population includes 200 participants with an Employer Identification Number (EIN), which IRS uses to identify businesses, instead of an Individual Taxpayer Identification Number or Social Security Number. Since these business entities represented less than 1 percent of the total OVDP participants identified, our use of the term "OVDP participants" in this report generally refers to individual taxpayers participating in the program. We counted both spouses as participants (two rather than one) in situations where only one spouse applied to the 2009 OVDP through CI, but both were liable for the delinquent taxes, interest, and penalties because of their married filing jointly status. From the 19,337 participants, we identified 10,439 closed examination cases as of November 29, 2012, which we use in this report for our analysis of penalties. 
To obtain a better understanding of taxpayer noncompliance, we selected a random sample of 30 2009 OVDP case files for cases that were closed as of March 30, 2012, and that received a 2009 OVDP penalty of $1 million or greater. As part of the 2009 OVDP application, taxpayers were asked to explain their reasons for establishing offshore accounts, the source of funds, the ownership structure, and the history of accounts. Many taxpayers in our sample submitted an IRS optional letter containing this information with their application (referred to in this report as the "application letter"; see appendix III for sample application letters). Some taxpayers were interviewed by IRS investigators, and some responded to IRS follow-up requests for additional information. Additionally, other case file documents that provided key information were: (1) IRS Form 906, Closing Agreement On Final Determination Covering Specific Matters; (2) IRS Form 4549-A, Income Tax Discrepancy Adjustments; (3) OVDP Penalty Computation Workpaper; and (4) Form TD F 90-22.1, Report of Foreign Bank and Financial Accounts (FBAR). We used a standard data collection instrument to review each case file to ensure we consistently captured information about the 2009 OVDP participants, their offshore accounts, and their penalties, interest, and additional taxes owed. To ensure reliability, two analysts separately conducted this analysis, and a third analyst compared and reconciled any inconsistencies regarding the categorizations of 2009 OVDP cases. The analysts then tallied the number of observations for each topic or category and all information was traced and verified. We then analyzed the results of this data collection effort to identify main themes and develop summary findings. We determined that these data were sufficiently reliable for our purposes. (See app. VII for a summary of our data collection instrument results.) 
To determine the extent to which IRS used data from the 2009 OVDP in order to better prevent and detect future noncompliance, we also interviewed IRS officials from the office of the Offshore Compliance Initiative to determine what data they collected from the 2009 OVDP effort and how, if at all, IRS used that data to create taxpayer profile data to identify additional offshore noncompliance and inform future offshore programs. In addition, we reviewed changes that IRS made to the 2011 and 2012 offshore programs. To assess IRS’s efforts to identify taxpayers who may have attempted quiet disclosures, we used the same datasets that we used to identify the 2009 OVDP population, as described above, plus FBAR data from FinCEN. To determine the reliability of FinCEN’s FBAR data, we reviewed relevant documentation, conducted interviews with FinCEN officials knowledgeable of the data, and conducted electronic testing of the data to identify errors or outliers. We determined that these data were sufficiently reliable for our purposes. To identify potential quiet disclosures we conducted a three-step analysis. First, we used IRS tax return data to identify taxpayers who filed late or amended returns for the applicable 2009 OVDP period. We then used FBAR data to identify taxpayers who filed late or amended FBARs during the same time period to create a combined list of taxpayers. Finally, we removed from this combined list any taxpayers that we had previously identified as 2009 OVDP participants. The remaining taxpayers constitute our population of taxpayers who potentially “quietly disclosed” offshore accounts. From this population, we used data from amended tax returns to identify whether the amended returns had positive adjustments to income, and whether taxpayers filed amended returns for multiple years. We confirmed this methodology with IRS officials. The results of our analyses are shown in appendix VIII. 
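The three-step matching described above amounts to simple set operations: intersect the late-or-amended-return population with the late-or-amended-FBAR population, then remove known program participants. The sketch below is a hypothetical illustration; the taxpayer identifiers are invented stand-ins for the IRS return data, FinCEN FBAR data, and the 2009 OVDP participant list.

```python
def potential_quiet_disclosures(late_or_amended_returns,
                                late_or_amended_fbars,
                                ovdp_participants):
    """Flag taxpayers who filed late/amended returns AND late/amended
    FBARs for the 2009 OVDP period but did not join the program."""
    # Steps 1 and 2: taxpayers present in both filing populations.
    combined = set(late_or_amended_returns) & set(late_or_amended_fbars)
    # Step 3: remove known 2009 OVDP participants.
    return combined - set(ovdp_participants)

# Illustrative (hypothetical) taxpayer IDs:
returns = {"T01", "T02", "T03", "T04"}
fbars = {"T02", "T03", "T05"}
ovdp = {"T03"}
print(sorted(potential_quiet_disclosures(returns, fbars, ovdp)))  # ['T02']
```

As the report notes, only an IRS examination can confirm that a flagged taxpayer actually made a quiet disclosure; the match merely narrows the field to taxpayers with an unlikely combination of filings.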
To assess other ways taxpayers might be circumventing some of the taxes, interest, and penalties that would be otherwise owed, we analyzed filing trends in FBAR data from FinCEN and in Schedule B, Interest and Ordinary Dividends, of IRS Form 1040, U.S. Individual Income Tax Return, from IRS's Statistics of Income Division (SOI). To assess the reliability of the SOI data that we analyzed, we reviewed agency documentation and interviewed officials familiar with the data. We determined that these data were sufficiently reliable for our purposes. IRS's first offshore program started in 2003 as part of an ongoing, multipronged effort to counter offshore tax evasion. Related to the 2003 program was the Offshore Credit Card Program, which stemmed from a series of John Doe summonses issued to a variety of financial and commercial businesses to obtain information on U.S. persons who held credit, debit, or other payment cards issued by offshore banks. IRS used records from the summonses to trace the identities of taxpayers whose use of these payment cards may have been related to hiding taxable income; this drew many other taxpayers to the offshore program. (See figure 5 for a timeline of key events.) The Internal Revenue Code provides whistleblowers with a significant financial incentive to report noncompliance. It provides for awards up to 30 percent of the collected proceeds that arise from the whistleblower's information. 26 U.S.C. § 7623. A whistleblower is someone who reports information on potential tax problems, such as fraud, to the IRS. Although not publicly confirmed by IRS, attorneys for the UBS whistleblower reported that he was awarded $104 million. For additional information on tax whistleblowers, see GAO, Tax Whistleblowers: Incomplete Data Hinders IRS's Ability to Manage Claim Processing Time and Enhance External Communication, GAO-11-683 (Washington, D.C.: Aug. 10, 2011). 
The specific criteria by which the 4,450 accounts would be selected were not revealed until after the 2009 OVDP deadline passed. This created uncertainty among UBS account holders as to whether their names were on the list to be disclosed. IRS gave taxpayers until October 15, 2009, to enter the program. IRS publicity about the program, and correspondence sent by UBS to all U.S. account holders, emphasized the several criminal and civil penalties applicable to taxpayers who did not make voluntary disclosures before Switzerland turned over the account data. The 2011 and 2012 programs had a similar draw for taxpayers. During the 2011 program, IRS and DOJ were building cases against tax evasion involving foreign banks in several countries, including Switzerland, Liechtenstein, Israel, and India. Many 2011 program participants came forward as a result of criminal enforcement activity and a John Doe summons issued to HSBC, a global banking and financial services firm headquartered in the United Kingdom, with significant business operations in Hong Kong and Asia. The 2012 program, which is still open and as of March 2013 does not have an end date, is expected to draw participants based on further criminal enforcement activity against foreign banks and additional John Doe summonses being developed by IRS and DOJ using information from past offshore programs. Also during this time, as the Foreign Account Tax Compliance Act (FATCA) becomes fully implemented, IRS expects to have increased information reporting from certain taxpayers and from foreign financial institutions on offshore accounts. The 2009 Offshore Voluntary Disclosure Program (OVDP) penalties follow what some tax practitioners have called "rough justice" because of the relationship between the offshore penalties and the original taxes evaded. Figure 6 illustrates how two hypothetical offshore accounts bearing 5 percent interest might grow over time. 
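The compounding behind the two hypothetical accounts in figure 6 can be simulated with a few lines of code. This is a simplified sketch: it assumes annual compounding, a 35 percent tax rate, and that the compliant owner pays each year's tax out of the account; the 23-year horizon for a 1986 deposit held through 2009 is also an assumption about how the figure counts years.

```python
def grow(principal, years, rate=0.05, tax_rate=0.35, compliant=True):
    """Simulate an account earning `rate` annually; a compliant owner
    pays tax on each year's interest out of the account balance."""
    balance, taxes_paid = float(principal), 0.0
    for _ in range(years):
        interest = balance * rate
        tax = interest * tax_rate if compliant else 0.0
        balance += interest - tax
        taxes_paid += tax
    return balance, taxes_paid

# $1 million deposited in 1986 and held through 2009:
compliant_balance, cumulative_tax = grow(1_000_000, 23)
evader_balance, _ = grow(1_000_000, 23, compliant=False)
# compliant_balance comes to roughly $2.1 million, cumulative_tax to
# roughly $585,000, and evader_balance to roughly $3.1 million.
```

The same function with `years=5` reproduces the 2004-opening variant of the hypothetical: roughly $93,000 in taxes paid and a $1.2 million balance for the compliant owner versus $1.3 million for the evader.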
One account is owned by a compliant taxpayer who reports the interest income and pays U.S. taxes at a 35 percent rate with earnings from the account. The other account is owned by a noncompliant taxpayer who does not report the interest income. Assuming both taxpayers deposited $1 million in 1986, the compliant taxpayer would accumulate a balance of approximately $2.1 million by 2009 and the noncompliant taxpayer would accumulate $3.1 million. The compliant taxpayer would have paid tax in each year the account was open, totaling about $585,000 in cumulative taxes on the reported account’s interest over 23 years. A noncompliant taxpayer who participated in the 2009 OVDP would, after disclosing the account, make a one-time payment in 2009 of about $993,000 in taxes, interest, and penalties. Although the 2009 OVDP participant would pay more in total taxes and penalties, the final account balances for both taxpayers would be roughly the same. Using the same hypothetical model from figure 6 can help illustrate how taxpayers with newer offshore accounts that have not accumulated decades of untaxed interest income are treated. Assuming the hypothetical accounts in figure 6 were opened in 2004 (instead of 1986), the compliant taxpayer would have paid about $93,000 in taxes on the interest income and accumulated a balance of about $1.2 million by 2009, and the noncompliant taxpayer paying no taxes would have accumulated about $1.3 million. If the noncompliant taxpayer came forward through the 2009 OVDP, the penalties, interest, and delinquent taxes would have totaled about $387,000. The 2009 OVDP participant’s ending account balance would be about $890,000, which is less than the original opening deposit amount. We identified 200 2009 OVDP participants with an Employer Identification Number (EIN), which is used by IRS to identify a business entity. We did not have complete information on all of the businesses in our sample. 
In addition, not all of the businesses had filing requirements in every year covered by the 2009 OVDP. Table 4 shows the tax forms filed by some of the businesses in tax year 2008, and table 5 shows the self-reported North American Industry Classification System (NAICS) code. Taxpayers participating in the 2009 OVDP most often used the married filing jointly filing status, were most often age 55 and over, and had an average adjusted gross income of about $528,000, as shown in table 6. As noted in appendix I, we used a standard data collection instrument to capture information from a sample of 30 2009 OVDP cases in which taxpayers received offshore penalties of $1 million or greater. We then analyzed the results to identify main themes and develop the summary findings presented in this report. This appendix contains information from our case file reviews. We calculated offshore account balances based on penalty information. For our sample of 30 cases, the average account balance was almost $15 million, as shown in table 7 with other key information. Most of the 30 cases we reviewed contained some information about the bank names and country locations of the offshore accounts. In some cases, 2009 OVDP participants disclosed dozens of offshore accounts with multiple banks and in multiple countries; in other cases, participants reported only one account. Only those offshore accounts that were open in tax year 2003 through tax year 2008 were included in the calculation of the 20 percent 2009 OVDP penalty. In compiling our profile, we only included information on accounts that were open during the 2009 OVDP applicable period and included in the penalty calculation. (Some participants disclosed additional offshore accounts that were closed prior to 2003 and not part of the 2009 OVDP penalty calculation.) Figure 7 illustrates the most commonly disclosed country locations. 
A total of 17 different locations were noted in the 28 cases that disclosed locations, with Switzerland being the most commonly reported location. Figure 8 illustrates the most commonly disclosed bank names. A total of 42 different banks were reported in the 29 cases that contained bank name information, with UBS by far the most commonly disclosed bank name, followed by Swiss banks Julius Baer and Credit Suisse. Twenty-two of the case files we reviewed contained information about the history of the accounts and the nature of the taxpayer's noncompliance. Many of the accounts had been opened decades ago. The median period of time that participants had owned but not reported income from these accounts was 18 years, and the average period was 25 years. In four cases, the participants had owned offshore accounts for 50 years or longer. Table 8 summarizes key information from the data collection instrument we used to collect information on the 30 offshore case files we reviewed. James R. White, (202) 512-9110 or whitej@gao.gov. In addition to the contact named above, Mark Abraham, Tara Carter (Analyst-In-Charge), Andrew Ching, Leon Green, Mark Kehoe, and Libby Mixon (Assistant Director) made contributions to the report. Jeff Arkin, Chuck Fox, Robert Gebhart, George Guttman, Brian James, Sarah McGrath, Donna Miller, John Mingus, Ed Nannenhorn, Karen O'Conor, Robert Robinson, Cynthia Saunders, Andrew Stephens, Wayne Turowski, Jim Ungvarsky, and John Zombro provided key assistance.
Tax evasion by individuals with unreported offshore financial accounts was estimated by one IRS commissioner to be several tens of billions of dollars, but no precise figure exists. IRS has operated four offshore programs since 2003 that offered incentives for taxpayers to disclose their offshore accounts and pay delinquent taxes, interest, and penalties. GAO was asked to review IRS's second offshore program, the 2009 OVDP. This report (1) describes the nature of the noncompliance of 2009 OVDP participants, (2) determines the extent to which IRS used the 2009 OVDP to prevent noncompliance, and (3) assesses IRS's efforts to detect taxpayers trying to circumvent taxes, interest, and penalties that would otherwise be owed. To address these objectives, GAO analyzed tax return data for all 2009 OVDP participants and exam files for a random sample of cases with penalties over $1 million; interviewed IRS Offshore officials; and developed and implemented a methodology to detect taxpayers circumventing monies owed. As of December 2012, the Internal Revenue Service's (IRS) four offshore programs have resulted in more than 39,000 disclosures by taxpayers and over $5.5 billion in revenues. The offshore programs attract taxpayers by offering a reduced risk of criminal prosecution and lower penalties than if the unreported income was discovered by one of IRS's other enforcement programs. For the 2009 Offshore Voluntary Disclosure Program (OVDP), nearly all program participants received the standard offshore penalty--20 percent of the highest aggregate value of the accounts--meaning the account value was greater than $75,000 and taxpayers used the accounts (e.g., made deposits or withdrawals) during the period under review. The median account balance of the more than 10,000 cases closed so far from the 2009 OVDP was $570,000. 
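The standard-penalty arithmetic can be illustrated in a few lines. This is a deliberate simplification: actual 2009 OVDP closings also included delinquent taxes, interest, and in some circumstances reduced penalty rates, none of which are modeled here.

```python
def standard_offshore_penalty(highest_aggregate_value, rate=0.20):
    """2009 OVDP standard penalty: 20 percent of the highest aggregate
    account value during the period under review (simplified)."""
    return rate * highest_aggregate_value

# Penalty on the median closed-case balance of $570,000:
print(round(standard_offshore_penalty(570_000)))  # 114000
# Highest aggregate value implied by a $1 million standard penalty:
print(round(1_000_000 / 0.20))  # 5000000
```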
Participant cases with offshore penalties greater than $1 million represented about 6 percent of all 2009 OVDP cases, but accounted for almost half of all offshore penalties. Taxpayers from these cases disclosed a variety of reasons for having offshore accounts, and more than half of them had accounts at Swiss bank UBS. Using 2009 OVDP data, IRS identified bank names and account locations that helped it pursue additional noncompliance. Based on a review of cases, GAO found examples of immigrants who stated in their 2009 OVDP applications that they were unaware of their offshore reporting requirements. IRS officials from the Offshore Compliance Initiative office said they have not targeted outreach efforts to new immigrants. Using information from the 2009 OVDP, such as the characteristics of taxpayers who were not aware of their reporting requirements, to increase education and outreach to those populations could promote voluntary compliance. IRS has detected some taxpayers with previously undisclosed offshore accounts attempting to circumvent paying the taxes, interest, and penalties that would otherwise be owed, but based on GAO's review of IRS data, IRS may be missing attempts by other taxpayers to do so. GAO analyzed amended returns filed for tax year 2003 through tax year 2008, matched them to other information available to IRS about taxpayers' possible offshore activities, and found many more potential quiet disclosures than IRS detected. Moreover, IRS has not researched whether sharp increases in taxpayers reporting offshore accounts for the first time are due to efforts to circumvent monies owed, thereby missing opportunities to help ensure compliance. From tax year 2007 through tax year 2010, IRS estimates that the number of taxpayers reporting foreign accounts nearly doubled to 516,000. 
Taxpayer attempts to circumvent taxes, interest, and penalties by not participating in an offshore program, but instead simply amending past returns or reporting on current returns previously unreported offshore accounts, result in lost revenues and undermine the programs' effectiveness. Among other things, GAO recommends that IRS (1) use offshore data to identify and educate taxpayers who might not be aware of their reporting requirements; (2) explore options for employing a methodology to more effectively detect and pursue quiet disclosures and implement the best option; and (3) analyze first-time offshore account reporting trends to identify possible attempts to circumvent monies owed and take action to help ensure compliance. IRS agreed with all of GAO's recommendations.
We recently issued several reports on acquisition spending and workforce trends. These reports show that spending on services acquisitions is increasing at a time when the acquisition workforce is decreasing. Our report on spending and workforce trends in federal procurement shows that federal agencies continue to buy far more services than goods. Since 1997, spending on services has grown 11 percent. In fiscal year 2001, over 60 percent of the more than $220 billion in goods and services purchased by the federal government was for services. At six agencies, procurement of services exceeded 75 percent of their total spending on contracts; at one agency, the Department of Energy, nearly 100 percent of total spending via contracts was for services (see fig. 1). Spending on services could increase even further, at least in the short term, given the President’s recent request for additional funds for defense and homeland security. The degree to which individual agencies are currently contracting for services and the growth of services spending underscore the importance of ensuring that service acquisitions are managed properly. Industry and government experts alike recognize that the key to a successful transformation toward a more effective acquisition system is having the right people with the right skills. To increase the efficiency and effectiveness of acquiring goods and services, the government is relying more on judgment and initiative versus rigid rules to make purchasing decisions. Agencies have to address governmentwide reductions in the acquisition workforce. At the same time, government contract actions exceeding $25,000 have increased significantly—by 26 percent between fiscal years 1997 and 2001 (see table 1). Over the past year, GAO issued four reports on the management and training of the government’s acquisition workforce. 
While the agencies we reviewed are taking steps to address their future acquisition workforce needs, each is encountering challenges in its efforts. In particular, shifting priorities, missions, and budgets have made it difficult for agencies to predict, with certainty, the specific skills and competencies the acquisition workforce may need. Training is critical in ensuring that the acquisition workforce has the right skills. To deliver training effectively, leading organizations typically prioritize and set requirements for those in need of training to ensure their training reaches the right people. Agencies we reviewed had developed specific training requirements for their acquisition workforce and had efforts underway to make training available and raise awareness of major acquisition initiatives. However, they did not have processes for ensuring that training reaches all those who need it. And while agencies had also developed a variety of systems to track the training of their personnel, they experienced difficulties with these systems. We have issued a number of reports on key provisions of SARA. These reports address acquisition leadership, workforce, and contract innovations, as well as other proposals. Our discussions with officials from leading companies, which we reported on last year, indicate that a procurement executive or Chief Acquisition Officer plays a critical role in changing an organization's culture and practices. In response to many of the same challenges faced by the federal government—such as a lack of tools to ensure they receive the best value over time—each of the companies we studied changed how they acquired services in significant ways. For example, each elevated or expanded the role of the company's procurement organization; designated "commodity" managers to oversee key services; and/or made extensive use of cross-functional teams. Taking a strategic approach paid off. 
One official, for example, estimated that his company saved over $210 million over a recent 5-year period by pursuing a more strategic approach. Bringing about these new ways of doing business, however, was challenging. To overcome these challenges, the companies found they needed to have sustained commitment from their senior leadership—first, to provide the initial impetus to change, and second, to keep up the momentum. Section 201 of SARA would create a Chief Acquisition Officer (CAO) within each civilian executive agency. We support this provision. By granting the CAO clear lines of authority, accountability, and responsibility for acquisition decision-making, SARA takes an approach similar to that of leading companies in terms of the responsibility and decision-making authority of these individuals. Comptroller General David Walker testified earlier this month that strategic human capital management must be the centerpiece of any serious government transformation effort and that federal workers can be an important part of the solution to the overall transformation effort. In July 2001, he recommended that Congress explore greater flexibilities to allow federal agencies to enhance their skills mix by leveraging the expertise of private sector employees through innovative fellowship programs. The acquisition professional exchange program proposed in section 103 of SARA could enhance the ability of federal workers to successfully transform the way the federal government acquires services. The program, which is modeled after the Information Technology Exchange Program included in the recently passed E-Government Act of 2002, would permit the temporary exchange of high-performing acquisition professionals between the federal government and participating private-sector entities. 
We support this provision, which begins to address a key question we face in the federal government: Do we have today, or will we have tomorrow, the ability to manage the procurement of the increasingly sophisticated services the government needs? Following a decade of downsizing and curtailed investments in human capital, federal agencies currently face skills, knowledge, and experience imbalances that, without corrective action, will worsen. The program established by section 103 would allow federal agencies to gain from the knowledge and expertise of private-sector professionals and entities.

Section 102 of SARA would establish an acquisition workforce training fund using five percent of the fees generated by governmentwide contract programs. We recently completed a review of fees charged on governmentwide contracts—covering all five designated executive agencies for governmentwide acquisition contracts and the General Services Administration’s Schedules program. The Office of Management and Budget’s guidance directs agencies operating governmentwide information technology contracts to transfer fees in excess of costs to the miscellaneous receipts account of the U.S. Treasury’s General Fund. Further, some of these contracts operate under revolving fund statutes that limit the use of fees to the authorized purposes of the funds. Quality training is important, and we recognize the need for adequate funds for training. In our view, however, the procuring agencies should ensure that adequate funding is available through the normal budgeting process to provide the training the acquisition workforce needs. We are concerned about relying on contract program fees, which can vary from year to year and which are intended to cover other requirements, as a source of funding for such an important priority as workforce training.

Several sections of SARA would encourage the use of innovative contract types that could provide savings to the government. 
For example, performance-based contracts can offer significant benefits, such as encouraging contractors to find cost-effective ways of delivering services. Share-in-savings contracting, one type of performance-based contracting, is an agreement in which a client compensates a contractor from the financial benefits derived as a result of the contract performance. Share-in-savings contracting can motivate contractors to generate savings and revenues for their clients. We issued a report earlier this year in response to your request that we determine how the commercial sector uses share-in-savings contracting. We examined four commercial share-in-savings contracts and identified common characteristics that made them successful. In the commercial share-in-savings contracts we reviewed, we found four conditions that facilitated success:

An expected outcome is clearly specified. By outcomes, we mean such things as generating savings by eliminating inefficient business practices or identifying new revenue centers. It is critical that a client and contractor have a clear understanding of what they are trying to achieve.

Incentives are defined. Both the client and contractor need to strike a balance between the level of risk and reward they are willing to pursue.

Performance measures are established. By its nature, share-in-savings cannot work without having a baseline and good performance measures to gauge exactly what savings or revenues are being achieved. Agreement must be reached on how metrics are linked to contractor intervention.

Top management commitment is secured. A client’s top executives need to provide contractors with the authority needed to carry out solutions, since change from the outside is often met with resistance. They also need to help sustain a partnership over time since relationships between the contractor and client can be tested in the face of changing market conditions and other barriers. 
The companies in our study found that successful arrangements have generated savings and revenues. In one case highlighted in our report, $980,000 was realized in annual energy savings.

We have not found share-in-savings contracting to be widespread in the commercial sector or the federal government. Excluding the energy industry, we found limited references to companies or state agencies that use or have used the share-in-savings concept. In addition, there are few documented examples of share-in-savings contracting in the federal government. Officials in federal agencies we spoke with noted that such arrangements may be difficult to pursue given potential resistance and the lack of good baseline performance data. In addition, in previous work, Department of Energy headquarters officials told us they believe such contracts can be best used when federal funding is unavailable. To achieve the potential benefits from the use of share-in-savings contracting, it may be worthwhile to examine ways to overcome potential issues. For example, in a letter to the Office of Federal Procurement Policy in March of this year, we recognized that share-in-savings contracting represents a significant change in the way the federal government acquires services. To address this challenge, we underscored the need for the Office of Federal Procurement Policy to develop guidance and policies that could ensure that (1) appropriate data are collected and available to meet mandated reporting requirements regarding the effective use of share-in-savings contracting, and (2) members of the federal acquisition workforce understand and appropriately apply this new authority.

Section 401 authorizes agencies to treat a contract or task order as being for a commercial item if it is performance-based—that is, it describes each task in measurable, mission-related terms, and identifies the specific outputs—and the contractor provides similar services and terms to the public. 
This provision, which would only apply if the contract or task order were valued at $5 million or less, would provide another tool to promote greater use of performance-based contracting. Our spending and workforce trends report shows that in fiscal year 2001, agencies reported that 24 percent of their eligible service contracts, by dollar value, were performance-based. However, there was wide variation in the extent to which agencies used performance-based contracts. As figure 2 shows, 3 of the 10 agencies in our review fell short of the Office of Management and Budget’s goal that 10 percent of eligible service contracts be performance-based. In our September 2002 report, we recommended that the Administrator of the Office of Federal Procurement Policy clarify existing guidance to ensure that performance-based contracting is appropriately used, particularly when acquiring more unique and complex services that require strong government oversight. If section 401 is enacted, we believe that clear guidance will be needed to ensure effective implementation. The center for excellence to identify best practices in service contracting, required by section 401, could assist the Office of Federal Procurement Policy in developing and updating meaningful guidance. A center for excellence may also help federal agencies learn about successful ways to implement performance-based contracting.

Section 501 would authorize those civilian agencies approved by the Office of Management and Budget to use so-called “other transactions” for projects related to defense against or recovery from terrorism, or nuclear, biological, chemical, or radiological attacks. Other transactions are agreements that are not contracts, grants, or cooperative agreements. This authority would be similar to that currently available to the Departments of Homeland Security and Defense. 
Because statutes that apply only to procurement contracts do not apply to other transactions, this authority may be useful to agencies in attracting firms that traditionally decline to do business with the government. In fact, our work shows that the Department of Defense has had some success in using other transactions to attract nontraditional firms to do business with the government. Our work also has shown, however, that there is a critical need for guidance on when and how other transactions may best be used. The guidance developed by the Department of Defense may prove helpful to other agencies should the Congress decide to expand the availability of other transaction authority.

Section 211 provides for a streamlined payment process under which service contractors could submit invoices for payment on a biweekly or a monthly basis. Biweekly invoices would have to be submitted electronically. While we support the intent of this proposal—to make payments to government contractors more timely—implementation of this provision could result in increased improper payments and stress already weak systems and related internal controls. Agency efforts to address improper payment problems have been hampered by high payment volume, speed of service, inadequate payment systems and processes, internal control weaknesses, and downsizing in the acquisition and financial management community. Until federal agencies make significant progress in eliminating their payment problems, requirements to accelerate service contract payments would likely increase the risk of payment errors, backlogs, and late payment interest.

Section 213 would provide for agency-level protests of acquisition decisions alleged to violate law or regulation. An agency would have 20 working days to issue a decision on a protest, during which time the agency would be barred from awarding a contract or continuing with performance if a contract already had been awarded. 
If an agency-level protest were denied, a subsequent protest to GAO that raised the same grounds and was filed within 5 days would trigger a further stay pending resolution of that protest. We believe that a protest process that is effective, expeditious, and independent serves the interests of all those involved in or affected by the procurement system. Section 213 appears to address each of these criteria. First, although protests currently may be filed with the procuring agencies, section 213 would provide for a more effective agency-level protest process by requiring that an agency suspend, or “stay,” the procurement until the protest is resolved. Second, the process would be relatively expeditious because decisions would be required within 20 working days. Having an expeditious process at the agency is especially important because section 213 would provide for a stay both during the agency-level protest and then during any subsequent GAO protest. It should be noted, though, that 20 working days may not be adequate for a thorough review, particularly in complex procurements. Finally, requiring protests to be decided by the head of the agency may help to mitigate longstanding concerns about a perceived lack of independence when decisions on agency-level protests are issued by officials closely connected with the decision being protested.

Section 402 would provide for a change to the Federal Acquisition Regulation to include the use of time-and-materials and labor-hour contracts for commercial services commonly sold to the general public. This change would make it clear that such contracts are specifically authorized for commercial services. The Federal Acquisition Regulation states that a time-and-materials contract may be used only when it is not possible to estimate accurately the extent or duration of the work or to anticipate costs with any reasonable degree of confidence. 
Therefore, adequate surveillance is required to give reasonable assurance that the contractor is using efficient methods and effective cost controls.

Section 404 would designate as a commercial item any product or service sold by a commercial entity that over the past 3 years made 90 percent of its sales to private sector entities. We are concerned that the provision allows for products or services that had never been sold or offered for sale in the commercial marketplace to be considered a commercial item. In such cases, the government may not be able to rely on the assurances of the marketplace in terms of the quality and pricing of the product or service.

The growth in spending on service contracts, combined with decreases in the acquisition workforce and an increase in the number of high-dollar procurement actions, creates a challenging acquisition environment. It is important that agencies have the authorities and tools they need to maximize their performance in this new environment. The initiatives contained in SARA address a number of longstanding issues in contracting for services, and should enable agencies to improve their performance in this area. Mr. Chairman, this concludes my statement. I will be happy to answer any questions you may have.

Contact and Acknowledgments

For further information, please contact William T. Woods at (202) 512-4841. Individuals making key contributions to this testimony include Blake Ainsworth, Christina Cromley, Timothy DiNapoli, Gayle Fischer, Paul Greeley, Oscar Mardis, and Karen Sloan.
Since 1997, federal spending on services has grown 11 percent and now represents more than 60 percent of contract spending governmentwide. Several significant changes in the government—including funding for homeland security—are expected to further increase spending on services. Adjusting to this new environment has proven difficult. Agencies need to improve in a number of areas: sustaining executive leadership, strengthening the acquisition workforce, and encouraging innovative contracting approaches. Improving these areas is a key goal of the proposed Services Acquisition Reform Act (SARA). The growth in spending on service contracts, combined with decreases in the acquisition workforce and an increase in the number of high-dollar procurement actions, creates a challenging acquisition environment. It is important that agencies have the authorities and tools they need to maximize their performance in this new environment. The initiatives contained in SARA address a number of longstanding issues in contracting for services, and should enable agencies to improve their performance in this area. For example: (1) Section 201: Chief Acquisition Officers: Appointing a Chief Acquisition Officer would establish a clear line of authority, accountability, and responsibility for acquisition decisionmaking; and (2) Section 103: Government-Industry Exchange Program: A professional exchange program would allow federal agencies to gain from the knowledge and expertise of the commercial acquisition workforce. At the same time, GAO is concerned about some provisions in SARA. For example: (1) Section 211: Ensuring Efficient Payment: While GAO supports the intent of this proposal to make payments to government contractors more timely, GAO has reservations concerning its implementation. GAO's work shows that agencies have been hampered by problems such as high payment volume, inadequate payment systems, and weak controls.
While providing many benefits to our economy and citizens’ lives, financial services activities can also cause harm if left unsupervised. As a result, the United States and many other countries have found that regulating financial markets, institutions, and products is more efficient and effective than leaving the fairness and integrity of these activities to be ensured solely by market participants themselves. The federal laws related to financial regulation set forth specific authorities and responsibilities for regulators, although these authorities typically do not contain provisions explicitly linking such responsibilities to overall goals of financial regulation. Nevertheless, financial regulation generally has sought to achieve four broad goals:

Ensure adequate consumer protections. Because financial institutions’ incentives to maximize profits can in some cases lead to sales of unsuitable or fraudulent financial products, or unfair or deceptive acts or practices, U.S. regulators take steps to address informational disadvantages that consumers and investors may face, ensure consumers and investors have sufficient information to make appropriate decisions, and oversee business conduct and sales practices to prevent fraud and abuse.

Ensure the integrity and fairness of markets. Because some market participants could seek to manipulate markets to obtain unfair gains in a way that is not easily detectable by other participants, U.S. regulators set rules for and monitor markets and their participants to prevent fraud and manipulation, limit problems in asset pricing, and ensure efficient market activity.

Monitor the safety and soundness of institutions. Because markets sometimes lead financial institutions to take on excessive risks that can have significant negative impacts on consumers, investors, and taxpayers, regulators oversee risk-taking activities to promote the safety and soundness of financial institutions. 
Act to ensure the stability of the overall financial system. Because shocks to the system or the actions of financial institutions can lead to instability in the broader financial system, regulators act to reduce systemic risk in various ways, such as by providing emergency funding to troubled financial institutions.

Although these goals have traditionally been their primary focus, financial regulators are also often tasked with achieving other goals as they carry out their activities. These can include promoting economic growth, capital formation, and competition in our financial markets. Regulators have also taken actions with an eye toward ensuring the competitiveness of regulated U.S. financial institutions with those in other sectors or with others around the world. In other cases, financial institutions may be required by law or regulation to foster social policy objectives such as fair access to credit and increased home ownership. In general, these goals are reflected in statutes, regulations, and administrative actions, such as rulemakings or guidance, by financial institution supervisors. Laws and regulatory agency policies can set a greater priority on some roles and missions than others.

Regulators are usually responsible for multiple regulatory goals and often prioritize them differently. For example, state and federal bank regulators generally focus on the safety and soundness of depository institutions; federal securities and futures regulators focus on the integrity of markets and the adequacy of information provided to investors; and state securities regulators primarily address consumer protection. State insurance regulators focus on the ability of insurance firms to meet their commitments to the insured. The degrees to which regulators oversee institutions, markets, or products also vary depending upon, among other things, the regulatory approach Congress has fashioned for different sectors of the financial industry. 
For example, some institutions, such as banks, are subject to comprehensive regulation to ensure their safety and soundness. Among other things, they are subject to examinations and limitations on the types of activities they may conduct. Other institutions conducting financial activities are less regulated, such as by only having to register with regulators or by having less extensive disclosure requirements. Moreover, some markets, such as those for many over-the-counter derivatives, as well as activities within those markets, are not subject to regulatory oversight at all.

As a result of 150 years of changes in financial regulation in the United States, the regulatory system has become complex and fragmented. (See fig. 1.) Our regulatory system has multiple financial regulatory bodies, including five federal and multiple state agencies that oversee depository institutions. Securities activities are overseen by federal and state government entities, as well as by private sector organizations performing self-regulatory functions. Futures trading is overseen by a federal regulator and also by industry self-regulatory organizations. Insurance activities are primarily regulated at the state level with little federal involvement. Overall, responsibilities for overseeing the financial services industry are shared among almost a dozen federal banking, securities, futures, and other regulatory agencies, numerous self-regulatory organizations (SRO), and hundreds of state financial regulatory agencies.

The following sections describe how regulation evolved in various sectors, including banking, securities, thrifts, credit unions, futures, insurance, secondary mortgage markets, and other financial institutions. The accounting and auditing environment for financial institutions, and the role of the Gramm-Leach-Bliley Act in financial regulation, are also discussed. 
Since the early days of our nation, banks have allowed citizens to store their savings and used these funds to make loans to spur business development. Until the middle of the 1800s, banks were chartered by states and state regulators supervised their activities, which primarily consisted of taking deposits and issuing currency. However, the existence of multiple currencies issued by different banks, some of which were more highly valued than others, created difficulties for the smooth functioning of economic activity. In an effort to finance the nation’s Civil War debt and reduce financial uncertainty, Congress passed the National Bank Act of 1863, which provided for issuance of a single national currency. This act also created the Office of the Comptroller of the Currency (OCC), which was to oversee the national currency and improve banking system efficiency by granting banks national charters to operate and conducting oversight to ensure the sound operations of these banks. As of 2007, of the more than 16,000 depository institutions subject to federal regulation in the United States, OCC was responsible for chartering, regulating, and supervising nearly 1,700 commercial banks with national charters.

In the years surrounding 1900, the United States experienced troubled economic conditions and several financial panics, including various instances of bank runs as depositors attempted to withdraw their funds from banks whose financial conditions had deteriorated. To improve the liquidity of the U.S. banking sector and reduce the potential for such panics and runs, Congress passed the Federal Reserve Act of 1913. 
This act created the Federal Reserve System, which consists of the Board of Governors of the Federal Reserve System (Federal Reserve) and 12 Federal Reserve Banks, which are congressionally chartered semiprivate entities that undertake a range of actions on behalf of the Federal Reserve, including supervision of banks and bank holding companies, and lending to troubled banks. The Federal Reserve was given responsibility to act as the federal supervisory agency for state-chartered banks—banks authorized to do business under charters issued by states—that are members of the Federal Reserve System. In addition to supervising and regulating bank and financial holding companies and nearly 900 state-chartered banks, the Federal Reserve also develops and implements national monetary policy, and provides financial services to depository institutions, the U.S. government, and foreign official institutions, including playing a major role in operating the nation’s payments system.

Several significant changes to the U.S. financial regulatory system were again made as a result of the turbulent economic conditions in the late 1920s and 1930s. In response to numerous bank failures resulting in the severe contraction of economic activity of the Great Depression, the Banking Act of 1933 created the Federal Deposit Insurance Corporation (FDIC), which administers a federal program to insure the deposits of participating banks. Subsequently, FDIC’s deposit insurance authority expanded to include thrifts. Additionally, FDIC provides primary federal oversight of any insured state-chartered banks that are not members of the Federal Reserve System, and it serves as the primary federal regulator for over 5,200 state-chartered institutions. Finally, FDIC has backup examination and enforcement authority over all of the institutions it insures in order to mitigate losses to the deposit insurance funds. 
Prior to the 1930s, securities markets were overseen by various state securities regulatory bodies and the securities exchanges themselves. In the aftermath of the stock market crash of 1929, the Securities Exchange Act of 1934 created a new federal agency, the Securities and Exchange Commission (SEC), and gave it authority to register and oversee securities broker-dealers, as well as securities exchanges, to strengthen securities oversight and address inconsistent state securities rules. In addition to regulation by SEC and state agencies, securities markets and the broker-dealers that accept and execute customer orders in these markets continue to be regulated by SROs, including those of the exchanges and the Financial Industry Regulatory Authority, that are funded by the participants in the industry. Among other things, these SROs establish rules and conduct examinations related to market integrity and investor protection. SEC also registers and oversees investment companies and advisers, approves rules for the industry, and conducts examinations of broker-dealers and mutual funds. State securities regulators—represented by the North American Securities Administrators Association—are generally responsible for registering certain securities products and, along with SEC, investigating securities fraud. SEC is also responsible for overseeing the financial reporting and disclosures that companies issuing securities must make under U.S. securities laws. SEC was also authorized to issue and oversee U.S. accounting standards for entities subject to its jurisdiction, but has delegated the creation of accounting standards to a private-sector organization, the Financial Accounting Standards Board, which establishes generally accepted accounting principles.

The economic turmoil of the 1930s also prompted the creation of federal regulators for other types of depository institutions, including thrifts and credit unions. 
These institutions previously had been subject to oversight only by state authorities. However, the Home Owners’ Loan Act of 1933 empowered the newly created Federal Home Loan Bank Board to charter and regulate federal thrifts, and the Federal Credit Union Act of 1934 created the Bureau of Federal Credit Unions to charter and supervise credit unions. Congress amended the Federal Credit Union Act in 1970 to establish the National Credit Union Administration (NCUA), which is responsible for chartering and supervising over 5,000 federally chartered credit unions, as well as insuring deposits in these and more than 3,000 state-chartered credit unions. Oversight of these state-chartered credit unions is managed by 47 state regulatory agencies, represented by the National Association of State Credit Union Supervisors. From 1980 to 1990, over 1,000 thrifts failed at a cost of about $100 billion to the federal deposit insurance funds. In response, the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 abolished the Federal Home Loan Bank Board and, among other things, established the Office of Thrift Supervision (OTS) to improve thrift oversight. OTS charters about 750 federal thrifts and oversees these and about 70 state-chartered thrifts, as well as savings and loan holding companies.

Oversight of the trading of futures contracts, which allow their purchasers to buy or sell a specific quantity of a commodity for delivery in the future, has also changed over the years in response to changes in the marketplace. Under the Grain Futures Act of 1922, the trading of futures contracts was overseen by the Grain Futures Administration, an office within the Department of Agriculture, reflecting the nature of the products for which futures contracts were traded. 
However, futures contracts were later created for nonagricultural commodities, such as energy products like oil and natural gas, metals such as gold and silver, and financial products such as Treasury bonds and foreign currencies. In 1974, a new independent federal agency, the Commodity Futures Trading Commission (CFTC), was created to oversee the trading of futures contracts. Like SEC, CFTC relies on SROs, including the futures exchanges and the National Futures Association, to establish and enforce rules governing member behavior. The Commodity Futures Modernization Act of 2000 established a principles-based structure for the regulation of futures exchanges and derivatives clearing organizations, and clarified that some off-exchange derivatives trading—and in particular trading on facilities only accessible to large, sophisticated traders—was permitted and would be largely unregulated or exempt from regulation.

Unlike most other financial services, insurance activities traditionally have been regulated at the state level. In 1944, a U.S. Supreme Court decision determined that the insurance industry was subject to interstate commerce laws, which could then have allowed for federal regulation, but Congress passed the McCarran-Ferguson Act in 1945 to explicitly return insurance regulation to the states. As a result, as many as 55 state, territorial, or other local jurisdiction authorities oversee insurance activities in the United States, although state regulations and other activities are often coordinated nationally by the National Association of Insurance Commissioners (NAIC).

The recent financial crisis in the credit and housing markets has prompted the creation of a new, unified federal financial regulatory oversight agency, the Federal Housing Finance Agency (FHFA), to oversee the government-sponsored enterprises (GSE) Fannie Mae, Freddie Mac, and the Federal Home Loan Banks. 
Fannie Mae and Freddie Mac are private, federally chartered companies created by Congress to, among other things, provide liquidity to home mortgage markets by purchasing mortgage loans, thus enabling lenders to make additional loans. The system of 12 Federal Home Loan Banks provides funding to support housing finance and economic development. Until enactment of the Housing and Economic Recovery Act of 2008, Fannie Mae and Freddie Mac had been overseen since 1992 by the Office of Federal Housing Enterprise Oversight (OFHEO), an agency within the Department of Housing and Urban Development (HUD), and the Federal Home Loan Banks were subject to supervision by the Federal Housing Finance Board (FHFB), an independent regulatory agency. OFHEO regulated Fannie Mae and Freddie Mac on matters of safety and soundness, while HUD regulated their mission-related activities. FHFB served as the safety and soundness and mission regulator of the Federal Home Loan Banks.

In July 2008, the Housing and Economic Recovery Act of 2008 created FHFA to establish more effective and more consistent oversight of the three housing GSEs—Fannie Mae, Freddie Mac, and the Federal Home Loan Banks. With respect to Fannie Mae and Freddie Mac, the law gives FHFA such new regulatory authorities as the power to regulate the retained mortgage portfolios, to set more stringent capital standards, and to place a failing entity in receivership. In addition, the law provides FHFA with funding outside the annual appropriations process. The law also combined the regulatory authorities for all the housing GSEs that were previously distributed among OFHEO, FHFB, and HUD. In September 2008, Fannie Mae and Freddie Mac were placed in conservatorship, with FHFA serving as the conservator under powers provided in the 2008 act. Treasury also created a backstop lending facility for the Federal Home Loan Banks, should they decide to use it. 
In November 2008, the Federal Reserve announced plans to purchase mortgage-backed securities guaranteed by Fannie Mae and Freddie Mac on the open market. Changes in the types of financial activities permitted for depository institutions and their affiliates have also shaped the financial regulatory system over time. Under the Glass-Steagall provisions of the Banking Act of 1933, financial institutions were prohibited from simultaneously offering commercial and investment banking services. However, in the Gramm-Leach-Bliley Act of 1999 (GLBA), Congress permitted financial institutions to fully engage in both types of activities and, in addition, provided a regulatory process allowing for the approval of new types of financial activity. Under GLBA, qualifying financial institutions are permitted to engage in banking, securities, insurance, and other financial activities. When these activities are conducted within the same bank holding company structure, they remain subject to regulation by “functional regulators,” which are the federal authorities having jurisdiction over specific financial products or services, such as SEC or CFTC. As a result, multiple regulators now oversee different business lines within a single institution. For example, broker-dealer activities are generally regulated by SEC even if they are conducted within a large financial conglomerate that is subject to the Bank Holding Company Act, which is administered by the Federal Reserve. The functional regulator approach was intended to provide consistency in regulation, focus regulatory restrictions on the relevant functional area, and avoid the potential need for regulatory agencies to develop expertise in all aspects of financial regulation. In addition to the creation of various regulators over time, the accounting and auditing environment for financial institutions and market participants—a key component of financial oversight—has also seen substantial change. 
In the early 2000s, various companies with publicly traded securities were found to have issued materially misleading financial statements. These companies included Enron and WorldCom, both of which filed for bankruptcy. When the actual financial conditions of these companies became known, their auditors were called into question, and one of the largest, Arthur Andersen, was dissolved after the Department of Justice filed criminal charges related to its audits of Enron. As a result of these and other corporate financial reporting and auditing scandals, the Sarbanes-Oxley Act of 2002 was enacted. Among other things, Sarbanes-Oxley expanded public company reporting and disclosure requirements and established new ethical and corporate responsibility requirements for public company executives, boards of directors, and independent auditors. The act also created a new independent public company audit regulator, the Public Company Accounting Oversight Board, to oversee the activities of public accounting firms. The activities of this board are, in turn, overseen by SEC. Some entities that provide financial services are not regulated by any of the existing federal financial regulatory bodies. For example, entities such as mortgage brokers, automobile finance companies, and payday lenders that are not bank subsidiaries or affiliates primarily are subject to state oversight, with the Federal Trade Commission acting as the primary federal agency responsible for enforcing their compliance with federal consumer protection laws. Several key developments in financial markets and products in the past few decades have significantly challenged the existing financial regulatory structure. (See fig. 2.) 
First, the last 30 years have seen waves of mergers among financial institutions within and across sectors, such that the United States, while still having large numbers of financial institutions, also has several very large globally active financial conglomerates that engage in a wide range of activities that have become increasingly interconnected. Regulating these large conglomerates has proven challenging, particularly in overseeing their risk management activities on a consolidated basis and in identifying and mitigating the systemic risks they pose. A second development has been the emergence of large and sometimes less-regulated market participants, such as hedge funds and credit rating agencies, which now play key roles in our financial markets. Third, the development of new and complex products and services has challenged regulators’ abilities to ensure that institutions are adequately identifying and acting to mitigate risks arising from these new activities and that investors and consumers are adequately informed of the risks. In light of these developments, ensuring that U.S. accounting standards have kept pace has also proved difficult, and the impending transition to conform to international accounting standards is likely to create additional challenges. Finally, despite the increasingly global aspects of financial markets, the current fragmented U.S. regulatory structure has complicated some efforts to coordinate internationally with other regulators. Overseeing large financial conglomerates that have emerged in recent decades has proven challenging, particularly in regulating their consolidated risk management practices and in identifying and mitigating the systemic risks they pose. These systemically important institutions in many cases have tens of thousands or more customers and extensive financial linkages with each other through loans, derivatives contracts, or trading positions with other financial institutions or businesses. 
The activities of these large financial institutions, as we have seen by recent events, can pose significant systemic risks to other market participants and the economy as a whole, but the regulatory system was not prepared to adequately anticipate and prevent such risks. Largely as the result of waves of mergers and consolidations, the number of financial institutions today has declined. However, the remaining institutions are generally larger and more complex, provide more and varied services, offer similar products, and operate in increasingly global markets. Among the most significant of these changes has been the emergence and growth of large financial conglomerates or universal banks that offer a wide range of products that cut across the traditional financial sectors of banking, securities, and insurance. A 2003 IMF study highlighted this emerging trend. Based on a worldwide sample of the top 500 financial services firms by assets, the study found that the percentage of the largest financial institutions in the United States that are conglomerates—financial institutions having substantial operations in more than one of the sectors (banking, securities, and insurance)—increased from 42 percent of the U.S. financial institutions in the sample in 1995 to 62 percent in 2000. This new environment contrasts with that of the past in which banks primarily conducted traditional banking activities such as deposit taking and lending; securities broker-dealers were largely focused on brokerage and underwriting activities; and insurance firms offered a more limited set of insurance products. 
In a report that analyzed the regulatory structures of various countries, the Group of Thirty noted that the last 25 years have been a period of enormous transformation in the financial services sector, with a marked shift from an industry of firms engaging in distinct banking, securities, and insurance businesses to one in which more integrated financial services conglomerates offer a broad range of financial products across the globe. These fundamental changes in the nature of the financial service markets around the world have exposed the shortcomings of financial regulatory models, some of which have not been adapted to the changes in business structures. While posing challenges to regulators, these changes have brought some benefits to the U.S. financial services industry. For example, the ability of financial institutions to offer products of varying types increased the options available to consumers for investing their savings and preparing for their retirement. Conglomeration has also made it more convenient for consumers to conduct their financial activities by providing opportunities for one-stop shopping for most or all of their needs, and by promoting the cross-selling of new, innovative products of which consumers may otherwise not have been aware. However, the rise of large financial conglomerates has also posed risks that our current financial regulatory system does not directly address. First, although the activities of these large interconnected financial institutions often cross traditional sector boundaries, financial regulators under the current U.S. regulatory system did not always have full authority or sufficient tools and capabilities to adequately oversee the risks that these financial institutions posed to themselves and other institutions. As we noted in a 2007 report, the activities of the Federal Reserve, SEC, and OTS to conduct consolidated supervision of many of the largest U.S. 
financial institutions were not as efficient and effective as needed because these agencies were not collaborating more systematically. In addition, the recent market crisis has revealed significant problems with certain aspects of these regulators’ oversight of financial conglomerates. For example, some of the top investment banks were subject to voluntary and limited oversight at the holding-company level—the level of the institution that generally managed its overall risks—as part of SEC’s Consolidated Supervised Entity (CSE) Program. SEC’s program was created in 2004 as a way for global investment bank conglomerates that lack a supervisor under law to voluntarily submit to regulation. This supervision, which could include SEC examinations of the parent companies’ and affiliates’ operations and monitoring of their capital levels, enabled the CSEs to qualify for alternative capital rules in exchange for consenting to supervision at the holding company level. Being subject to consolidated supervision was perceived as necessary for these financial institutions to continue operating in Europe under changes implemented by the European Union in 2005. However, according to a September 2008 report by SEC’s Inspector General, this supervisory program failed to effectively oversee these institutions for several reasons, including the lack of an effective mechanism for ensuring that these entities maintained sufficient capital. In comparison to commercial bank conglomerates, these investment banks were holding much less capital in relation to the activities exposing them to financial risk. For example, at the end of 2007, the five largest investment banks had assets to equity capital leverage ratios of between 26 and 34 to 1—meaning that for every dollar of capital capable of absorbing losses, these institutions held between $26 and $34 of assets subject to loss. 
In contrast, the largest commercial bank conglomerates, which were subject to different regulatory capital requirements, tended to be significantly less leveraged, with the average leverage ratio of the top five largest U.S. bank conglomerates at the end of 2007 only about 13 to 1. Moreover, because the program SEC used to oversee these investment bank conglomerates was voluntary, it had no authority to compel these institutions to address any problems that may have been identified. Instead, SEC’s only means for coercing an institution to take corrective actions was to disqualify an institution from CSE status. SEC also lacked the ability to provide emergency funding for these investment bank conglomerates in a similar way that the Federal Reserve could for commercial banks. As a result, these CSE firms, whose activities resulted in their being significant and systemically important participants with vast interconnections with other financial institutions, were more vulnerable to market disruptions that could create risks to the overall financial system, but not all were subject to full and consistent oversight by a supervisor with adequate authority and resources. For example, one of the ways that the bankruptcy filing of Lehman Brothers affected other institutions was that 25 money market fund advisers had to act to protect their investors against losses arising from their investments in that company’s debt, with at least one of these funds having to be liquidated and distributed to its investors. Following the sale of Bear Stearns to JPMorgan Chase, the Lehman bankruptcy filing, and the sale of Merrill Lynch to Bank of America, the remaining CSEs opted to become bank holding companies subject to Federal Reserve oversight. 
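The leverage comparison above is simple arithmetic: a ratio of total assets to equity capital. The Python sketch below uses hypothetical balance-sheet figures chosen only to fall within the ranges described in the text (26 to 34 to 1 for investment banks versus roughly 13 to 1 for commercial bank conglomerates); they are not actual firm data.

```python
# Leverage ratio = total assets / equity capital.
# All dollar figures below are hypothetical, chosen only to illustrate
# the ranges described in the report, not actual firm data.

def leverage_ratio(total_assets: float, equity_capital: float) -> float:
    """Dollars of assets held per dollar of loss-absorbing capital."""
    return total_assets / equity_capital

# A hypothetical investment bank: $780 billion in assets on $26 billion of equity.
investment_bank = leverage_ratio(780e9, 26e9)    # 30 to 1

# A hypothetical commercial bank conglomerate: $1.3 trillion in assets
# on $100 billion of equity.
commercial_bank = leverage_ratio(1.3e12, 100e9)  # 13 to 1

print(f"investment bank leverage:  {investment_bank:.0f} to 1")
print(f"commercial bank leverage:  {commercial_bank:.0f} to 1")
```

The intuition follows directly: at 30 to 1 leverage, a roughly 3.3 percent decline in asset values wipes out all of the firm's equity, while at 13 to 1 the same cushion absorbs more than twice that loss.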
SEC suspended its CSE program and the Chairman stated that “the last six months have made it abundantly clear that voluntary regulation does not work.” Recent events have also highlighted difficulties faced by the Federal Reserve and OTS in their roles in overseeing risk management at large financial and thrift holding companies, respectively. In June 2008 testimony, a Federal Reserve official acknowledged such supervisory lessons, noting that under the current U.S. regulatory structure consisting of multiple supervisory agencies, challenges can arise in assessing risk profiles of large, complex financial institutions operating across financial sectors, particularly given the growth in the use of sophisticated financial products that can generate risks across various legal entities. He also noted that recent events have highlighted the importance of enterprisewide risk management, noting that supervisors need to understand risks across a consolidated entity and assess the risk management tools being applied across the financial institution. Our own work had raised concerns over the adequacy of supervision of these large financial conglomerates. For example, one of the large entities that OTS oversaw was the insurance conglomerate AIG, which was subject to a government takeover necessitated by financial difficulties the firm experienced as the result of OTC derivatives activities related to mortgages. In a 2007 report, we expressed concerns over the appropriateness of having OTS oversee diverse global financial institutions given the size of the agency relative to the institutions for which it was responsible. We had also noted that although OTS oversaw a number of holding companies that are primarily in the insurance business, including AIG, it had only one specialist in this area as of March 2007. 
An OTS official noted, however, that functional regulation established by Gramm-Leach-Bliley avoided the need for regulatory agencies to develop expertise in all aspects of financial regulation. Second, the emergence of these large institutions with financial obligations with thousands of other entities has revealed that the existing U.S. regulatory system is not well-equipped for identifying and addressing risks across the financial system as a whole. In the current environment, with multiple regulators primarily responsible for just individual institutions or markets, no one regulator is tasked with assessing the risks posed across the entire financial system by a few institutions or by the collective activities of the industry. For example, multiple factors contributed to the subprime mortgage crisis, and many market participants played a role in these events, including mortgage brokers, real estate professionals, lenders, borrowers, securities underwriters, investors, rating agencies, and others. The collective activities of these entities, rather than any one particular institution, likely contributed to the overall market collapse. In particular, the securitization process created incentives throughout the chain of participants to emphasize loan volume over loan quality, which likely contributed to the problem as lenders sold loans on the secondary market, passing risks on to investors. Similarly, once financial institutions began to fail and the full extent of the financial crisis began to become clear, no formal mechanism existed to monitor market trends and potentially stop or help mitigate the fallout from these events. Ad hoc actions by the Department of the Treasury, the Federal Reserve, other members of the President’s Working Group on Financial Markets, and FDIC were aimed at helping to mitigate the fallout once events began to unfold. However, even given this ad hoc coordination, our past work has repeatedly identified limitations of the current U.S. 
federal regulatory structure to adequately coordinate and share information to monitor risks across markets or “functional” areas to identify potential systemic crises. Whether a greater focus on systemwide risks would have fully prevented the recent financial crises is unclear, but it is reasonable to conclude that such a mechanism would have had better prospects of identifying the breadth of the problem earlier and been better positioned to stem or soften the extent of the market fallout. A second dramatic development in U.S. financial markets in recent decades has been the increasingly critical roles played by less-regulated entities. In the past, consumers of financial products generally dealt with entities such as banks, broker-dealers, and insurance companies that were regulated by a federal or state regulator. However, in the last few decades, various entities—nonbank lenders, hedge funds, credit rating agencies, and special-purpose investment entities—that are not always subject to full regulation by such authorities have become important participants in our financial services markets. These unregulated or less-regulated entities can provide substantial benefits by supplying information or allowing financial institutions to better meet the demands of consumers, investors, or shareholders, but they pose challenges to regulators that do not, or cannot, fully oversee their activities. The role of nonbank mortgage lenders in the recent financial collapse provides an example of a gap in our financial regulatory system resulting from activities of institutions that were generally subject to little or no direct oversight by federal regulators. The significant participation by these nonbank lenders in the subprime mortgage market—which targeted products with riskier features to borrowers with limited or poor credit history—contributed to a dramatic loosening in underwriting standards leading up to the crisis. 
In recent years, nonbank lenders came to represent a large share of the consumer lending market, including for subprime mortgages. Specifically, as shown in figure 3, of the top 25 originators of subprime and other nonprime loans in 2006 (which accounted for more than 90 percent of the dollar volume of all such originations), all but 4 were nonbank lenders, accounting for 81 percent of originations by dollar volume. Although these lenders were subject to certain federal consumer protection and fair lending laws, they were generally not subject to the same routine monitoring and oversight by federal agencies that their bank counterparts were. From 2003 to 2006, subprime lending grew from about 9 percent to 24 percent of mortgage originations (excluding home equity loans), and Alt-A lending (nonprime loans considered less risky than subprime) grew from about 2 percent to almost 16 percent, according to data from the trade publication Inside Mortgage Finance. The resulting sharp rise in defaults and foreclosures that occurred as subprime and other homeowners were unable to make mortgage payments led to the collapse of the subprime mortgage market and set off a series of events that led to today’s financial turmoil. In previous reports, we noted concerns that existed about some of these less-regulated nonbank lenders and recommended that federal regulators actively monitor their activities. For example, in a 2004 report, we reported that some of these nonbank lenders had been the targets of notable federal and state enforcement actions involving abusive lending. As a result, we recommended to Congress that the Federal Reserve should be given a greater role in monitoring the activities of some nonbank mortgage lenders that are subsidiaries of bank holding companies that the Federal Reserve regulates. 
Only recently, in the wake of the subprime mortgage crisis, the Federal Reserve began a pilot program in conjunction with OTS and the Conference of State Bank Supervisors to monitor the activities of nonbank subsidiaries of holding companies, with the states conducting examinations of independent state-licensed lenders. Nevertheless, other nonbank lenders continue to operate under less rigorous federal oversight and remain an example of the risks posed by less-regulated institutions in our financial regulatory system. The increased role in recent years of investment banks securitizing and selling mortgage loans to investors further illustrates gaps in the regulatory system resulting from less-regulated institutions. Until recently, GSEs Fannie Mae and Freddie Mac were responsible for the vast majority of mortgage loan securitization. The securitization of loans that did not meet the GSEs’ congressionally imposed loan limits or regulator-approved quality standards—such as jumbo loans that exceeded maximum loan limits and subprime loans—was undertaken by investment firms that were subject to few or no standards to ensure safe and sound practices in connection with the purchase or securitization of loans. As the volume of subprime lending grew dramatically from around 2003 through 2006, investment firms took over a substantial share of the mortgage securitization market. As shown in figure 4, this channel of mortgage funding—known as the private label mortgage-backed securities market—grew rapidly and in 2005 surpassed the combined market share of the GSEs and Ginnie Mae—a government corporation that guarantees mortgage-backed securities. As the volume of subprime loans increased, a rapidly growing share was packaged into private label securities, reaching 75 percent in 2006, according to the Federal Reserve Bank of San Francisco. 
As shown in figure 4, this growth allowed private label securities to become approximately 55 percent of all mortgage-backed security issuance by 2005. This development serves as yet another example of how a less-regulated part of the market, private label securitization, played a significant role in fostering risky subprime mortgage lending, exposing a gap in the financial regulatory structure. The role of mortgage brokers in the sale of mortgage products in recent years has also been a key focus of attention of policymakers. In past work, we noted that the role of mortgage brokers grew in the years leading up to the current crisis. By one estimate, the number of brokerages rose from about 30,000 firms in 2000 to 53,000 firms in 2004. In 2005, brokers accounted for about 60 percent of originations in the subprime market (compared with about 25 percent in the prime market). In 2008, in the wake of the subprime mortgage crisis, Congress enacted the Secure and Fair Enforcement for Mortgage Licensing Act, as part of the Housing and Economic Recovery Act, to require enhanced licensing and registration of mortgage brokers. Hedge funds, which are professionally managed investment funds for institutional and wealthy investors, have become significant participants in many important financial markets. For example, hedge funds often assume risks that other more regulated institutions are unwilling or unable to assume, and therefore generally are recognized as benefiting markets by enhancing liquidity, promoting market efficiency, spurring financial innovation, and helping to reallocate financial risk. But hedge funds receive less-direct oversight than other major market participants such as mutual funds, another type of investment fund that manages pools of assets on behalf of investors. Hedge funds generally are structured and operated in a manner that enables them to qualify for exemptions from certain federal securities laws and regulations. 
Because their participants are presumed to be sophisticated and therefore not to require the full protection offered by the securities laws, hedge funds have not generally been subject to direct regulation. Therefore, hedge funds are not subject to regulatory capital requirements, are not restricted by regulation in their choice of investment strategies, and are not limited by regulation in their use of leverage. By soliciting participation in their funds from only certain large institutions and wealthy individuals and refraining from advertising to the general public, hedge funds are not required to meet the registration and disclosure requirements of the Securities Act of 1933 or the Securities Exchange Act of 1934, such as providing their investors with detailed prospectuses on the activities that their fund will undertake using investors’ proceeds. Hedge fund managers that trade on futures exchanges and that have U.S. investors are required to register with CFTC and are subject to periodic reporting, recordkeeping, and disclosure requirements of their futures activities, unless they notify the Commission that they qualify for an exemption from registration. The activities of many, but not all, hedge funds have recently become subject to greater oversight from SEC, although the rule requiring certain hedge fund advisers to register as investment advisers was recently vacated by a federal appeals court. In December 2004, SEC amended its rules to require certain hedge fund advisers that had been exempt from registering with SEC as investment advisers under its “private adviser” exemption to register as investment advisers. In August 2006, SEC estimated that over 2,500 hedge fund advisers were registered with the agency, although what percentage of all hedge fund advisers active in the United States that this represents is not known. 
Registered hedge fund advisers are subject to the same requirements as all other registered investment advisers, including providing current information to both SEC and investors about their business practices and disciplinary history, maintaining required books and records, and being subject to periodic SEC examinations. Some questions exist over the extent of SEC’s authority over these funds. In June 2006, the U.S. Court of Appeals for the District of Columbia overturned SEC’s amended rule, concluding that the rule was arbitrary because it departed, without reasonable justification, from SEC’s long-standing interpretation of the term “client” in the private adviser exemption as referring to the hedge fund itself, and not to the individual investors in the fund. However, according to SEC, most hedge fund advisers that previously registered have chosen to retain their registered status as of April 2007. Although many hedge fund advisers are now subject to some SEC oversight, some financial regulators and market participants remain concerned that hedge funds’ activities can create systemic risk by threatening the soundness of other regulated entities and asset markets. Hedge funds have important connections to the financial markets, including significant business relationships with the largest regulated commercial banks and broker-dealers. They act as trading counterparties with many of these institutions and constitute in many markets a significant portion of trading activity, from stocks to distressed debt and credit derivatives. The far-reaching consequences of potential hedge fund failures first became apparent in 1998. 
The hedge fund Long Term Capital Management (LTCM) experienced large losses related to the considerable positions—estimated to be as large as $100 billion—it had taken in various sovereign debt and other markets, and regulators coordinated with market participants to prevent a disorderly collapse that could have led to financial problems among LTCM’s lenders and counterparties and potentially to the rest of the financial system. No taxpayer funds were used as part of this effort; instead, the various large financial institutions with large exposures to this hedge fund agreed to provide additional funding of $3.6 billion until the fund could be dissolved in an orderly way. Since LTCM, other hedge funds have experienced near collapses or failures, including two funds owned by Bear Stearns, but these events have not had as significant an impact on the broader financial markets as LTCM’s near collapse did. Also, since LTCM’s near collapse, investors, creditors, and counterparties have increased their efforts to impose market discipline on hedge funds. According to regulators and market participants, creditors and counterparties have been conducting more extensive due diligence and monitoring risk exposures to their hedge fund clients. In addition, hedge fund advisers have improved disclosure and become more transparent about their operations, including their risk-management practices. However, we reported in 2008 that some regulators continue to be concerned that the counterparty credit risk created when regulated financial institutions transact with hedge funds can be a primary channel for potentially creating systemic risk. Similar to hedge funds, credit rating agencies have come to play a critical role in financial markets, but until recently they received little regulatory oversight. While credit rating agencies do not act as direct participants in financial markets, their ratings are widely used by investors to distinguish the creditworthiness of bonds and other securities. 
Additionally, credit ratings are used in local, federal, and international laws and regulations as a benchmark for permissible investments by banks, pension funds, and other institutional investors. Leading up to the recent crisis, some investors had come to rely heavily on ratings in lieu of conducting independent assessments on the quality of assets. This overreliance on credit ratings of subprime mortgage-backed securities and other structured credit products contributed to the recent turmoil in financial markets. As these securities started to incur losses, it became clear that their ratings did not adequately reflect the risk that these products ultimately posed. According to the trade publication Inside B&C Lending, the three major credit rating agencies have each downgraded more than half of the subprime mortgage-backed securities they originally rated between 2005 and 2007. However, despite the critical nature of these rating agencies in our financial system, the existing regulatory system failed to adequately foresee and manage their role in recent events. Until recently, credit rating agencies received little direct oversight and thus faced no explicit requirements to provide information to investors about how to understand and appropriately use ratings, or to provide data on the accuracy of their ratings over time that would allow investors to assess their quality. In addition, concerns have been raised over whether the way in which credit rating agencies are compensated by the issuers of the securities that they rate affects the quality of the ratings awarded. In a July 2008 report, SEC noted multiple weaknesses in the management of these conflicts of interest, including instances where analysts expressed concerns over fees and other business interests when issuing ratings and reviewing ratings criteria. However, until 2006, no legislation had established statutory regulatory authority or disclosure requirements over credit rating agencies. 
Then, to improve the quality of ratings in response to events such as the failures of Enron and WorldCom—which highlighted the limitations of credit ratings in identifying companies’ financial strength—Congress passed the Credit Rating Agency Reform Act of 2006, which established limited SEC oversight by requiring the registration of credit rating agencies and imposing certain recordkeeping and reporting requirements. Since the financial crisis began, regulators have taken steps to address the important role of rating agencies in the financial system. In December 2008, in response to the subprime mortgage crisis and resulting credit market strains, SEC adopted final rule amendments and proposed new rule amendments that would impose additional requirements on nationally recognized statistical rating organizations in order to address concerns raised about the policies and procedures for, transparency of, and potential conflicts of interest relating to ratings. Determining the most appropriate government role in overseeing credit rating activities is difficult. For example, SEC has expressed concerns that too much government intervention—such as regulatory requirements of credit ratings for certain investments or examining the underlying methodology of ratings—would unintentionally provide an unofficial “seal of approval” on the ratings and therefore be counterproductive to reducing overreliance on ratings. Whatever the solution, it is clear that the current regulatory system did not properly recognize and address the risks associated with the important role these entities played. The use by financial institutions of special-purpose entities provides another example of how less-regulated aspects of financial markets came to play increasingly important roles in recent years, creating challenges for regulators in overseeing risks at their regulated institutions. 
Many financial institutions created and transferred assets to these entities as part of securitizations for mortgages or to hold other assets and produce fee income for the institution that created them—known as the sponsor. For example, after new capital requirements were adopted in the late 1980s, some large banks began creating these entities to hold assets against which they would have been required to hold more capital if the assets were held within their institutions. As a result, these entities are also known as off-balance sheet entities because they generally are structured in such a way that their assets and liabilities are not required to be consolidated and reported as part of the overall balance sheet of the sponsoring financial institution that created them. The amount of assets accumulated in these entities resulted in them becoming significant market participants in the last few years. For example, one large commercial bank reported that its off-balance sheet entities totaled more than $1 trillion in assets at the end of 2007. 

Traditionally, products receiving the highest credit ratings, such as AAA, were a small set of corporate and sovereign bonds that were deemed to be the safest and most salable debt investments. However, credit rating agencies assigned similarly high credit ratings to many of the newer mortgage-related products even though these products did not have the same characteristics as previously highly rated securities. As a result of these ratings, institutions were able to successfully market many of these products, including to other financial firms and institutional investors in the United States and around the world. Ratings were seen to provide a common measure of credit risk across all debt products, allowing structured credit products that lacked active secondary markets to be valued using similarly rated products with available prices. Starting in mid-2007, increasing defaults on residential mortgages, particularly those for subprime borrowers, led to a widespread, rapid, and severe series of downgrades by rating agencies on subprime-related structured credit products. 
These downgrades undermined confidence in the quality of ratings on these and related products. Along with increasing defaults, the uncertainty over credit ratings led to a rapid repricing of assets across the financial system and contributed to large writedowns in the market value of assets at banks and other financial institutions. This contributed to the unwillingness of many market participants to transact with each other due to concerns over the actual value of assets and the financial condition of other financial institutions. 

These entities typically purchased assets such as loans, securities, and receivables from businesses. To obtain the funds to purchase these assets, these special-purpose vehicles often borrowed using shorter-term instruments, such as commercial paper and medium-term notes. The difference between the interest paid to the commercial paper or note holders and the income earned on the entity’s assets produced fee and other income for the sponsoring institution. However, these structures carried the risk that the entity would find it difficult or costly to renew its debt financing under less-favorable market conditions. Although these entities were structured as off-balance sheet entities, when the turmoil in the markets began in 2007, many financial institutions that had created them had to take back the loans and securities held in certain types of these entities. (See fig. 5.) In general, banks stepped in to finance the assets held by these entities when the entities were unable to refinance their expiring debt due to market concerns over the quality of the assets. In some cases, off-balance sheet entities relied on emergency financing commitments that many sponsoring banks had extended to these entities. In other cases, financial institutions supported troubled off-balance sheet entities to protect their reputations with clients even when no explicit requirement to do so existed. This, in turn, contributed to the reluctance of banks to lend as they had to fund additional troubled assets on their balance sheets. 
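The funding spread described above can be illustrated with a simple sketch. The asset size, asset yield, and commercial paper rates below are hypothetical figures chosen for illustration, not data from this report:

```python
# Hypothetical illustration of an off-balance sheet entity's funding spread.
# All rates and amounts are assumed for illustration only.

assets = 10_000_000_000        # face value of loans/securities held by the entity
asset_yield = 0.055            # assumed annual yield on the entity's assets
cp_rate = 0.045                # assumed rate paid on short-term commercial paper

annual_income = assets * asset_yield        # income earned on the assets
annual_funding_cost = assets * cp_rate      # interest paid to paper holders
sponsor_income = annual_income - annual_funding_cost

print(f"Annual spread income to sponsor: ${sponsor_income:,.0f}")

# The risk: if the paper cannot be rolled over at maturity, or only at a
# higher rate, the spread shrinks or the sponsor must fund the assets itself.
stressed_cp_rate = 0.058       # assumed funding rate under market stress
stressed_income = assets * (asset_yield - stressed_cp_rate)
print(f"Spread under stressed funding: ${stressed_income:,.0f}")
```

The second calculation shows how a structure that earns a steady spread in calm markets can turn into a funding loss when short-term debt must be renewed at less-favorable rates, which is when sponsors stepped in.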
Thus, although the use of these entities seemingly had removed the risk of these assets from these institutions, their inability to obtain financing resulted in the ownership, risks, and losses of these entities’ assets coming back into many of the sponsoring financial institutions. According to a 2008 IMF study, financial institutions’ use of off-balance sheet entities made it difficult for regulators, as well as investors, to fully understand the associated risks of such activities. In response to these developments, regulators and others have begun to reassess the appropriateness of the regulatory and accounting treatment for these entities. In January 2008, SEC asked the Financial Accounting Standards Board (FASB), which establishes U.S. financial accounting and reporting standards, to consider further improvements to the accounting and disclosure for off-balance sheet transactions involving securitization. FASB and the International Accounting Standards Board both have initiated projects to improve the criteria for determining when financial assets and related liabilities that institutions transfer to special-purpose entities should be included on the institutions’ own balance sheets—known as consolidation—and to enhance related disclosures. As part of this effort, FASB issued proposed standards that would eliminate a widely used accounting exception for off-balance sheet entities, introduce a new accounting model for determining whether special-purpose entities should be consolidated that is less reliant on mathematical calculations and more closely aligned with international standards, and require additional disclosures about institutions’ involvement with certain special-purpose entities. On December 18, 2008, the International Accounting Standards Board also issued a proposed standard on consolidation of special-purpose entities and related risk disclosures. 
In addition, in April 2008, the Basel Committee on Banking Supervision announced new measures to capture off-balance sheet exposures more effectively. Nevertheless, this serves as another example of the failure of the existing regulatory system to recognize the problems with less-regulated entities and take steps to address them before they escalated. Existing accounting and disclosure standards had not required banks to extensively disclose their holdings in off-balance sheet entities and allowed for very low capital requirements. As a March 2008 study by the President’s Working Group on Financial Markets noted, before the recent market turmoil, supervisory authorities did not insist on appropriate disclosures of firms’ potential exposure to off-balance sheet entities. Another development that has revealed limitations in the current regulatory structure has been the proliferation of more complex financial products. Although posing challenges, these new products also have provided certain benefits to financial markets and consumers. For example, the creation of securitized products such as mortgage-backed securities increased the liquidity of credit markets by providing additional funds to lenders and a wider range of investment returns to investors with excess funds. Other useful product innovations included OTC derivatives, such as currency options, which provide a purchaser the right to buy a specified quantity of a currency at some future date, and interest rate swaps, which allow one party to exchange a stream of fixed interest rate payments for a stream of variable interest rate payments. These products help market participants hedge their risks or stabilize their cash flows. Alternative mortgage products, such as interest-only loans, originally were used by a limited subset of the population, mainly wealthy borrowers, to obtain more convenient financing for home purchases. 
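The fixed-for-floating interest rate swap described above can be sketched by computing the net payment between the two parties each period. The notional amount, fixed rate, and floating-rate path below are all assumed values for illustration:

```python
# Hypothetical fixed-for-floating interest rate swap cash flows.
# Notional, fixed rate, and the floating-rate path are assumptions.

notional = 100_000_000
fixed_rate = 0.04
floating_rates = [0.030, 0.035, 0.045, 0.050]  # assumed rates observed each period

# Net payments from the perspective of the fixed-rate payer:
# it pays the fixed rate and receives the floating rate each period.
net_to_fixed_payer = [notional * (fl - fixed_rate) for fl in floating_rates]

for period, cash in enumerate(net_to_fixed_payer, start=1):
    print(f"Period {period}: net cash flow to fixed payer = ${cash:,.0f}")
```

When the floating rate rises above the fixed rate, the fixed-rate payer receives the difference; when it falls below, the fixed-rate payer pays it. This is how the swap stabilizes one party's cash flows while exposing the other to rate movements.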
Despite these advantages, the complexity and expanded use of new products have made it difficult for the current regulatory system to oversee risk management at institutions and adequately protect individual consumers and investors. Collateralized debt obligations (CDO) are one of the new products that proliferated and created challenges for financial institutions and regulators. In a basic CDO, a group of loans or debt securities is pooled and securities are then issued in different tranches that vary in risk and return depending on how the underlying cash flows produced by the pooled assets are allocated. If some of the underlying assets defaulted, the more junior tranches—and thus riskier ones—would absorb these losses first before the more senior, less-risky tranches. Purchasers of these CDO securities included insurance companies, mutual funds, commercial and investment banks, and pension funds. Many CDOs in recent years largely consisted of mortgage-backed securities, including subprime mortgage-backed securities. Although CDOs have existed since the 1980s, recent changes in the underlying asset mix of these products led to increased risk that was poorly understood by the financial institutions involved in these investments. CDOs had consisted of simple securities like corporate bonds or loans, but more recently have included subprime mortgage-backed securities, and in some cases even lower-rated classes of other equally complex CDOs. Some of these CDOs included investments in 100 or more asset-backed securities, each of which had its own large pool of loans and specific payment structures. A large share of the total value of the securities issued was rated AA or AAA—designating them as very safe investments and unlikely to default—by the credit rating agencies. In part because of their seemingly high returns in light of their rated risk, demand for these new CDOs grew rapidly and on a large scale. 
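The loss-allocation logic described above, in which junior tranches absorb defaults before senior ones, can be sketched as a simple waterfall. The tranche names, sizes, and loss amounts below are hypothetical:

```python
# Hypothetical CDO loss waterfall: losses on the asset pool are allocated
# to the most junior tranche first, then upward. All sizes are assumed.

def allocate_losses(tranches, pool_loss):
    """Apply pool_loss to tranches ordered from most junior to most senior.
    Returns the loss borne by each tranche."""
    losses = {}
    remaining = pool_loss
    for name, size in tranches:  # ordered junior -> senior
        hit = min(size, remaining)
        losses[name] = hit
        remaining -= hit
    return losses

# $100M pool: equity (most junior), mezzanine, senior (often the AAA-rated slice)
tranches = [("equity", 5_000_000), ("mezzanine", 15_000_000), ("senior", 80_000_000)]

print(allocate_losses(tranches, pool_loss=3_000_000))   # equity absorbs everything
print(allocate_losses(tranches, pool_loss=12_000_000))  # equity wiped out, mezzanine hit
```

The sketch also shows why the senior tranche could be rated highly: pool losses must exhaust every junior tranche before it takes any loss. That protection, however, assumes the losses across the pooled assets stay modest, an assumption that failed when mortgage defaults rose across the whole housing market at once.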
Between 2004 and 2007, nearly all adjustable-rate subprime mortgages were packaged into mortgage-backed securities, a large portion of which were structured into CDOs. As housing prices in the United States softened in the last 2 years, default and foreclosure rates on the mortgages underlying many CDOs rose and the credit rating agencies downgraded many CDO ratings, causing investors to become unwilling to purchase these products in the same quantities or at the prices previously paid. Many financial institutions, including large commercial and investment banks, struggled to determine the size of their exposure to subprime credit risk. Many of these institutions appeared to have underestimated the amount of risk and potential losses that they could face from creating and investing in these products. Reductions in the value of subprime-backed CDOs have contributed to reported losses by financial institutions totaling more than $750 billion globally, as of September 2008, according to the International Monetary Fund, which estimates that total losses on global holdings of U.S. loans and securities could reach $1.4 trillion. Several factors could explain why institutions—and regulators—did not effectively monitor and limit the risk that CDOs represented. Products like CDOs have risk characteristics that differ from traditional investments. First, the variation and complexity of the CDO structures and the underlying assets they contain often make estimating potential losses and determining accurate values for these products more difficult than for traditional securities. Second, although aggregating multiple assets into these structures can diversify and thus reduce the overall risk of the securities issued from them, their exposure to the overall housing market downturn made investors reluctant to purchase even the safest tranches, which produced large valuation losses for the holders of even the highest-rated CDO securities. 
Finally, Federal Reserve staff noted that an additional reason these securities performed worse than expected was that rating agencies and investors did not believe that housing prices could have fallen as significantly as they have. The lack of historical performance data for these new instruments also presented challenges in estimating the potential value of these securities. For example, the Senior Supervisors Group—a body comprising senior financial supervisors from France, Germany, Switzerland, the United Kingdom, and the United States—reported that some financial institutions substituted price and other data associated with traditional corporate debt in their loss estimation models for similarly rated CDO debt, which did not have sufficient historical data. As a report by a group of senior representatives of financial regulators and institutions has noted, the absence of historical information on the performance of CDOs created uncertainty around the standard risk-management tools used by financial institutions. Further, structured products such as CDOs may lack an active and liquid market, as in the recent period of market stress, forcing participants to look for other sources of valuation information when market prices are not readily available. For instance, market participants often turned to internal models and other methods to value these products, which raised concerns about the consistency and accuracy of the resulting valuation information. The rapid growth in OTC derivatives—or derivatives contracts that are traded outside of regulated exchanges—is another example of how the emergence of large markets for increasingly complex products has challenged our financial regulatory system. 
OTC derivatives, which began trading in the 1980s, have developed into markets with an estimated notional value—which is the amount underlying a financial derivatives contract—of about $596 trillion, as of December 2007, according to the Bank for International Settlements. OTC derivatives transactions are generally not subject to regulation by SEC, CFTC, or any other U.S. financial regulator and in particular are not subject to similar disclosure and other requirements that are in place for most securities and exchange-traded futures products. Institutions that conduct derivatives transactions may be subject to oversight of their lines of business by their regulators. For example, commercial banks that deal in OTC derivatives are subject to full examinations by their respective regulators. On the other hand, investment banks generally conducted their OTC derivatives activities in affiliates or subsidiaries that traditionally—since most OTC derivatives are not securities—were not subject to direct oversight by SEC, although SEC did review how the largest investment banks that were subject to its CSE program were managing the risk of such activities. Although OTC derivatives and their markets are not directly regulated, the risk exposures that these products created among regulated financial institutions can sometimes be large enough to raise systemic risk concerns among regulators. For example, Bear Stearns, the investment bank that experienced financial difficulties as the result of its mortgage-backed securities activities, was also one of the largest OTC derivatives dealers. According to regulators, one of the primary reasons the Federal Reserve, which otherwise had no regulatory authority over this securities firm, facilitated the sale of Bear Stearns rather than let it go bankrupt was to avoid a potentially large systemic problem because of the firm’s large OTC derivatives obligations. 
More than a decade ago, we reported that the large financial interconnections between derivatives dealers posed risk to the financial system and recommended that Congress and financial regulators take action to ensure that the largest firms participating in the OTC derivatives markets be subject to similar regulatory oversight and requirements. The market for one type of OTC derivative—credit default swaps—had grown so large that regulators became concerned about its potential to create systemic risks to regulated financial institutions. Credit default swaps are contracts that act as a type of insurance, or a way to hedge risks, against default or another type of credit event associated with a security such as a corporate bond. One party in the contract—the seller of protection—agrees, in return for a periodic fee, to compensate the other party—the protection buyer—if the bond or other underlying entity defaults or another specified credit event occurs. In recent years, the size of the market for credit default swaps (in terms of the notional amount of outstanding contracts) has increased almost tenfold from just over $6 trillion in 2004 to almost $58 trillion at the end of 2007, according to the Bank for International Settlements. As this market has grown, regulators increasingly have become concerned about the adequacy of the infrastructure in place for clearing and settling these contracts, especially the ability to quickly resolve contracts in the event of a large market participant failure. For example, in September 2008, concerns over the effects that a potential bankruptcy of AIG—which was a large seller of credit default swaps—would have on this firm’s swap counterparties contributed to a decision by the Federal Reserve to lend the firm up to $85 billion. 
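The basic cash flows of a credit default swap described above can be sketched from the protection buyer's perspective. The notional amount, premium spread, and recovery rate below are assumed values, and the sketch simplifies by ignoring accrued premium at the time of default:

```python
# Hypothetical credit default swap cash flows. The protection buyer pays a
# periodic premium; if a credit event occurs, the seller pays the loss on
# the reference obligation. All figures are assumptions for illustration.

def cds_cash_flows(notional, annual_spread, years, default_year=None, recovery=0.4):
    """Return yearly net cash flows from the protection buyer's perspective.
    Simplified: accrued premium in the default year is ignored."""
    flows = []
    for year in range(1, years + 1):
        if default_year is not None and year == default_year:
            # Seller compensates the buyer for the loss (1 - recovery rate),
            # and premium payments stop.
            flows.append(notional * (1 - recovery))
            break
        flows.append(-notional * annual_spread)  # premium paid by the buyer
    return flows

# No credit event: the buyer simply pays the premium each year.
print(cds_cash_flows(10_000_000, 0.02, years=5))
# Default in year 3: two premiums paid, then a payout on the credit event.
print(cds_cash_flows(10_000_000, 0.02, years=5, default_year=3))
```

The asymmetry visible here is what worried regulators: a large seller of protection collects small, steady premiums but owes very large payouts if credit events cluster, which is the exposure that made a disorderly failure of a major seller such a systemic concern.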
The Federal Reserve expressed concern at the time that a disorderly failure of AIG could add to already significant levels of financial market fragility and lead to substantially higher borrowing costs, reduced household wealth, and materially weaker economic performance. As with other OTC derivatives, credit default swaps are not regulated as products, but many of the large U.S. and internationally regulated financial institutions act as dealers. Despite the credit default market’s rapid growth, as recently as 2005 the processing of transactions was still paper-based and decentralized. Regulators have put forth efforts over the years to strengthen clearing and settlement mechanisms. For example, in September 2005, the Federal Reserve Bank of New York began working with dealers and market participants to strengthen arrangements for clearing and settling these swap transactions. Regulators began focusing on reducing a large backlog of unconfirmed trades, which can inhibit market participants’ ability to manage their risks if errors are not found quickly or if uncertainty exists about how other institutions would be affected by the failure of a firm with which they hold credit default swap contracts. Regulators continue to monitor dealers’ progress on these efforts to reduce operational risk arising from these products, and recently have begun holding discussions with the largest credit derivatives dealers and other entities, including certain exchanges, regarding the need to establish a centralized clearing facility, which could reduce the risk of any one dealer’s failure to the overall system. In November 2008, the President’s Working Group on Financial Markets announced policy objectives to guide efforts to address challenges associated with OTC derivatives, including recommendations to enhance the market infrastructure for credit default swaps. However, as of December 2008, no such entity had begun operations. 
The regulations requiring that investors receive adequate information about the risks of financial assets being marketed to them are also being challenged by the development of some of these new and complex products. For some of the new products that have been created, market participants sometimes had difficulty obtaining clear and accurate information on the value of these assets, their risks, and other key information. In some cases, investors did not perform needed due diligence to fully understand the risks associated with their investment. In other cases, investors have claimed they were misled by broker-dealers about the advantages and disadvantages of products. For example, investors for municipal governments in Australia have accused Lehman Brothers of misleading them regarding the risks of CDOs. As another example, the treasurer of Orange County who oversaw investments leading to the county’s 1994 bankruptcy claimed to have relied on the advice of a large securities firm for his decision to pursue leveraged investments in complex structured products. Finally, a number of financial institutions—including Bank of America, Wachovia, Merrill Lynch, and UBS—have recently settled SEC allegations that these institutions misled investors in selling auction-rate securities, which are bonds for which the interest rates are regularly reset through auctions. In one case, Bank of America, in October 2008, reached a settlement in principle in response to SEC charges that it made misrepresentations to thousands of businesses, charities, and institutional investors when it told them that the products were safe and highly liquid cash and money market alternative investments. Similarly, the introduction and expansion of increasingly complicated retail products to new and broader consumer populations has also raised challenges for regulators in ensuring that consumers are adequately protected. 
Consumers face growing difficulty in understanding the relative advantages and disadvantages of products such as mortgages and credit cards with new and increasingly complicated features, in part because of regulatory agencies’ limited success in improving consumer disclosures and financial literacy. For example, in the last few years many borrowers likely did not understand the risks associated with taking out their loans, especially in the event that housing prices would not continue to increase at the rate at which they had been in recent years. In particular, a significant majority of subprime borrowers from 2003 to 2006 took out adjustable-rate mortgages whose interest rates were fixed for the first 2 or 3 years but then adjusted to often much higher interest rates and correspondingly higher mortgage payments. In addition, many borrowers took out loans with interest-only features that resulted in significant increases in mortgage payments later in the loan. The combination of reduced underwriting standards and a slowdown in house price appreciation led many borrowers to default on their mortgages. Alternative mortgage products such as interest-only or payment option loans, which allow borrowers to defer repayment of principal and possibly part of the interest for the first few years of the loan, grew in popularity and expanded greatly in recent years. From 2003 through 2005, originations of these types of mortgage products grew threefold, from less than 10 percent of residential mortgage originations to about 30 percent. For many years, lenders had primarily marketed these products to wealthy and financially sophisticated borrowers as financial management tools. However, lenders increasingly marketed alternative mortgage products as affordability products that enabled a wider spectrum of borrowers to purchase homes they might not have been able to afford using a conventional fixed-rate mortgage. 
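The payment shock described above, a fixed rate for the first 2 years followed by a reset to a much higher rate, can be illustrated with standard amortization arithmetic. The loan amount and both interest rates below are hypothetical, not figures from this report:

```python
# Hypothetical 2/28 adjustable-rate mortgage payment reset.
# The loan amount and rates are assumptions for illustration only.

def monthly_payment(principal, annual_rate, months):
    """Standard fixed-payment amortization formula."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def balance_after(principal, annual_rate, months_paid, total_months):
    """Remaining balance after paying the level payment for months_paid."""
    r = annual_rate / 12
    pmt = monthly_payment(principal, annual_rate, total_months)
    bal = principal
    for _ in range(months_paid):
        bal = bal * (1 + r) - pmt
    return bal

loan = 200_000
teaser = monthly_payment(loan, 0.07, 360)      # assumed initial 2-year rate of 7%

# After 2 years the rate resets; the remaining balance then amortizes over
# the remaining 28 years at the higher rate (assumed 10% here).
remaining = balance_after(loan, 0.07, 24, 360)
reset = monthly_payment(remaining, 0.10, 336)

print(f"Teaser payment:       ${teaser:,.2f}")
print(f"Payment after reset:  ${reset:,.2f}")
```

Under these assumed numbers the monthly payment jumps by roughly 30 percent at the reset, which is the kind of increase many subprime borrowers could absorb only by refinancing, an option that disappeared once house prices stopped rising.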
Lenders also increased the variety of such products offered after interest rates rose and adjustable rate mortgages became less attractive to borrowers. In past work, we found that the disclosures for alternative mortgage products that we reviewed did not always fully or effectively explain the risks associated with these products and lacked information on some important loan features. Some evidence suggests more generally that existing mortgage disclosures were inadequate, a problem that is likely to grow with the increased complexity of products. A 2007 Federal Trade Commission report found that both prime and subprime borrowers failed to understand key loan terms when viewing current disclosures. In addition, some market observers have been critical of regulators’ oversight of these products and whether products with such complex features were appropriate for some of the borrowers to whom they were marketed. For example, some were critical of the Federal Reserve for not acting more quickly to use its authority under the 1994 Home Ownership and Equity Protection Act to prohibit unfair or deceptive acts or practices in the mortgage market. Although the Federal Reserve took steps in 2001 to ban some practices, such as engaging in a pattern or practice of refinancing certain high-cost loans when it is not in the borrower’s interest, it did not act again until 2008, when it banned additional products and practices, such as certain loans with limited documentation. In a 2007 testimony, a Federal Reserve official noted that writing such rules is difficult, particularly since determinations of unfairness or deception depend heavily on the facts of an individual case. Efforts by regulators to respond to the increased risks associated with new mortgage products also have sometimes been slowed in part because of the need for five federal regulators to coordinate their response. 
In late 2005, regulators began crafting regulatory guidance to strengthen lending practices and improve disclosures for loans that start with relatively low payments but leave borrowers vulnerable to much higher ones later. The regulators completed their first set of such standards in September 2006, with respect to the disclosure of risks associated with nontraditional mortgage products, and a second set, applicable to subprime mortgage loans, in June 2007. Some industry observers and consumer advocacy groups have criticized the length of time it took for regulators to issue these changes, noting that the second set of guidance was released well after many subprime lenders had already gone out of business. As variations in the types of credit card products and terms have proliferated, consumers also have faced difficulty understanding the rates and terms of their credit card accounts. Credit card rate and fee disclosures have not always been effective at clearly conveying associated charges and fees, creating challenges to informed financial decision making. Although credit card issuers are required to provide cardholders with information aimed at facilitating informed use of credit, these disclosures have serious weaknesses that likely reduce consumers’ ability to understand the costs of using credit cards. Because the pricing of credit cards is not generally subject to federal regulation, these disclosures are the primary federal consumer protection mechanism against inaccurate and unfair credit card practices. However, we reported in 2006 that the disclosures in materials provided by four of the largest credit card issuers were too complicated for many consumers to understand. Following our report, Federal Reserve staff began using consumer testing to involve consumers to a greater extent in the preparation of potential new and revised disclosures, and in May 2007 issued proposed changes to credit card disclosure requirements. 
Nonetheless, the Federal Reserve recognizes the challenge of presenting the information that consumers may need to understand the costs of their cards in a clear way, given the increasingly complicated terms of credit card products. In December 2008, the Federal Reserve, OTS, and NCUA finalized rules to ban various unfair credit card practices, such as allocating payments in a way that unfairly maximizes interest charges. The expansion of new and more complex products also raises challenges for regulators in addressing financial literacy. We have also noted in past work that even a relatively clear and transparent system of disclosures may be of limited use to borrowers who lack sophistication about financial matters. In response to increasing evidence that many Americans are lacking in financial literacy, the federal government has taken steps to expand financial education efforts. However, attempts by the Financial Literacy and Education Commission to coordinate federal financial literacy efforts have sometimes proven difficult due, in part, to the need to reach consensus among its 20 participating federal agencies, which have different missions and perspectives. Moreover, the commission’s staff and funding resources are relatively small, and it has no legal authority to require agencies to redirect their resources or take other actions. As new and increasingly complex financial products have become more common, FASB and SEC have also faced challenges in trying to ensure that accounting and financial reporting requirements appropriately meet the needs of investors and other financial market participants. The development and widespread use of increasingly complex financial products has heightened the importance of having effective accounting and financial reporting requirements that provide interested parties with information that can help them identify and assess risk. 
As the pace of financial innovation has increased over the last 30 years, accounting and financial reporting requirements have had to keep pace: 72 percent of the current 163 standards have been issued since 1980, some of them revisions and amendments to recently established standards. This pattern illustrates the challenge of establishing accounting and financial reporting requirements that respond to needs created by financial innovation. As a result of the growth in complex financial instruments and a desire to improve the usefulness of financial information about them, U.S. standard setters and regulators currently are dealing with accounting and auditing challenges associated with recently developed standards related to valuing financial instruments and special-purpose entities. Over the last year, owners and issuers of financial instruments have expressed concerns about implementing the new fair value accounting standard, which requires that financial assets and liabilities be recorded at fair or market value. SEC and FASB have recently issued clarifications on measuring fair value when there is no active market for a financial instrument. In addition, market participants raised concerns about the availability of useful accounting and financial reporting information to assess the risks posed by special-purpose entities. Under current accounting rules, publicly traded companies that create qualifying special-purpose entities are allowed to move qualifying assets and liabilities associated with certain complex financial instruments off the issuing company’s balance sheets, which results in virtually no accounting and financial reporting information being available about the entities’ activities. 
Due to the accounting and financial reporting treatment for these special-purpose entities, as the subprime crisis worsened, banks initially refused to negotiate loans with homeowners because banks were concerned that the accounting and financial reporting requirements would have the banks put the assets and liabilities back onto their balance sheets. In response to questions regarding modification of loans in special-purpose entities, the SEC’s Chief Accountant issued a letter that concluded his office would not object to loans being modified pursuant to specific screening criteria. In response to these concerns, FASB expedited its standards-setting process in order to reduce the amount of time before the issuance of a new accounting standard that would effectively eliminate qualified special-purpose entities. Standard setters and regulators also face new challenges in dealing with global convergence of accounting and auditing standards. The rapid integration of the world’s capital markets has made establishing a single set of effective accounting and financial reporting standards increasingly relevant. FASB and SEC have acknowledged the need to address the convergence of U.S. and international accounting standards, and SEC has proposed having U.S. public companies use International Financial Reporting Standards by 2014. As the globalization of accounting standards moves forward, U.S. standard setters and regulators need to anticipate and manage the challenges posed by their development and implementation, such as how to apply certain standards within the unique legal and regulatory framework of the United States as well as in certain unique industry niches. Ensuring that auditing standards applicable to U.S. public companies continue to provide the financial markets with the important and independent assurances associated with existing U.S. auditing standards will also prove challenging to the Public Company Accounting Oversight Board. 
Just as global accounting and auditing standards are converging, financial markets around the world are becoming increasingly interlinked and global in nature, requiring U.S. regulators to work with each other and other countries to effectively adapt. To effectively oversee large financial services firms that have operations in many countries, regulators from various countries must coordinate regulation and supervision of financial services across national borders and must communicate regularly. Although financial regulators have effectively coordinated in a number of ways to accommodate some changes, the current fragmented regulatory structure has complicated some of these efforts. For example, the current U.S. regulatory system complicates the ability of financial regulators to convey a single U.S. position in international discussions, such as those related to the Basel Accords process for developing international capital standards. Each federal regulator involved in these efforts oversees a different set of institutions and represents an important regulatory perspective, which has made reaching consensus on some issues more difficult than others. Although U.S. regulators generally agree on the broad underlying principles at the core of Basel II, including increased risk sensitivity of capital requirements and capital neutrality, in a 2004 report we noted that although regulators communicated and coordinated, they sometimes had difficulty agreeing on certain aspects of the process. As we reported, in November 2003, members of the House Financial Services Committee warned in a letter to the bank regulatory agencies that the discord surrounding Basel II had weakened the negotiating position of the United States and resulted in an agreement that was less than favorable to U.S. financial institutions. International officials have also indicated that the lack of a single point of contact on, for example, insurance issues has complicated regulatory decision making. 
However, regulatory officials told us that the final outcome of the Basel II negotiations was better than it would have been with a single U.S. representative because of the agencies’ varying perspectives and expertise. In particular, one regulator noted that, in light of the magnitude of recent losses at banks and the failure of banks and rating agencies to predict such losses, the additional safeguards built into how U.S. regulators adopted Basel II are an example of how more than one regulatory perspective can improve policymaking. The U.S. regulatory system is a fragmented and complex system of federal and state regulators—put into place over the past 150 years—that has not kept pace with the major developments that have occurred in financial markets and products in recent decades. In 2008, the United States finds itself in the midst of one of the worst financial crises ever, with instability threatening global financial markets and the broader economy. While much of the attention of policymakers understandably has been focused on taking short-term steps to address the immediate nature of the crisis, attention has also turned to the need to consider significant reforms to the financial regulatory system to keep pace with existing and anticipated challenges in financial regulation. While the current U.S. system has many features that could be preserved, if the significant limitations of the system are not addressed, it will likely fail to prevent future crises that could be as harmful as or worse than those that have occurred in the past. Making changes that better position regulators to oversee firms and products that pose risks to the financial system and consumers and to adapt to new products and participants as these arise would seem essential to ensuring that our financial services sector continues to serve our nation’s needs as effectively as possible. 
We have conducted extensive work in recent decades reviewing the impacts of market developments and overseeing the effectiveness of financial regulators’ activities. In particular, we have helped Congress address financial crises dating back to the savings and loan and LTCM crises, and more recently over the past few years have issued several reports citing the need to modernize the U.S. financial regulatory structure. In this report, consistent with our past work, we are not proposing the form and structure of what a new financial regulatory system should look like. Instead, we are providing a framework, consisting of the following nine elements, that Congress and others can use to evaluate or craft proposals for financial regulatory reform. Applying the elements of this framework to proposals should better reveal the relative strengths and weaknesses of each. Similarly, the framework we present could be used to craft a proposal or to identify aspects to be added to existing proposals to make them more effective and appropriate for addressing the limitations of the current system. The nine elements could be addressed in a variety of ways, but each is critically important in establishing the most effective and efficient financial regulatory system possible. 1. Clearly defined regulatory goals. A regulatory system should have goals that are clearly articulated and relevant, so that regulators can effectively conduct activities to implement their missions. A critical first step to modernizing the regulatory system and enhancing its ability to meet the challenges of a dynamic financial services industry is to clearly define regulatory goals and objectives. In the background of this report, we identify four broad goals of financial regulation that regulators have generally sought to achieve. 
These include ensuring adequate consumer protections, ensuring the integrity and fairness of markets, monitoring the safety and soundness of institutions, and acting to ensure the stability of the overall financial system. However, these goals are not always explicitly set in the federal statutes and regulations that govern these regulators. Having specific goals clearly articulated in legislation could serve to better focus regulators on achieving their missions with greater certainty and purpose, and provide continuity over time. Given some of the key changes in financial markets discussed earlier in this report—particularly the increased interconnectedness of institutions, the increased complexity of products, and the increasingly global nature of financial markets—Congress should consider the benefits that may result from re-examining the goals of financial regulation and making explicit a set of comprehensive and cohesive goals that reflect today’s environment. For example, it may be beneficial to have a clearer focus on ensuring that products are not sold with unsuitable, unfair, deceptive, or abusive features; that systemic risks and the stability of the overall financial system are specifically addressed; or that U.S. firms are competitive in a global environment. This may be especially important given the history of financial regulation and the ad hoc approach through which the existing goals have been established, as discussed earlier. We found varying views about the goals of regulation and how they should be prioritized. For example, representatives of some regulatory agencies and industry groups emphasized the importance of creating a competitive financial system, whereas members of one consumer advocacy group noted that reforms should focus on improving regulatory effectiveness rather than addressing concerns about market competitiveness. 
In addition, as the Federal Reserve notes, financial regulatory goals are often interdependent and at times may conflict. Revisiting the goals of financial regulation would also help ensure that all involved entities—legislators, regulators, institutions, and consumers—are able to work jointly to meet the intended goals of financial regulation. Such goals and objectives could help establish agency priorities and define responsibility and accountability for identifying risks, including those that cross markets and industries. Policymakers should also carefully define jurisdictional lines and weigh the advantages and disadvantages of having overlapping authorities. While ensuring that the primary goals of financial regulation—including system soundness, market integrity, and consumer protection—are better articulated for regulators, policymakers will also have to ensure that regulation is balanced with other national goals, including facilitating capital raising, innovation, and other benefits that foster long-term growth, stability, and welfare of the United States. Once these goals are agreed upon, policymakers will need to determine the extent to which goals need to be clarified and specified through rules and requirements, or whether to avoid such specificity and provide regulators with greater flexibility in interpreting such goals. Some reform proposals suggest “principles-based regulation” in which regulators apply broad-based regulatory principles on a case-by-case basis. Such an approach offers the potential advantage of allowing regulators to better adapt to changing market developments. Proponents also note that such an approach would prevent institutions in a more rules-based system from complying with the exact letter of the law while still engaging in unsound or otherwise undesirable financial activities. However, such an approach has potential limitations. 
Opponents note that regulators may face challenges in implementing such a subjective set of principles. A lack of clear rules about activities could lead to litigation if financial institutions and consumers alike disagree with how regulators interpret goals. Opponents of principles-based regulation note that industry participants who support such an approach have also in many cases advocated for bright-line standards and increased clarity in regulation, which may be counter to a principles-based system. The most effective approach may involve both a set of broad underlying principles and some clear technical rules prohibiting specific activities that have been identified as problematic. Key issues to be addressed: Clarify and update the goals of financial regulation and provide sufficient information on how potentially conflicting goals might be prioritized. Determine the appropriate balance of broad principles and specific rules that will result in the most effective and flexible implementation of regulatory goals. 2. Appropriately comprehensive. A regulatory system should ensure that financial institutions and activities are regulated in a way that ensures regulatory goals are fully met. As such, activities that pose risks to consumer protection, financial stability, or other goals should be comprehensively regulated, while recognizing that not all activities will require the same level of regulation. A financial regulatory system should effectively meet the goals of financial regulation, as articulated as part of this process, in a way that is appropriately comprehensive. In doing so, policymakers may want to consider how to ensure that both the breadth and depth of regulation are appropriate and adequate. 
That is, policymakers and regulators should consider how to make determinations about which activities and products, both new and existing, require some aspect of regulatory involvement to meet regulatory goals, and then make determinations about how extensive such regulation should be. As we have noted, gaps in the current level of federal oversight of mortgage lenders, credit rating agencies, and certain complex financial products such as CDOs and credit default swaps likely have contributed to the current crisis. Congress and regulators may also want to revisit the extent of regulation for entities such as banks that have traditionally fallen within full federal oversight but for which existing regulatory efforts, such as oversight related to risk management and lending standards, have in some cases proven inadequate in light of recent events. However, overly restrictive regulation can stifle the financial sector’s ability to innovate and stimulate capital formation and economic growth. Regulators have struggled to balance these competing objectives, and the current crisis appears to reveal that the proper balance was not in place in the regulatory system to date. Key issues to be addressed: Identify risk-based criteria, such as a product’s or institution’s potential to harm consumers or create systemic problems, for determining the appropriate level of oversight for financial activities and institutions. Identify ways that regulation can provide protection but avoid hampering innovation, capital formation, and economic growth. 3. Systemwide focus. A regulatory system should include a mechanism for identifying, monitoring, and managing risks to the financial system regardless of the source of the risk or the institutions in which it is created. A regulatory system should focus on risks to the financial system, not just institutions. 
As noted earlier, with multiple regulators primarily responsible for individual institutions or markets, none of the financial regulators is tasked with assessing the risks posed across the entire financial system by a few institutions or by the collective activities of the industry. As we noted earlier in the report, the collective activities of a number of entities—including mortgage brokers, real estate professionals, lenders, borrowers, securities underwriters, investors, rating agencies, and others—likely all contributed to the recent market crisis, but no one regulator had the necessary scope of oversight to identify the risks to the broader financial system. Similarly, once firms began to fail and the full extent of the financial crisis became clear, no formal mechanism existed to monitor market trends and potentially stop or help mitigate the fallout from these events. Having a single entity responsible for assessing threats to the overall financial system could prevent some of the crises that we have seen in the past. For example, in its Blueprint for a Modernized Financial Regulatory Structure, Treasury proposed expanding the responsibilities of the Federal Reserve to create a “market stability regulator” that would have broad authority to gather and disclose appropriate information, collaborate with other regulators on rulemaking, and take corrective action as necessary in the interest of overall financial market stability. Such a regulator could assess the systemic risks that arise at financial institutions, within specific financial sectors, across the nation, and globally. However, policymakers should consider that a potential disadvantage of providing the agency with such broad responsibility for overseeing nonbank entities could be that it may imply official government support or endorsement, such as a government guarantee, of such activities, and thus encourage greater risk taking by these financial institutions and investors. 
Regardless of whether a new regulator is created, all regulators under a new system should consider how their activities could better identify and address systemic risks posed by their institutions. As the Federal Reserve Chairman has noted, regulation and supervision of financial institutions is a critical tool for limiting systemic risk. This will require broadening the focus from the individual safety and soundness of institutions to a systemwide oversight approach that includes potential systemic risks and weaknesses. A systemwide focus should also increase attention on how the incentives and constraints created by regulations affect risk taking throughout the business cycle, and what actions regulators can take to anticipate and mitigate such risks. However, as the Federal Reserve Chairman has noted, the more comprehensive the approach, the more technically demanding and costly it would be for regulators and affected institutions. Key issues to be addressed: Identify approaches to broaden the focus of individual regulators or establish new regulatory mechanisms for identifying and acting on systemic risks. Determine what additional authorities a regulator or regulators should have to monitor and act to reduce systemic risks. 4. Flexible and adaptable. A regulatory system should be adaptable and forward-looking such that regulators can readily adapt to market innovations and changes and include a mechanism for evaluating potential new risks to the system. A regulatory system should be designed such that regulators can readily adapt to market innovations and changes and include a formal mechanism for evaluating the full potential range of risks of new products and services to the system, market participants, and customers. 
An effective system could include a mechanism for monitoring market developments—such as broad market changes that introduce systemic risk, or new products and services that may pose more confined risks to particular market segments—to determine the degree, if any, to which regulatory intervention might be required. The rise of a very large market for credit derivatives, while providing benefits to users, also created exposures that warranted actions by regulators to rescue large individual participants in this market. While efforts are under way to create risk-reducing clearing mechanisms for this market, a more adaptable and responsive regulatory system might have recognized this need earlier and addressed it sooner. Some industry representatives have suggested that principles-based regulation, as discussed above, would provide such a mechanism. Designing a system to be flexible and proactive also involves determining whether Congress, regulators, or both should make such determinations, and how such an approach should be clarified in laws or regulations. Important questions also exist about the extent to which financial regulators should actively monitor and, where necessary, approve new financial products and services as they are developed to minimize the harm from inappropriate products. Some individuals commenting on this framework, including industry representatives, noted that limiting government intervention in new financial activities until it has become clear that a particular activity or market poses a significant risk and therefore warrants intervention may be more appropriate. As with other key policy questions, this may be answered with a combination of both approaches, recognizing that a product approval approach may be appropriate for some innovations with greater potential risk, while other activities may warrant a more reactive approach. 
Key issues to be addressed: Determine how to effectively monitor market developments to identify potential risks; the degree, if any, to which regulatory intervention might be required; and who should hold such a responsibility. Consider how to strike the right balance between overseeing new products as they come onto the market to take action as needed to protect consumers and investors, without unnecessarily hindering innovation. 5. Efficient and effective. A regulatory system should provide efficient oversight of financial services by eliminating overlapping federal regulatory missions, where appropriate, and minimizing regulatory burden while effectively achieving the goals of regulation. A regulatory system should provide for the efficient and effective oversight of financial services. Accomplishing this in a regulatory system involves many considerations. First, an efficient regulatory system is designed to accomplish its regulatory goals using the least amount of public resources. In this sense, policymakers must consider the number, organization, and responsibilities of each agency, and eliminate undesirable overlap in agency activities and responsibilities. Determining what is undesirable overlap is a difficult decision in itself. Under the current U.S. system, financial institutions often have several options for how to operate their business and who will be their regulator. For example, a new or existing depository institution can choose among several charter options. Having multiple regulators performing similar functions does allow for these agencies to potentially develop alternative or innovative approaches to regulation separately, with the approach working best becoming known over time. Such proven approaches can then be adopted by the other agencies. 
On the other hand, this could lead to regulatory arbitrage, in which institutions take advantage of variations in how agencies implement regulatory responsibilities in order to be subject to less scrutiny. Both situations have occurred under our current structure. With that said, recent events clearly have shown that the fragmented U.S. regulatory structure contributed to failures by the existing regulators to adequately protect consumers and ensure financial stability. As we noted earlier, efforts by regulators to respond to the increased risks associated with new mortgage products were sometimes slowed in part because of the need for five federal regulators to coordinate their response. The Chairman of the Federal Reserve has similarly noted that the different regulatory and supervisory regimes for lending institutions and mortgage brokers made monitoring such institutions difficult for both regulators and investors. Similarly, we noted earlier in the report that the current fragmented U.S. regulatory structure has complicated some efforts to coordinate internationally with other regulators. One first step to addressing such problems is to seriously consider the need to consolidate depository institution oversight among fewer agencies. Since 1996, we have been recommending that the number of federal agencies with primary responsibilities for bank oversight be reduced. Such a move would make the system more efficient and would improve consistency in regulation, another important characteristic of an effective regulatory system. In addition, Congress could consider the advantages and disadvantages of providing a federal charter option for insurance and creating a federal insurance regulatory entity. We have not studied the issue of an optional federal charter for insurers, but have through the years noted difficulties with efforts to harmonize insurance regulation across states through the NAIC-based structure. 
The establishment of a federal insurance charter and regulator could help alleviate some of these challenges, but such an approach could also have unintended consequences for state regulatory bodies and for insurance firms as well. Also, given the challenges associated with increasingly complex investment and retail products as discussed earlier, policymakers will need to consider how best to align agency responsibilities to better ensure that consumers and investors are provided with clear, concise, and effective disclosures for all products. Organizing agencies around regulatory goals as opposed to the existing sector-based regulation may be one way to improve the effectiveness of the system, especially given some of the market developments discussed earlier. Whatever the approach, policymakers should seek to minimize conflict in regulatory goals across regulators, or provide for efficient mechanisms to coordinate in cases where goals inevitably overlap. For example, in some cases, the safety and soundness of an individual institution may have implications for systemic risk, or addressing an unfair or deceptive act or practice at a financial institution may have implications for the institution’s safety and soundness by increasing reputational risk. If a regulatory system assigns these goals to different regulators, it will be important to establish mechanisms for them to coordinate. Proposals to consolidate regulatory agencies for the purpose of promoting efficiency should also take into account any potential trade-offs related to effectiveness. For example, to the extent that policymakers see value in the ability of financial institutions to choose their regulator, consolidating certain agencies may reduce such benefits. 
Similarly, some individuals have commented that the current system of multiple regulators has led to the development of expertise among agency staff in particular areas of financial market activities that might be threatened if the system were consolidated. Finally, policymakers may want to ensure that any transition from the current financial system to a new structure minimizes, as much as possible, any disruption to the operation of financial markets or risks to the government, especially given the current challenges faced in today’s markets and broader economy. A financial system should also be efficient by minimizing the burden on regulated entities to the extent possible while still achieving regulatory goals. Under our current system, many financial institutions, and especially large institutions that offer services that cross sectors, are subject to supervision by multiple regulators. While steps toward consolidated supervision and designating primary supervisors have helped alleviate some of the burden, industry representatives note that many institutions face significant costs as a result of the existing financial regulatory system that could be lessened. Such costs, imposed in an effort to meet certain regulatory goals such as safety and soundness and consumer protection, can run counter to other goals of a financial system by stifling innovation and competitiveness. In addressing this concern, it is also important to consider the potential benefits that might result in some cases from having multiple regulators overseeing an institution. For example, representatives of state banking and other institution regulators, and consumer advocacy organizations, note that concurrent jurisdiction—between two federal regulators or a federal and state regulator—can provide needed checks and balances against individual financial regulators who have not always reacted appropriately and in a timely way to address problems at institutions. 
They also note that states may move more quickly and more flexibly to respond to activities causing harm to consumers. Some types of concurrent jurisdiction, such as enforcement authority, may be less burdensome to institutions than others, such as ongoing supervision and examination. Key issues to be addressed: Consider the appropriate role of the states in a financial regulatory system and how federal and state roles can be better harmonized. Determine and evaluate the advantages and disadvantages of having multiple regulators, including nongovernmental entities such as SROs, share responsibilities for regulatory oversight. Identify ways that the U.S. regulatory system can be made more efficient, either through consolidating agencies with similar roles or through minimizing unnecessary regulatory burden. Consider carefully how any changes to the financial regulatory system may negatively impact financial market operations and the broader economy, and take steps to minimize such consequences. 6. Consistent consumer and investor protection. A regulatory system should include consumer and investor protection as part of the regulatory mission to ensure that market participants receive consistent, useful information, as well as legal protections for similar financial products and services, including disclosures, sales practice standards, and suitability requirements. A regulatory system should be designed to provide high-quality, effective, and consistent protection for consumers and investors in similar situations. In doing so, it is important to recognize important distinctions between retail consumers and more sophisticated consumers such as institutional investors, where appropriate given the context of the situation. Different disclosures and regulatory protections may be necessary for these different groups. 
Consumer protection should be viewed from the perspective of the consumer rather than through the various and sometimes divergent perspectives of the multitude of federal regulators that currently have responsibilities in this area. As discussed earlier, many consumers who received loans in the last few years did not understand the risks associated with taking out their loans, especially in the event that housing prices did not continue to increase at the rate they had in recent years. In addition, increasing evidence exists that many Americans lack financial literacy, and the expansion of new and more complex products will continue to create challenges in this area. Furthermore, as noted above, regulators with existing authority to better protect consumers did not always exercise that authority effectively. In considering a new regulatory system, policymakers should consider the significant lapses in our regulatory system’s focus on consumer protection and ensure that such a focus is prioritized in any reform efforts. For example, policymakers should identify ways to improve upon the existing, largely fragmented, system of regulators that must coordinate to act in these areas. As noted above, this should include serious consideration of whether to consolidate regulatory responsibilities to streamline and improve the effectiveness of consumer protection efforts. Another way that some market observers have argued that consumer protections could be enhanced and harmonized across products is to extend suitability requirements—which require securities brokers making recommendations to customers to have reasonable grounds for believing that the recommendation is suitable for the customer—to mortgage and other products. Additional consideration could also be given to determining whether certain products are simply too complex to be well understood, and to making judgments about limiting or curtailing their use. 
Key issues to be addressed: Consider how prominent the regulatory goal of consumer protection should be in the U.S. financial regulatory system. Determine what amount, if any, of consolidation of responsibility may be necessary to enhance and harmonize consumer protections, including suitability requirements and disclosures across the financial services industry. Consider what distinctions are necessary between retail and wholesale products, and how such distinctions should affect how products are regulated. Identify opportunities to protect and empower consumers through improving their financial literacy. 7. Regulators provided with independence, prominence, authority, and accountability. A regulatory system should ensure that regulators have independence from inappropriate influence; have sufficient resources, clout, and authority to carry out and enforce statutory missions; and are clearly accountable for meeting regulatory goals. A regulatory system should ensure that any entity responsible for financial regulation is independent from inappropriate influence; has adequate prominence, authority, and resources to carry out and enforce its statutory mission; and is clearly accountable for meeting regulatory goals. With respect to independence, policymakers may want to consider the advantages and disadvantages of different approaches to funding agencies, especially to the extent that agencies might face difficulty remaining independent if they are funded by the institutions they regulate. Under the current structure, for example, the Federal Reserve is funded primarily by income earned from U.S. government securities that it has acquired through open market operations and does not assess charges to the institutions it oversees. In contrast, OCC and OTS are funded primarily by assessments on the firms they supervise. 
Decision makers should consider whether some of these various funding mechanisms are more likely to ensure that a regulator will take action against its regulated institutions without regard to the potential impact on its own funding. With respect to prominence, each regulator must receive appropriate attention and support from top government officials. Inadequate prominence in government may make it difficult for a regulator to raise safety and soundness or other concerns to Congress and the administration in a timely manner. Mere knowledge of a deteriorating situation would be insufficient if a regulator were unable to persuade Congress and the administration to take timely corrective action. This problem would be exacerbated if a regulated institution had more political clout and prominence than its regulator because the institution could potentially block action from being taken. In considering authority, agencies must have the necessary enforcement and other tools to effectively implement their missions to achieve regulatory goals. For example, as noted earlier, in a 2007 report we expressed concerns over the appropriateness of having OTS oversee diverse global financial firms given the size of the agency relative to the institutions for which it was responsible. It is important for a regulatory system to ensure that agencies are provided with adequate resources and expertise to conduct their work effectively. A regulatory system should also include adequate checks and balances to ensure the appropriate use of agency authorities. With respect to accountability, policymakers may also want to consider different governance structures at agencies—the current system includes a combination of agency heads and independent boards or commissions—and how to ensure that agencies are recognized for successes and held accountable for failures to act in accordance with regulatory goals. 
Key issues to be addressed: Determine how to structure and fund agencies to ensure each has adequate independence, prominence, tools, authority, and accountability. Consider how to provide an appropriate level of authority to an agency while ensuring that it appropriately implements its mission without abusing its authority. Ensure that the regulatory system includes effective mechanisms for holding regulators accountable. 8. Consistent financial oversight. A regulatory system should ensure that similar institutions, products, and services posing similar risks are subject to consistent regulation, oversight, and transparency, which should help minimize negative competitive outcomes while harmonizing oversight, both within the United States and internationally. Identifying which institutions and which of their products and services pose similar risks is not easy and involves a number of important considerations. Two institutions that look very similar may in fact pose very different risks to the financial system and therefore may call for significantly different regulatory treatment. However, activities conducted by different types of financial institutions that pose similar risks to their institutions or the financial system should be regulated similarly to prevent competitive disadvantages between institutions. Streamlining the regulation of similar products across sectors could also help prepare the United States for challenges that may result from increased globalization and potential harmonization in regulatory standards. Such efforts are under way in other jurisdictions.
For example, at a November 2008 summit in the United States, the Group of 20 countries pledged to strengthen their regulatory regimes and ensure that all financial markets, products, and participants are consistently regulated or subject to oversight, as appropriate to their circumstances. Similarly, a working group in the European Union is slated to propose, by spring 2009, ways to strengthen European supervisory arrangements, including addressing how European supervisors should cooperate with other major jurisdictions to help safeguard financial stability globally. Promoting consistency in regulation of similar products should be done in a way that does not sacrifice the quality of regulatory oversight. As we noted in a 2004 report, different regulatory treatment of bank and financial holding companies, consolidated supervised entities, and other holding companies may not provide a basis for consistent oversight of their consolidated risk management strategies, guarantee competitive neutrality, or contribute to better oversight of systemic risk. Recent events further underscore the limitations brought about when there is a lack of consistency in oversight of large financial institutions. As such, Congress and regulators will need to seriously consider how best to consolidate responsibilities for oversight of large financial conglomerates as part of any reform effort. Key issues to be addressed: Identify institutions and products and services that pose similar risks. Determine the level of consolidation necessary to streamline financial regulation activities across the financial services industry. Consider the extent to which activities need to be coordinated internationally. 9. Minimal taxpayer exposure. A regulatory system should have adequate safeguards that allow financial institution failures to occur while limiting taxpayers’ exposure to financial risk.
Policymakers should consider identifying the best safeguards and assignment of responsibilities for responding to situations in which taxpayers face significant exposures, and should consider providing clear guidelines for when regulatory intervention is appropriate. While an ideal system would allow firms to fail without negatively affecting other firms—and therefore avoid any moral hazard that may result—policymakers and regulators must consider the realities of today’s financial system. In some cases, the immediate use of public funds to prevent the failure of a critically important financial institution may be a worthwhile use of such funds if it ultimately serves to prevent a systemic crisis that would result in much greater use of public funds in the long run. However, an effective regulatory system that incorporates the characteristics noted above, especially by ensuring a systemwide focus, should be better equipped to identify and mitigate problems before it becomes necessary to make decisions about whether to let a financial institution fail. An effective financial regulatory system should also strive to minimize systemic risks resulting from interrelationships between firms and from limitations in market infrastructures that prevent the orderly unwinding of firms that fail. Another important consideration in minimizing taxpayer exposure is to ensure that financial institutions provided with a government guarantee that could result in taxpayer exposure are also subject to an appropriate level of regulatory oversight to fulfill the responsibilities discussed above. Key issues to be addressed: Identify safeguards that are most appropriate to prevent systemic crises while minimizing moral hazard.
Consider how a financial system can most effectively minimize taxpayer exposure to losses related to financial instability. Finally, although significant changes may be required to modernize the U.S. financial regulatory system, policymakers should consider carefully how best to implement the changes in such a way that the transition to a new structure does not hamper the functioning of the financial markets, individual financial institutions’ ability to conduct their activities, and consumers’ ability to access needed services. For example, if the changes require regulators or institutions to make systems changes, file registrations, or complete other activities that could require extensive time, the changes could be implemented in phases with specific target dates around which the affected entities could formulate plans. In addition, our past work has identified certain critical factors that should be addressed to ensure that any large-scale transitions among government agencies are implemented successfully. Although all of these factors are likely important for a successful transformation of the financial regulatory system, Congress and existing agencies should pay particular attention to ensuring there are effective communication strategies so that all affected parties, including investors and consumers, clearly understand any changes being implemented. In addition, attention should be paid to developing a sound human capital strategy to ensure that any new or consolidated agencies are able to retain and attract additional quality staff during the transition period. Finally, policymakers should consider how best to retain and utilize the existing skills and knowledge base within agencies subject to changes as part of a transition.
We provided the opportunity to review and comment on a draft of this report to representatives of 29 agencies and other organizations, including federal and state financial regulatory agencies, consumer advocacy groups, and financial service industry trade associations. A complete list of organizations that reviewed the draft is included in appendix II. All reviewers provided valuable input that was used in finalizing this report. In general, reviewers commented that the report represented a high-quality and thorough review of issues related to regulatory reform. We made changes throughout the report to increase its precision and clarity and to provide additional detail. For example, the Federal Reserve provided comments indicating that our report should emphasize that the traditional goals of regulation that we described in the background section are incomplete unless their ultimate purpose is considered, which is to promote the long-term growth, stability, and welfare of the United States. As a result, we expanded the discussion of our framework element concerning the need to have clearly defined regulatory goals to emphasize that policymakers will need to ensure that such regulation is balanced with other national goals, including facilitating capital raising and fostering innovation. In addition, we received formal written responses from the American Bankers Association, the American Council of Life Insurers, the Conference of State Bank Supervisors, Consumers Union, the Credit Union National Association, the Federal Deposit Insurance Corporation, the Mortgage Bankers Association, and the National Association of Federal Credit Unions, and a joint letter from the Center for Responsible Lending, the National Consumer Law Center, and U.S. PIRG; all formal written responses are included as appendixes to this report. Among the letters we received, various commenters raised additional issues regarding consumer protection and risky products. 
For example, in a joint letter, the Center for Responsible Lending, the National Consumer Law Center, and the U.S. PIRG noted that the best way to avoid systemic risk is to address problems that exist at the level of individual consumer transactions, before they pose a threat to the system as a whole. They also noted that although most subprime lending was done by nonbank lenders, overly aggressive practices for other loan types and among other lenders also contributed to the current crisis. In addition, they noted that to effectively protect consumers, the regulatory system must prohibit unsustainable lending and that disclosures and financial literacy are not enough. The letter from FDIC agreed that effective reform of the U.S. financial regulatory system would help avoid a recurrence of the economic and financial problems we are now experiencing. It also noted that irresponsible lending practices were not consistent with sound banking practices. FDIC’s letter also noted that the regulatory structure collectively permitted excessive levels of leverage in the nonbank financial system and that statutory mandates addressing consumer protection, aggressive lending practices, and leverage among firms would be as important for improving regulation as changing the regulatory structure. In its letter, Consumers Union urged that consumer protection be given priority equal to safety and soundness and that regulators act more promptly to address emerging risks rather than waiting until a problem has become national in scope. The letter indicated that Consumers Union supports an independent federal consumer protection agency for financial services and the ability of states to also develop and enforce consumer protections. We made changes in response to many of these comments.
For example, we enhanced our discussion of weaknesses in regulators’ efforts to oversee the sale of mortgage products that posed risks to consumers and the stability of the financial system, and we made changes to the framework to emphasize the importance of consumer protection. Several of the letters addressed issues regarding potential consolidation of regulatory agencies and the role of federal and state regulation. The letter from the American Bankers Association said that the current system of bank regulation and oversight has many advantages and that any reform efforts should build on those advantages. The letter also noted that there are benefits to having multiple federal regulators, as well as a dual banking system. The letter from the Conference of State Bank Supervisors agreed with our report that the U.S. regulatory system is complex and clearly has gaps, but cautioned that consolidating regulation and making decisions that could indirectly result in greater industry consolidation could exacerbate problems. The letter also indicated concern that our report does not fully acknowledge the importance of creating an environment that promotes a diverse industry to serve the nation’s diverse communities and prevents concentration of economic power in a handful of institutions. Our report does discuss the benefits of state regulation of financial institutions, but we did not address the various types of state institutions because we focused mainly on the federal role in overseeing our markets. In the past, our work has acknowledged that the dual banking system has benefits and that concentration in markets can have disadvantages. The Conference of State Bank Supervisors letter also noted that state efforts to respond to consumer abuses were stymied by federal preemption and that a regulatory structure should preserve checks and balances, avoid concentrations of power, and be more locally responsive.
In response to this letter, we also added information about the enactment of the Secure and Fair Enforcement for Mortgage Licensing Act, as part of the Housing and Economic Recovery Act, which requires enhanced licensing and registration of mortgage brokers. The letter from the National Association of Federal Credit Unions urged that an independent regulator for credit unions be retained because of the distinctive characteristics of federal credit unions. A letter from the Credit Union National Association also strongly opposed combining the credit union regulator or its insurance function with another agency. The letter from the Mortgage Bankers Association urged that a federal standard for mortgage lending be developed to provide greater uniformity than the currently diffuse set of state laws. It also supported consideration of federal regulation of independent mortgage bankers and mortgage brokers as a way of improving the uniformity and effectiveness of the regulation of these entities. A letter from the American Council of Life Insurers noted that the lack of a federal insurance regulatory office provides for uneven consumer protections and policy availability nationwide and hampers the country’s ability to negotiate internationally on insurance industry issues, and it urged that we include a discussion of the need to consider a greater federal role in the regulation of insurance. As a result, in the section where we discuss the need for efficient and effective regulation, we noted that harmonizing insurance regulation across states has been difficult and that Congress could consider the advantages and disadvantages of providing a federal charter option for insurance and creating a federal insurance regulatory entity. We are sending copies of this report to interested congressional committees and members.
In addition, we are sending copies to the federal financial regulatory agencies and associations representing state financial regulators, financial industry participants, and consumers, as well as to the President and Vice President, the President-Elect and Vice President-Elect, and other interested parties. The report also is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Orice M. Williams at (202) 512-8678 or williamso@gao.gov, or Richard J. Hillman at (202) 512-8678 or hillmanr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XII. Our report objectives were to (1) describe the origins of the current financial regulatory system, (2) describe various market developments and changes that have raised challenges for the current system, and (3) present an evaluation framework that can be used by Congress and others to craft or evaluate potential regulatory reform efforts going forward. To address all of these objectives, we synthesized existing GAO work on challenges to the U.S. financial regulatory structure and on criteria for developing and strengthening effective regulatory structures. These reports are referenced in footnotes in this report and noted in the Related GAO Products appendix. In particular, we relied extensively on our recent body of work examining the financial regulatory structure, culminating in reports issued in 2004 and 2007. We also reviewed existing studies, government documents, and other research for illustrations of how current and past financial market events have revealed limitations in our existing regulatory system and suggestions for regulatory reform. 
In addition, to gather input on challenges with the existing system and important considerations in evaluating reforms, we interviewed several key individuals with broad and substantial knowledge about the U.S. financial regulatory system—including a former Chairman of the Board of Governors of the Federal Reserve System (Federal Reserve), a former high-level executive at a major investment bank who had also served in various regulatory agencies, and an international financial organization official who had also served in various regulatory agencies. We selected these individuals from a group of notable officials, academics, legal scholars, and others we identified as part of this and other GAO work, including a 2007 expert panel on financial regulatory structure. We selected individuals to interview in an effort to gather government, industry, and academic perspectives, including on international issues. In some cases, due largely to the market turmoil at the time of our study, we were unable to or chose not to reach out to certain individuals, but we took steps to ensure that we selected other individuals who would meet our criteria. To develop the evaluation framework, we also convened a series of three forums in which we gathered comments on a preliminary draft of our framework from a wide range of representatives of federal and state financial regulatory agencies, financial industry associations and institutions, and consumer advocacy organizations.
In particular, at a forum held on August 19, 2008, we gathered comments from representatives of financial industry associations and institutions, including the American Bankers Association, the American Council of Life Insurers, The Clearing House, Columbia Bank, the Independent Community Bankers of America, The Financial Services Roundtable, Fulton Financial Corporation, the Futures Industry Association, the Managed Funds Association, the Mortgage Bankers Association, the National Association of Federal Credit Unions, the Securities Industry and Financial Markets Association, and the U.S. Chamber of Commerce. We worked closely with representatives at the American Bankers Association—which hosted the forum at its Washington, D.C., headquarters—to identify a comprehensive and representative group of industry associations and institutions. At a forum held on August 27, 2008, we gathered comments from representatives of consumer advocacy organizations, including the Center for Responsible Lending, the Consumer Federation of America, the Consumers Union, the National Consumer Law Center, and the U.S. PIRG. We invited a comprehensive list of consumer advocacy organization representatives—compiled based on extensive dealings with these groups from current and past work—to participate in this forum and hosted it at GAO headquarters in Washington, D.C. 
At a forum held on August 28, 2008, we gathered comments from representatives of federal and state banking, securities, futures, insurance, and housing regulatory oversight agencies, including the Commodity Futures Trading Commission, the Conference of State Bank Supervisors, the Department of the Treasury, the Federal Deposit Insurance Corporation, the Federal Housing Finance Agency, the Federal Reserve, the Financial Industry Regulatory Authority, the National Association of Insurance Commissioners, the National Credit Union Administration, the North American Securities Administrators Association, the Office of the Comptroller of the Currency, the Office of Thrift Supervision, the Public Company Accounting Oversight Board, and the Securities and Exchange Commission. We worked closely with officials at the Federal Reserve—which hosted the forum at its Washington, D.C., headquarters—to identify a comprehensive and representative group of federal and state financial regulatory agencies. We conducted this work from April 2008 to December 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix V: Comments from the Conference of State Bank Supervisors Orice M. Williams Director Financial Markets and Community Investment U.S. Government Accountability Office 441 G Street, NW Washington, DC 20548 Thank you for the opportunity to submit a second written comment in response to the GAO’s upcoming report on the financial regulatory framework of the United States.
The Conference of State Bank Supervisors (CSBS) recognizes that the current regulatory structure at both the state and federal levels is sometimes complex for the industry, regulators, consumers, and policymakers to navigate. As financial institutions and service providers increase in size, complexity, and operations, our regulatory system must reflect this evolution. The current economic stresses have also shown that our financial regulatory system must better address the interconnected risks of the capital markets and our banking system. CSBS is committed to working with the GAO, our federal counterparts, Congress, industry associations, and consumer advocates to further the development of a fair and efficient regulatory system that provides sufficient consumer protection and serves the interests of financial institutions and financial service providers, while ultimately strengthening the U.S. economy as a whole. We believe that changes are needed in both regulation and the way our regulatory structure functions to better respond to consumer needs and address systemic risks and market integrity. We are very concerned, however, that federal policy that addresses nationwide and global regulatory business models continues to threaten—or perhaps eliminate—the greatest strengths of our system. Specifically, we see policies that promote the needs of the very largest financial institutions at the expense of consumers, important federal checks and balances, and the diversity of banking and other financial institutions that is critical to our state economies. The current financial regulatory structure allows for a diverse universe of financial institutions of varying sizes.
While the financial industry continues to consolidate at a rapid pace, there are still well over 8,000 financial institutions operating within the United States, some of which are as small as $1 million in assets. Obviously, our nation’s largest money center banks play a critical role in the economy. However, even the smallest bank in the country is absolutely critical to the economic health of the community in which it operates. The complexity of the system is presented as a major source of the current financial crisis. While there are clearly gaps in our regulatory system and the system is undeniably complex, CSBS has observed that the greater failing of the system has been one of insufficient political and regulatory will, primarily at the federal level. We believe that decisions to consolidate regulation do not fix, but rather exacerbate, this problem. Moreover, CSBS is deeply concerned that the GAO study does not fully appreciate the importance of creating an environment that promotes a diverse industry which serves our nation’s diverse communities and avoids a concentration of economic and political power in a handful of institutions. Specifically, we are offering the following comments on the elements of a successful supervisory framework. Clearly Defined Regulatory Goals Generally, we agree with the GAO’s goals of a regulatory system that ensures adequate consumer protections, ensures the integrity and fairness of markets, monitors the safety and soundness of institutions, and acts to ensure the stability of the overall financial system. We disagree, however, with the GAO’s claim that the safety and soundness goal is necessarily in direct conflict with the goal of consumer protection. It has been the experience of state regulators that the very opposite can be true.
Indeed, consumer protection should be recognized as integral to the safety and soundness of financial institutions and service providers. The health of a financial institution ultimately is connected to the health of its customers. However, we have observed that federal regulators, without the checks and balances of more locally responsive state regulators or state law enforcement, do not always give fair weight to consumer issues or have the perspective to understand consumer issues. We consider this a significant weakness of the current system. Federal preemption of state law and state law enforcement by the Office of the Comptroller of the Currency and the Office of Thrift Supervision has resulted in less responsive consumer protections and institutions that are much less responsive to the needs of consumers in our states. Appropriately Comprehensive CSBS disagrees that federal regulators were unable to identify the risks to the financial system because they did not have the necessary scope of oversight. As previously noted, we believe it was a failure of regulatory will and a philosophy of self-regulating markets that allowed for risks to develop. CSBS strongly believes a “comprehensive” system of regulation should not be construed as a consolidated regime under one single regulator. Instead, “comprehensive” should describe a regulatory system that is able to adequately supervise a broad, diverse, and dynamic financial industry. We believe that the checks and balances of the dual system of federal and state supervision are more likely to result in comprehensive and meaningful coverage of the industry. From a safety and soundness perspective and from a consumer protection standpoint, the public is better served by a coordinated regulatory network that benefits from both the federal and state perspectives. We believe the Federal Financial Institutions Examination Council (FFIEC) could be much better utilized to accomplish this approach.
Systemwide Focus The GAO report states “a regulatory system should include a mechanism for identifying, monitoring, and managing risks to the financial system regardless of the source of the risk or the institutions in which it was created.” CSBS agrees with this assessment. Our current crisis has shown us that our regulatory structure was incapable of effectively managing and regulating the nation’s largest institutions. CSBS believes the solution, however, is not to expand the federal government bureaucracy by creating a new super regulator. Instead, we should enhance coordination and cooperation among the federal government and the states. We believe regulators must pool resources and expertise to better manage systemic risk. The FFIEC provides a vehicle for working towards this goal of seamless federal and state cooperative supervision. In addition, CSBS provides significant coordination among the states as well as with federal regulators. This coordinating role reached new levels when Congress adopted the Riegle-Neal Act to allow for interstate banking and branching. The states, through CSBS, quickly followed suit by developing the Nationwide Cooperative Agreement and the State-Federal Supervisory Agreement for the supervision of multi-state banks. Most recently, the states launched the Nationwide Mortgage Licensing System (NMLS) and a nationwide protocol for mortgage supervision. Further, the NMLS is the foundation for the recently enacted Secure and Fair Enforcement for Mortgage Licensing Act of 2008, or the S.A.F.E. Act. The S.A.F.E. Act establishes minimum mortgage licensing standards and a coordinated network of state and federal mortgage supervision. Flexible and Adaptable CSBS agrees that a regulatory system should be adaptable and forward-looking so that regulators can readily adapt to market innovations and changes and include a mechanism for evaluating potential new risks to the system.
In fact, this is one of the greatest strengths of the state system. The traditional dynamic of the dual-banking system of regulation has been that the states experiment with new products, services, and practices that, upon successful implementation, Congress later enacts on a nationwide basis. In addition, state bank examiners are often the first to identify and address economic problems. Often, states are the first responders to almost any problem in the financial system. The states can—and do—respond to these problems much more quickly than the federal government, as evidenced by escalating state responses to the excesses and abuses of mortgage lending over the past decade. Unfortunately, the federal response was to thwart rather than encourage these policy responses. Efficient and Effective In the report, GAO asserts that a system should provide for efficient and effective oversight by eliminating overlapping federal regulatory missions and minimizing regulatory burden. CSBS believes efficiency must not be achieved at the cost of protecting consumers, providing for a competitive industry that serves all communities, or maintaining the safety and soundness of financial institutions. We recognize that our regulatory structure is complex and may not be as efficient as some in the industry would prefer. There is undoubtedly a need for improved coordination and cooperation among functional regulators. However, this efficiency must not be met through the haphazard consolidation or destruction of supervisory agencies and authorities. CSBS strongly believes that it is more important to preserve a regulatory framework with checks and balances among and between regulators. This overlap does not need to be a negative characteristic of our system. Instead, it has most often offered additional protection for our consumers and institutions. We believe that the weakening of these overlaps in recent years weakened our system and contributed to the current crisis.
In addition, we should consider how “efficient” is defined. Efficient does not inherently mean effective. Our ideal regulatory structure should balance what is efficient for large and small institutions as well as what is efficient for consumers and our economy. While a centralized and consolidated regulatory system may look efficient on paper or benefit our largest institutions, the outcomes may be inflexible and geared solely toward the largest banks at the expense of small community institutions, the consumer, or our diverse economy. Consistent Consumer and Investor Protection The states have long been regarded as leaders in the consumer protection arena. This is an area where the model of states acting as laboratories of innovation is clearly working. State authorities often discover troubling practices, trends, or warning signs before the federal agencies can identify these emerging concerns. State authorities and legislatures are then able to respond quickly to protect consumers. Congress and federal regulators can then rely on state experience to develop uniform and nationwide standards or best practices. Ultimately, we believe the federal government is simply not able to respond quickly enough to emerging threats and consumer protection issues. State authorities have also been frustrated by federal preemption of state consumer protection laws. If Congress were to act to repeal or more clearly limit these preemptions, states would be able to more effectively and consistently enforce consumer protection laws. CSBS also agrees that there were significant loopholes and unequal regulation and examination of the mortgage industry. In fact, the states led the way to address these regulatory gaps. However, in describing where subprime lending occurred, we believe the report should acknowledge the fact that subprime lending took place in nearly equal parts between nonbank lenders and institutions subject to federal bank regulation.
Federal regulation of operating subsidiaries has been inconsistent at best and nonexistent at worst. As acknowledged in the report, affiliate regulation for consumer compliance simply did not exist at the federal level until a recent pilot project led by the Federal Reserve was initiated. The report also fails to acknowledge the very significant reforms of mortgage regulation adopted by Congress under the S.A.F.E. Act or the major efforts the states have undertaken to regulate nonbank mortgage lenders and originators.

Regulators Provided with Independence, Prominence, Authority, and Accountability

The dual-banking system helps preserve both regulator independence and accountability. The state system of chartering, with an independent primary federal regulator, probably serves as the best model for this goal.

Consistent Financial Oversight

Consistency in regulation is important, but our financial system must also be flexible enough to allow all of our diverse institutions to flourish. The diversity of our nation’s banking system has created the most dynamic and powerful economy in the world, despite the current problems we are experiencing. The strength at the core of our banking system is that it is comprised of thousands of financial institutions of vastly different sizes. Even as our largest banks struggle to survive, the vast majority of community banks remain strong and continue to provide financial services to their local citizens. It is vital that a one-size-fits-all regulatory system not adversely affect the industry by putting smaller banks at a competitive disadvantage with larger, more complex institutions. It is our belief that the report should acknowledge the role of federal preemption of state consumer protections and the lack of responsiveness of federal law and regulation to mortgage lending and consumer protection issues.
For example, the states began responding in 1999 to circumventions of HOEPA and consumer abuses related to subprime lending. Nine years later, and two years into a nationwide subprime crisis, Congress has not yet been able to adopt a predatory lending law. We believe that some industry advocates have pushed for preemption to prevent the states from developing legislative and regulatory models for consumer protection, and because they have been successful in thwarting legislation and significant regulation at the federal level.

Minimal Taxpayer Exposure

CSBS strongly agrees that a regulatory system should have adequate safeguards that allow financial institution failures to occur while limiting taxpayers’ exposure to financial risk. Part of this process must be to prevent institutions from becoming “too big to fail,” “too systemic to fail,” or simply too big to regulate. Specifically, the federal government must have regulatory tools in place to manage the orderly failure of the largest institutions rather than continuing to prop up failed systemic institutions.

CSBS Principles of Regulatory Reform

While numerous proposals will be advanced to overhaul the financial regulatory system, CSBS believes the structure of the regulatory system should:

1. Usher in a new era of cooperative federalism, recognizing the rights of states to protect consumers and reaffirming the state role in chartering and supervising financial institutions.

2. Foster supervision that is tailored to the size, scope, and complexity of the institution and the risk it poses to the financial system.

3. Assure the promulgation and enforcement of consumer protection standards that are applicable to both state and nationally chartered financial institutions and are enforceable by locally responsive state officials against all such institutions.

4. Encourage a diverse universe of financial institutions as a method of reducing risk to the system, encouraging competition, furthering innovation, ensuring access to financial markets, and promoting efficient allocation of credit.

5. Support community and regional banks, which provide relationship lending and fuel local economic development.

6. Require financial institutions that are recipients of governmental protection or pose systemic risk to be subject to safety and soundness and consumer protection oversight.

The states, through CSBS and the State Liaison Committee’s involvement on the FFIEC, will be part of any solution to regulatory restructuring or our current economic condition. We want to ensure that consumers are protected and to preserve the viability of both the federal and state charters to ensure the success of our dual-banking system and our economy as a whole. CSBS believes there is significant work to be done on this issue, and we commend the GAO for undertaking this report.

Appendix IX: Comments from the Mortgage Bankers Association

Ms. Orice M. Williams
Director, Financial Markets and Community Investment
U.S. Government Accountability Office
441 G Street, N.W.
Washington, D.C. 20548

The Mortgage Bankers Association greatly appreciates the opportunity to comment on the forthcoming report of the United States Government Accountability Office entitled "Financial Regulation: A Framework for Crafting and Assessing Proposals to Modernize the Outdated U.S. Regulatory System." MBA strongly supports the improvement of the regulatory requirements and the regulatory structure for mortgage lending and commends GAO’s efforts in this vital area.
MBA’s main comments are that the report should recognize that: (1) responsibility for the current financial crisis is diffuse; (2) solutions recommended for the lending sphere should include consideration of a uniform mortgage lending standard that preempts state lending standards; and (3) federal regulation of at least independent mortgage bankers deserves discussion. In MBA’s view, the factors contributing to the current crisis are manifold. They include, but are not limited to, traditional factors such as unemployment and family difficulties, high real estate prices and overbuilding, extraordinary appetites for returns, the lowering of lending standards to satisfy investor and borrower needs, the growth of unregulated and lightly regulated entities and, to some degree, borrower misjudgment and even fraud. In MBA’s view, no single actor or group of actors can fairly be assigned sole or even predominant blame for where we are today. On the other hand, MBA strongly believes that all of these contributing factors deserve review as we fashion regulatory solutions. Specifically, with respect to mortgage lending, MBA believes that the crisis presents an unparalleled opportunity to reevaluate the current regulatory requirements and structure for mortgage lending to protect the nation going forward. MBA has long supported establishment of a uniform national mortgage lending standard that establishes strong federal protections, preempts the web of state laws, and updates and expands federal requirements. Currently, lending is governed, and consumers are protected by, a patchwork of more than 30 different state laws piled on top of federal requirements. Some state laws are overly intrusive and some are weak. The federal requirements in some cases are duplicative and in some areas are out-of-date. In some states, there are no lending laws and borrowers have little protection beyond federal requirements.
December 18, 2008
GAO Comment Letter, Page 2

MBA believes legislators should look at the most effective state and federal approaches and work with stakeholders to fashion a new uniform standard that is appropriately up-to-date and robust, applies to every lender, and protects every borrower. It should be enacted by Congress and preempt state laws. A uniform standard would help restore investor confidence and be the most effective and least costly means of protecting consumers against lending abuses nationwide. Having one standard would avoid undue compliance costs, facilitate competition, and ultimately decrease consumer costs. MBA recognizes that one of the key objections to a preemptive national standard is that it would not be flexible and adaptable and would preclude state responses to future abuse. MBA believes this problem is surmountable and could be resolved by injecting dynamism into the law. One approach would be to supplement the law as needed going forward with new prohibitions and requirements formulated by federal and state officials in consultation. Currently, some mortgage lenders are regulated as federal depository institutions, some as state depositories, and some as state-regulated non-depositories. MBA believes that along with establishment of a uniform standard, a new federal regulator for independent mortgage bankers and mortgage brokers should be considered, and MBA is interested in exploring that possibility. A new regulator should have sufficient authority to assure prudent operations and to address the financing needs of consumers. If such an approach is adopted, states also could maintain a partnership with the federal regulator in examination, enforcement, and licensing. MBA believes the combined efforts of state and federal officials in regulatory reviews and enforcement under a uniform standard would greatly increase regulatory effectiveness and focus.
Notably, any new regulatory scheme should address the differing regulatory concerns presented by mortgage bankers and by mortgage brokers, considering their differing functions and the differing policy concerns that the respective industries present. MBA has written extensively on this subject and commends to GAO’s attention the attached report entitled Mortgage Bankers and Mortgage Brokers: Distinct Businesses Warranting Distinct Regulation (2008). Again, MBA strongly believes today’s financial difficulties present an unparalleled opportunity to establish better regulation in the years to come. Today’s financial crisis reminds us daily that financial markets are national and international in scope. As the crisis worsened, the world looked to national and international governments for solutions. MBA believes it would be unwise not to use this moment to establish a national standard and cease dispersing regulatory responsibility, to help prevent crises ahead. Thank you again for the opportunity to comment.

Appendix XI: Comments from the Center for Responsible Lending, the National Consumer Law Center, and the U.S. PIRG

VIA EMAIL AND U.S. MAIL

Ms. Orice M. Williams (williamso@gao.gov)
Director, Financial Markets and Community Investment
U.S. Government Accountability Office
441 G Street, N.W.
Washington, D.C. 20548

with copies via email to:
Mr. Cody Goebel, Assistant Director (goebelc@gao.gov)
Mr. Randall Fasnacht (fasnachtr@gao.gov)

Re: Comments on Draft Report, GAO-09-216

We appreciate the opportunity to review the draft report at your offices on December 4, and to offer comments. These are offered jointly by CRL, the National Consumer Law Center, and USPIRG. The report is a thoughtful and thorough review of the structural issues regarding regulatory reform. We especially appreciate that your report notes the problem of charter competition and the distorting impact of the funding structure for the banking regulators.
We would like to preface our comments by stating the obvious – that this review does not occur in a vacuum, but rather in the context of a major crisis which exposed fundamental weaknesses on many fronts. The structural problems in the federal regulatory system are but one. Some of these comments derive not from the specific content of the report, but from the messages conveyed by some of the references to other aspects of the crisis, such as the nature of the market and consumer behavior. Another especially important comment derives as much from what is left unsaid as from what is said. Perhaps it seems as though it should go without saying, but given much of the debate that this crisis has engendered, we fear that without at least an acknowledgement of what is not addressed by your report, necessary reminders of other integral parts of regulatory reform may be lost. While the structure of regulation can create its own problems, such as the potential for charter competition and regulatory capture that you note, regulators also need tools (in the form of laws to enforce, or directives to promulgate rules in furtherance of such laws), adequate resources and, above all, the will to regulate. No amount of structural reform will succeed if regulators have no charge to fulfill in their job, nor the will to do so. We have had three decades of a deregulatory agenda, and without a change in that overarching view, structural changes will be insufficient. We recognize that the prevailing philosophy of regulation was not the focus of this report. However, we believe that any discussion of regulatory structural reform must be accompanied by an explicit caveat that it addresses only one aspect of the overall regulatory issues that contributed to this crisis, and that changing the structure alone will be insufficient if these other necessary conditions for effective oversight are not reformed as well. Beyond that overarching context for regulatory reform, we offer the following comments.

1. The best way to avoid systemic risk is to address problems that exist at the level of individual consumer transactions, before they pose a threat to the system as a whole.

The report appropriately addresses the need to effectively monitor and regulate problems that threaten the financial system as a whole. However, the most effective way to address systemic risk is to identify market failures that threaten abuse of individual consumers, and to address these failures before they threaten the system as a whole. The crisis today would not have reached its current state had problems been addressed and prevented before they evolved into the foreclosure epidemic now underway. The report correctly notes that most subprime lending was done by nonbank lenders who were not subject to oversight by the federal banking agencies. However, the market failures that contributed to the current crisis are not limited to the subprime market. The failure of the Alt-A market, including poorly underwritten non-traditional loans, is also a significant contributor, as is becoming increasingly apparent. The failures of IndyMac and Washington Mutual, among others, are largely the function of overly aggressive lending of risky products that were unsuitable for far too many borrowers, and these did occur under the watch of the federal banking agencies. Though the federal banking agencies issued some guidelines for nontraditional lending, it was too little and too late. Further, to judge from the performance of the late vintages of these loans, even then, they were insufficiently enforced. But in any case, neither bank nor nonbank lenders were subject to adequate consumer protection laws.
Both banks and non-bank lenders pressed legislators and regulators not to enact such protections. Furthermore, banks subject to federal regulation also contributed to the problem by being part of the secondary market’s demand for the risky products that permeated the subprime and Alt-A markets. The report should make clear that to adequately protect consumers, and avoid systemic risk in the future, whatever regulatory structure emerges will need to be more robust and effective in protecting consumers than the current system has been to date.

2. To effectively protect consumers the regulatory system must prohibit unsustainable lending; disclosures and “financial literacy” are not enough.

The fundamental problem at the heart of today’s crisis is that loan originators pushed borrowers into loan products that were inherently risky and unsustainable by design, and they did so notwithstanding the availability of the more suitable and affordable loans for which they qualified. The most common product in the subprime market in recent years was not merely an adjustable rate mortgage, but rather an adjustable rate mortgage with built-in payment shock that lenders anticipated most borrowers could not afford, but that they could avoid only by refinancing before the payment shock took effect, typically paying 3% to 4% of the loan balance as a “prepayment penalty” in order to refinance. According to a Wall Street Journal study, 55% of the borrowers who received such loans in 2005, and 60% of those who received them in 2006, had credit scores high enough to have qualified for lower cost prime loans. And even those borrowers who did not qualify for prime could have had 30-year fixed rate loans for approximately 65 basis points above the introductory rate on the loans they received. The report suggests incorrectly (pp.
43-44) that subprime loans “help borrowers afford houses” they could not otherwise afford, when in fact most subprime loans refinanced existing loans rather than financed new home purchases. But in either case, had borrowers been offered the more suitable loans for which many qualified, many more borrowers could have sustained homeownership. The experience with the recent vintages of Alt-A loans is similarly instructive. Chris Ferrell, an economics editor with the NPR program Marketplace, referred to the Payment Option ARM product (many of which are Alt-A) as “the most complicated mortgage product ever marketed to consumers.” The greater the complexity, the less suitable disclosure is as a “market perfecting” tool. Further, the huge jump in payment option ARMs (from $145 billion to $255 billion between 2004 and 2007) was possible primarily because of increasingly poor underwriting. Countrywide, one of the major issuers of these loans (which issued them under both its national bank and federal thrift charters, as well as some of its non-depository entities), admitted that an estimated 80% of its recent POARMs would not meet the late 2006 federal guidelines. The Federal Reserve has noted that, given the misaligned incentives of originators and the complexity of products and loan features, even with increased information or knowledge, borrowers could not have defended against poorly underwritten, risky products and deceptive practices. The main problem with these loans was not the inadequacy of the disclosures or the financial literacy of the borrowers. Rather, the fundamental problem was that – as the federal banking regulators belatedly recognized with respect to non-traditional loans in late 2006 and subprime lending in 2007 – lenders should not have made loans that they knew borrowers would be unable to sustain without refinancing.

3.
To effectively protect consumers, the regulatory system must monitor and address market incentives that encourage loan originators to push risky or unsuitable loan products.

The report correctly notes that market incentives encouraged loan originators to extend excessive credit (p. 22). It should also note that these same incentives encouraged them to push riskier products and features than those for which the borrowers qualified. The report should note the need for regulatory oversight of market failures that reward market participants for irresponsible behavior. We understand that philosophies of consumer protection and the adequacy of consumer protection laws are not your intended focus. However, there were occasional statements in the report which, intended or not, seemed to convey a message that improved disclosure or literacy would be adequate. Yet more people – including some of the regulators themselves – are recognizing that in an era of highly complex products and unseen perverse incentives, disclosure is an insufficient tool, and literacy is an elusive goal. We would be happy to provide further information.

Orice M. Williams, (202) 512-8678 or williamso@gao.gov, or Richard J. Hillman, (202) 512-8678 or hillmanr@gao.gov.

In addition to the contacts named above, Cody Goebel (Assistant Director), Kevin Averyt, Nancy Barry, Rudy Chatlos, Randy Fasnacht, Jeanette Franzel, Thomas McCool, Jim McDermott, Kim McGatlin, Thomas Melito, Marc Molino, Susan Offutt, Scott Purdy, John Reilly, Barbara Roesmann, Paul Thompson, Winnie Tsen, Jim Vitarello, and Steve Westley made key contributions to this report.

Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-161. Washington, D.C.: December 2, 2008.

Hedge Funds: Regulators and Market Participants Are Taking Steps to Strengthen Market Discipline, but Continued Attention Is Needed. GAO-08-200. Washington, D.C.: January 24, 2008.
Information on Recent Default and Foreclosure Trends for Home Mortgages and Associated Economic and Market Developments. GAO-08-78R. Washington, D.C.: October 16, 2007.

Financial Regulation: Industry Trends Continue to Challenge the Federal Regulatory Structure. GAO-08-32. Washington, D.C.: October 12, 2007.

Financial Market Regulation: Agencies Engaged in Consolidated Supervision Can Strengthen Performance Measurement and Collaboration. GAO-07-154. Washington, D.C.: March 15, 2007.

Alternative Mortgage Products: Impact on Defaults Remains Unclear, but Disclosure of Risks to Borrowers Could Be Improved. GAO-06-1021. Washington, D.C.: September 19, 2006.

Credit Cards: Increased Complexity in Rates and Fees Heightens Need for More Effective Disclosures to Consumers. GAO-06-929. Washington, D.C.: September 12, 2006.

Financial Regulation: Industry Changes Prompt Need to Reconsider U.S. Regulatory Structure. GAO-05-61. Washington, D.C.: October 6, 2004.

Consumer Protection: Federal and State Agencies Face Challenges in Combating Predatory Lending. GAO-04-280. Washington, D.C.: January 30, 2004.

Long-Term Capital Management: Regulators Need to Focus Greater Attention on Systemic Risk. GAO/GGD-00-3. Washington, D.C.: October 29, 1999.

Financial Derivatives: Actions Needed to Protect the Financial System. GAO/GGD-94-133. Washington, D.C.: May 18, 1994.
The United States and other countries are in the midst of the worst financial crisis in more than 75 years. While much of the attention of policymakers understandably has been focused on taking short-term steps to address the immediate nature of the crisis, these events have served to strikingly demonstrate that the current U.S. financial regulatory system is in need of significant reform. To help policymakers better understand existing problems with the financial regulatory system and craft and evaluate reform proposals, this report (1) describes the origins of the current financial regulatory system, (2) describes various market developments and changes that have created challenges for the current system, and (3) presents an evaluation framework that can be used by Congress and others to shape potential regulatory reform efforts. To do this work, GAO synthesized existing GAO work and other studies and met with dozens of representatives of financial regulatory agencies, industry associations, consumer advocacy organizations, and others. Twenty-nine regulators, industry associations, and consumer groups also reviewed a draft of this report and provided valuable input that was incorporated as appropriate. In general, reviewers commented that the report represented an important and thorough review of the issues related to regulatory reform. The current U.S. financial regulatory system has relied on a fragmented and complex arrangement of federal and state regulators--put into place over the past 150 years--that has not kept pace with major developments in financial markets and products in recent decades. As the nation finds itself in the midst of one of the worst financial crises ever, the regulatory system increasingly appears to be ill-suited to meet the nation's needs in the 21st century. 
Today, responsibilities for overseeing the financial services industry are shared among almost a dozen federal banking, securities, futures, and other regulatory agencies, numerous self-regulatory organizations, and hundreds of state financial regulatory agencies. Much of this structure developed as the result of statutory and regulatory changes that were often implemented in response to financial crises or significant developments in the financial services sector. For example, the Federal Reserve System was created in 1913 in response to financial panics and instability around the turn of the century, and much of the remaining structure for bank and securities regulation was created as a result of the turmoil of the Great Depression in the 1920s and 1930s. Several key changes in financial markets and products in recent decades have highlighted significant limitations and gaps in the existing regulatory system. First, regulators have struggled, and often failed, to mitigate the systemic risks posed by large and interconnected financial conglomerates and to ensure that they adequately manage their risks. The proportion of firms operating as conglomerates that cross the financial sectors of banking, securities, and insurance increased significantly in recent years, but none of the regulators is tasked with assessing the risks posed across the entire financial system. Second, regulators have had to address problems in financial markets resulting from the activities of large and sometimes less-regulated market participants--such as nonbank mortgage lenders, hedge funds, and credit rating agencies--some of which play significant roles in today's financial markets. Third, the increasing prevalence of new and more complex investment products has challenged regulators and investors, and consumers have faced difficulty understanding new and increasingly complex retail mortgage and credit products.
Regulators failed to adequately oversee the sale of mortgage products that posed risks to consumers and the stability of the financial system. Fourth, standard setters for accounting and financial regulators have faced growing challenges in ensuring that accounting and audit standards appropriately respond to financial market developments, and in addressing challenges arising from the global convergence of accounting and auditing standards. Finally, despite the increasingly global aspects of financial markets, the current fragmented U.S. regulatory structure has complicated some efforts to coordinate internationally with other regulators.
DOD depends overwhelmingly on the U.S. commercial electrical power grid for electricity to support its operations and missions. As illustrated in figures 1 and 2, the grid is a vast, complex network of interconnected regional systems and infrastructure (e.g., power plants, electricity lines, and control centers) used to generate, transmit, distribute, and manage electrical power supplies across the United States. According to the Defense Science Board Task Force on DOD Energy Strategy, approximately 99 percent of the electrical power DOD installations consume originates from outside installation boundaries, while approximately 85 percent of the energy infrastructure that DOD relies on for electrical power is commercially owned and outside of DOD’s control. There are currently a variety of mechanisms in place that may help to mitigate the risk of losing electricity service due to electrical power disruptions, including mandatory reliability standards for the electrical power industry approved by the Federal Energy Regulatory Commission. In addition, other risk mitigation measures are being considered, such as islanding. However, while the U.S. commercial electrical power grid is generally a reliable source of electricity and is subject to some reliability standards that typically assure its availability over 99 percent of the time, concerns have been raised about the increasing vulnerability of the grid to more frequent or longer electrical power disturbances.
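The "over 99 percent" availability figure can be put in rough perspective with a back-of-the-envelope calculation. The sketch below is purely illustrative (it is not a DOD, FERC, or industry methodology); it simply converts an availability percentage into the annual outage time that level still permits, assuming an 8,760-hour year:

```python
# Illustrative only: convert an availability level into the annual outage
# time it still permits. Assumes a 365-day (8,760-hour) year.
HOURS_PER_YEAR = 24 * 365

def downtime_hours(availability: float) -> float:
    """Maximum hours per year a supply can be unavailable at the given availability."""
    return (1.0 - availability) * HOURS_PER_YEAR

for a in (0.99, 0.999, 0.9999):
    print(f"{a:.2%} availability -> up to {downtime_hours(a):.1f} outage hours per year")
```

Even at 99 percent availability, the grid could still be unavailable for up to roughly 87.6 hours in a year, which helps illustrate why more frequent or longer electrical power disturbances are a concern for installations with continuous mission requirements.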
For example, the Defense Science Board Task Force reported that the commercial power grid is “brittle, increasingly centralized, capacity-strained, and largely unprotected from physical attack, with little stockpiling of critical hardware.” Similarly, according to the May 2007 Infrastructure Resiliency Guide for DOD’s Defense Critical Infrastructure Program, “the electric power network is a complex system of interconnected components that can fail and cause massive service disruptions.” Factors that contribute to the grid’s vulnerability include (1) increasing national demand for electricity; (2) an aging electrical power infrastructure; (3) increased reliance on automated control systems that are susceptible to cyberattacks; (4) the attractiveness of electrical power infrastructure as targets for physical or terrorist attacks; (5) long lead times (of several months to several years) for replacing high-voltage transformers—which cost several million dollars and are manufactured only in foreign countries—if attacked or destroyed; and (6) more frequent interruptions in fuel supplies to electricity-generating plants. The National Science and Technology Council’s Committee on Homeland and National Security also established a task force in January 2009 to identify research and development needs for electric grid vulnerabilities and to coordinate with other federal agencies to address those needs. In addition, government and industry efforts are under way to examine cybersecurity threats, develop potential “Smart Grid” solutions to address some of the grid’s vulnerabilities, and develop and enforce electricity reliability standards for the industry. DOD assets are vulnerable to electrical power disruptions in various ways.
For example, according to the DCIP Infrastructure Resiliency Guide, vulnerabilities may involve the co-location of both primary and secondary electrical power equipment, single points of failure in an electrical power network, lack of security access controls to critical electrical power equipment, electrical power lines sharing rights-of-way with other utilities, and insufficient backup sources of electrical power generation. To address such vulnerabilities, the guide suggests that owners or operators of DOD assets consider diversifying the locations of primary and secondary electrical power equipment, establishing independent transmission paths for commercial and backup electrical power, increasing security and monitoring access to critical electrical power equipment, establishing mitigation options based on potential loss of rights-of-way, and developing additional backup sources of electrical power. For more detailed information regarding typical electrical power vulnerabilities that could affect DOD assets and potential measures to address them, see appendix II. DOD identifies the vulnerabilities and manages the risks of its most critical assets to electrical power disruptions primarily through DCIP. On October 14, 2008, DOD designated 34 assets through DCIP as its most critical assets—assets of such extraordinary importance to DOD operations that according to DOD, their incapacitation or destruction would have a very serious, debilitating effect on the ability of the department to fulfill its missions. While most (29 of 34) of these critical assets—which may be located in the United States, U.S. territories, or foreign countries—are owned by DOD, 5 are owned by other entities, including both domestic and foreign commercial and other governmental entities. 
To ensure the availability of these and other networked assets critical to DOD missions, DCIP uses a risk management model that helps decision makers (1) identify the department’s critical assets based on the criticality of their missions; (2) conduct “threat and hazard assessments;” (3) conduct “vulnerability assessments” (that include detailed reviews of electrical power vulnerabilities); (4) conduct “risk assessments” to determine the consequences of the assets’ loss, evaluate the importance and urgency of proposed actions, and develop alternate courses of action; (5) reach “risk management decisions” to accept risks or reduce risks to acceptable levels; and (6) formulate “risk responses” to implement the risk management decisions. Key stakeholders involved in these DCIP processes include ASD(HD&ASA), which serves as the principal civilian advisor to the Secretary of Defense on the identification, prioritization, and protection of defense critical infrastructure; the Chairman of the Joint Chiefs of Staff, who serves as DOD’s principal military advisor for the program; and the combatant commands, the military services, and other DOD agencies and organizations, which may serve as asset owners or mission owners for specific critical assets. In addition, as the DISLA for the DCIP Public Works Defense Sector—which includes both DOD-owned and non-DOD assets used to support, generate, produce, or transport electrical power for and to DOD users—the U.S. Army Corps of Engineers is responsible for identifying asset interdependencies in its sector, including those related to electrical power, as appropriate. Figure 3 illustrates the key elements of the DCIP risk management model. 
In addition to using DCIP, DOD also identifies vulnerabilities and manages the risks of its most critical assets, including those related to electrical power, through other DOD mission assurance programs or activities, including those related to force protection; antiterrorism; information assurance; continuity of operations; chemical, biological, radiological, nuclear, and high-explosive defense; readiness; and installation preparedness. These programs and activities are intended to ensure that required capabilities and supporting infrastructures are available to DOD to carry out the National Military Strategy. DOD has established several complementary programs that help protect critical assets, including those listed in table 1. In addition, the military departments have developed service-level critical infrastructure protection programs, which they coordinate with DCIP.

Other federal agencies and industry organizations are to collaborate with DOD and play significant roles in protecting critical electrical power infrastructure within the framework of Homeland Security Presidential Directive 7. This directive, issued in December 2003, requires all federal departments and agencies to identify, prioritize, and coordinate the protection of critical infrastructure and key resources from terrorist attacks. These entities and their roles are summarized below.

Department of Homeland Security. DHS is the principal federal entity responsible for leading, integrating, and coordinating the overall national effort to protect the nation’s critical infrastructure and key resources. DHS led the development of the National Infrastructure Protection Plan, which provides a framework for managing risks to U.S. critical infrastructure and outlines the roles and responsibilities of DHS and other security partners—including other federal agencies; state, territorial, local, and tribal governments; and private companies. 
DHS is responsible for leading and coordinating a national effort to enhance protection across 18 critical infrastructure and key resource sectors, and a “sector-specific agency” has lead responsibility for coordinating the protection of each of the sectors.

Department of Energy. DOE serves as the sector-specific agency for the Energy Sector, which includes critical infrastructure and key resources related to electricity. DOE is responsible for developing an Energy Sector Specific Plan, in close collaboration with other National Infrastructure Protection Plan stakeholders, that applies the plan’s risk management model to critical infrastructure and key resources within that sector. Within DOE, the Office of Electricity Delivery and Energy Reliability seeks to lead national efforts to modernize the electrical grid, enhance the security and reliability of energy infrastructure, and facilitate recovery from disruptions to energy supply. When requested, DOE and its national laboratories can provide energy-related expertise and assistance to DOD. According to DOE officials, DOE and several DOD combatant commands, including U.S. European Command and U.S. Africa Command, are considering utilizing DOE representatives as energy attachés to those commands. The DOE representatives can provide energy-related expertise to their respective commands, particularly with respect to the commands’ energy-related planning activities and the security and reliability of the commands’ energy infrastructure.

Federal Energy Regulatory Commission and the North American Electric Reliability Corporation. The Energy Policy Act of 2005 provided the Federal Energy Regulatory Commission and its subsequently appointed Electric Reliability Organization—the North American Electric Reliability Corporation—new responsibilities for helping protect and improve the reliability and security of the U.S. 
bulk power system through the establishment, approval, and enforcement of mandatory electrical reliability standards. Both of these organizations also participate in safeguarding the nation’s critical infrastructures and key resources, and they have interacted with DOD regarding electrical power vulnerabilities. Similarly, the North American Electric Reliability Corporation has collaborated with DOD and military service officials through the federal Task Force on Electric Grid Vulnerability, which is co-chaired by DOD, to identify and address electrical power vulnerabilities.

The Electrical Power Industry. Electrical power industry representatives also contribute to the assurance of electrical power supplies through industry associations—such as the Edison Electric Institute, the American Public Power Association, and the National Rural Electric Cooperative Association—and through local electrical power providers to DOD installations or assets. Electrical power industry associations, for example, collaborate with the federal government to help secure the U.S. electrical power grid through coordinating mechanisms in the National Infrastructure Protection Plan. In early 2009, the Edison Electric Institute established the Energy Security Partnership Group, which includes officials from DOD installations and focuses on improving communications between DOD and its utilities and on identifying and removing barriers to the development of comprehensive energy security programs at DOD installations.

DOD’s most critical assets and the missions they support are vulnerable to disruptions in electrical power supplies because of the extent of their reliance on electricity, particularly from the commercial electrical power grid. According to our survey of DOD’s most critical assets, all of these assets require electrical power continuously in order to function and support their mission(s). 
Furthermore, the survey results indicate that all of the most critical assets depend on other supporting infrastructure—such as water; natural gas; and heating, ventilation, and air conditioning—that in turn also relies on electricity to function. As a result, without appropriate backup electrical power supplies or risk management measures, these critical assets may be unable to function fully and support their mission(s) in the event of an electrical power disruption. According to our survey, at least 24 of the 34 most critical assets experienced some electrical power disruptions—lasting up to 7 days—during the 3-year period from January 2006 through December 2008, and the missions supported by 3 of those critical assets were adversely affected by electrical power disruptions. In addition, based on our survey, 31 of these 34 assets rely primarily on commercial electrical power grids for their electricity supplies. The U.S. commercial electrical power grids have become increasingly fragile and vulnerable to prolonged outages because of such factors as (1) increased user demand, (2) fewer spare parts for key electrical power equipment, (3) increased risks of deliberate physical or cyberattacks on electrical power infrastructure by terrorists, and (4) more frequent interruptions in fuel supplies to electricity-generating plants. Based on our survey, vulnerability assessments of 6 of the most critical assets identified vulnerabilities associated with the reliability of the electrical power grids of their commercial electricity providers or DOD installations. Furthermore, the owners of 8 of these critical assets attributed some of their electrical power disruptions to their commercial electrical power providers.

DOD is identifying key vulnerabilities—including those related to electrical power—of its most critical assets through DCIP vulnerability assessments, but as of June 2009, the department had conducted such assessments on only 14 of its 34 most critical assets. 
As part of the DCIP risk management process, DCIP vulnerability assessments are intended to systematically examine the characteristics of an installation, system, asset, application, or its dependencies that could cause it to suffer a degradation or loss—that is, incapacity to perform its designated function—as a result of having been subjected to a certain level of threat or hazard. These vulnerability assessments—most of which the Defense Threat Reduction Agency has been conducting for DOD—include specific reviews of the critical assets’ supporting electrical power networks “to ensure that the distribution network at a given location and supporting offsite [electrical power] system has the capacity, redundancy, path diversity, security, survivability, and reliability to properly support a given mission.” DOD Instruction 3020.45 requires DOD to conduct DCIP vulnerability assessments on all of its most critical assets at least once every 3 years. However, while DOD has conducted DCIP assessments on some of its most critical assets since March 2007, ASD(HD&ASA) and Joint Staff officials indicated that the department could not schedule or conduct these assessments systematically until its most critical assets were formally identified in October 2008. As a result, as of June 2009, DOD had conducted DCIP vulnerability assessments on 14 of the 34 most critical assets; had scheduled additional assessments for 13 other most critical assets from July 2009 through December 2010; and had not yet scheduled assessments for the remaining 7 most critical assets. According to ASD(HD&ASA) and Joint Staff officials, DCIP vulnerability assessments will be conducted on all the most critical assets by October 2011, as required by DOD Instruction 3020.45. Nevertheless, until DOD completes these DCIP vulnerability assessments, the department will not have complete information about electrical power vulnerabilities for all the most critical assets. 
DOD has not yet conducted or scheduled DCIP vulnerability assessments, including assessments of electrical power vulnerabilities, on any of its non-DOD-owned most critical assets—both those located in the United States and those in foreign countries—and has not yet developed guidance addressing the unique challenges of conducting the assessments on such assets. While the majority of the most critical assets—which may be located in the United States, U.S. territories, or foreign countries—are owned by DOD, 5 of the 34 are not owned by DOD. Instead, these critical assets are owned by U.S. or foreign commercial or governmental entities. DOD Instruction 3020.45 requires DOD to conduct DCIP vulnerability assessments at least once every 3 years on all of its most critical assets, regardless of the assets’ ownership or location. However, DOD has not yet conducted or even scheduled DCIP vulnerability assessments for any of the non-DOD-owned most critical assets located in the United States or abroad. Furthermore, while DOD has issued extensive DCIP guidance applicable to all defense critical infrastructure (including non-DOD-owned critical infrastructure), as discussed above, DOD has not yet developed a systematic approach or guidelines addressing the unique challenges of conducting the assessments on such non-DOD-owned critical assets. ASD(HD&ASA) and Joint Staff officials cited security concerns, political sensitivities, and lack of DOD authority over non-DOD-owned assets as key challenges in conducting the DCIP vulnerability assessments on the non-DOD-owned most critical assets in foreign countries. For example, according to these officials, notifying a U.S. or foreign commercial entity, or a foreign government, about its asset’s designation as one of DOD’s most critical assets could compromise DCIP security guidelines or U.S. national security. 
Similarly, for political reasons, foreign companies or governments may not want to have their assets identified as supporting U.S. or DOD military missions. ASD(HD&ASA) and Joint Staff officials recognize the need to develop an approach and guidelines for conducting DCIP vulnerability assessments on the five non-DOD-owned most critical assets, particularly those located abroad. According to these officials, DOD has begun to coordinate with the Department of State’s Office of the Coordinator for Counterterrorism to help address some of the security concerns and political sensitivities associated with conducting such assessments. We have previously reported on DOD efforts to coordinate with the Department of State on similar sensitive matters involving foreign governments’ support for DOD assets abroad. For example, through the Department of State, the United States and host-nation governments have successfully established various types of agreements—including general agreements, intelligence exchange agreements, written agreements, and informal agreements—that have been used to help protect U.S. forces and facilities abroad. Nothing prohibits DOD from developing a similar approach for conducting DCIP vulnerability assessments on non-DOD-owned most critical assets in foreign countries. Until DOD completes the vulnerability assessments on such assets, which it is also required to complete by October 2011, DOD officials will not know the extent of those assets’ vulnerabilities to electrical power disruptions.

The U.S. Army Corps of Engineers (Corps)—which serves as DCIP’s DISLA for Public Works (including electricity)—has not completed preliminary technical analyses of DOD installation infrastructure. Such analyses are intended to identify public works infrastructure networks, assets, points of service, and inter- and intradependencies that support the critical assets on DOD installations. 
ASD(HD&ASA) requested these analyses for all the most critical assets from the Corps in order to support the teams conducting DCIP vulnerability assessments on those assets. Preliminary desktop analyses are intended to help brief DCIP vulnerability assessment teams on the most critical assets’ supporting public works infrastructure—including electrical power systems—before those teams conduct the vulnerability assessments on the assets in the field. According to ASD(HD&ASA), the Corps has completed these analyses for public works infrastructure located outside of the DOD installations with the most critical assets. However, as of July 2009, the Corps had not yet conducted these analyses for public works infrastructure located within DOD installations for any of the most critical assets. According to a Corps official, the Corps has been unable to begin these analyses because it has not received infrastructure-related information that it requires from the military services. According to the official, the Corps has been requesting this infrastructure-related information informally from the military services for several months and recently augmented its requests with formal written requests to the services. However, as of July 2009, the U.S. Navy was the only service that had begun to gather the requested information. In written correspondence with us, the remaining two military departments indicated that limited funds and personnel will affect their ability to respond to the Corps’ request for the infrastructure-related information, which one of the services considers to be an unfunded mandate. Without this information, however, the Corps will be unable to conduct its preliminary technical analyses of the public works infrastructure, including electrical power systems, that supports the most critical assets. 
As a result, the teams conducting DCIP vulnerability assessments will be unable to consider crucial background information about the most critical assets’ public works infrastructure—including networks, assets, points of service, and inter- and intradependencies related to electrical power systems—before the teams conduct the DCIP vulnerability assessments in the field.

DOD does not systematically coordinate DCIP vulnerability assessment policy, guidelines, or processes with those of other, related DOD mission assurance programs that also examine electrical power vulnerabilities of DOD critical assets. DOD Directive 3020.40 calls for DCIP to complement other DOD mission assurance programs and efforts, including force protection; antiterrorism; information assurance; continuity of operations; chemical, biological, radiological, nuclear, and high-explosive defense; readiness; and installation preparedness. Vulnerability assessments from these other mission assurance programs and efforts also examine electrical power vulnerabilities of DOD critical assets. For example, as part of DOD’s antiterrorism and force protection efforts, the Defense Threat Reduction Agency conducts Joint Staff Integrated Vulnerability Assessments at selected DOD installations worldwide, including some that host critical assets. These assessments identify vulnerabilities related to terrorism and force protection at the selected installations, including those related to electrical power systems, and provide options to assist installation commanders in mitigating or overcoming the vulnerabilities. Similarly, as part of their critical asset protection processes, the military services also conduct vulnerability assessments related to mission assurance at installations that may also host critical assets. 
However, DOD Directive 3020.40 does not provide specific guidelines or requirements for systematically coordinating policy, guidelines, and processes, or the results of DCIP vulnerability assessments on the critical assets, with those of other DOD mission assurance programs. ASD(HD&ASA) and Joint Staff officials acknowledge the benefits of coordinating and leveraging the results of assessments from DCIP and other DOD mission assurance programs—particularly those related to antiterrorism/force protection, continuity of operations, and information assurance—and have already taken some steps to further such coordination. For example, as of June 2009, the Defense Threat Reduction Agency had conducted DCIP vulnerability assessments on 12 of the most critical assets in conjunction with Joint Staff Integrated Vulnerability Assessments being conducted on installations that host those assets, while the military services had conducted DCIP vulnerability assessments on 2 of the most critical assets. Also, according to ASD(HD&ASA) and Joint Staff officials, the results of other DOD mission assurance vulnerability assessments already conducted on critical assets are made available for DCIP vulnerability assessment teams to consider before they conduct the DCIP vulnerability assessments. In addition, the Joint Staff and the Defense Threat Reduction Agency have begun to develop a formal agreement to align more closely the standards and benchmarks used to conduct vulnerability assessments for related DOD mission assurance programs, particularly DCIP, antiterrorism/force protection, continuity of operations, and information assurance. However, until DOD finalizes the guidelines being developed in this agreement, it may be unable to systematically leverage the results of related vulnerability assessments that may be conducted on the same critical assets by multiple sources and thus to enhance its ability to identify those assets’ electrical power vulnerabilities. 
DCIP vulnerability assessment teams do not consistently consider the vulnerabilities of the critical assets to longer-term electrical power disruptions on a mission-specific basis, because such consideration is not explicitly required by the DCIP vulnerability assessment benchmarks for electrical power. These benchmarks serve as detailed criteria by which DCIP vulnerability assessment teams assess whether the electrical power networks that support the critical assets—at the host installation and in the supporting off-site electrical power system—have the “capacity, redundancy, path diversity, security, survivability, and reliability to properly support a given mission.” Although the benchmarks consider how long electrical power backup systems can sustain continuity of critical operations, how an unacceptable loss of power is defined, and whether the asset owner maintains a contingency plan to ensure availability of the electrical power network to accomplish an asset’s mission, they do not explicitly consider vulnerabilities related to longer-term electrical power disruptions. As a result, DOD’s DCIP vulnerability assessments may focus only on vulnerabilities associated with shorter-term electrical power disruptions. According to ASD(HD&ASA) officials, DCIP vulnerability assessment teams already consider longer-term electrical power disruptions indirectly, through questions in the benchmarks that ask about contingency plans and continuity of operations. However, we found that the DCIP vulnerability assessment reports that were available for 10 of the 34 most critical assets did not explicitly consider specific vulnerabilities or risk mitigation options associated with longer-term electrical power disruptions on a mission-specific basis. Consequently, such vulnerabilities or options may not be identified, and DOD may not make appropriate risk management decisions. 
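The benchmark gap can be illustrated with a minimal, purely hypothetical check that compares an asset's backup power endurance against disruption scenarios of different durations; a review that considers only short outages would never flag the longer-term scenarios. All names and durations below are invented for illustration:

```python
# Hypothetical sketch: flag disruption scenarios that outlast an asset's
# backup electrical power endurance. Durations are in hours and invented.

def power_gaps(backup_endurance_hours: float,
               disruption_scenarios: dict) -> list:
    """Return the names of scenarios whose expected outage exceeds
    the asset's backup electrical power endurance."""
    return [name for name, hours in disruption_scenarios.items()
            if hours > backup_endurance_hours]

scenarios = {
    "local equipment failure": 4,    # short-term outage
    "severe storm":            36,   # multi-day outage
    "regional grid outage":    168,  # week-long, echoing the survey's 7-day disruptions
}

# A generator with 24 hours of on-site fuel covers only the short-term case.
print(power_gaps(24, scenarios))  # → ['severe storm', 'regional grid outage']
```

A benchmark that asks only whether backup power exists is equivalent to checking the first scenario; making disruption duration an explicit input is what surfaces the longer-term vulnerabilities.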
Nevertheless, several DOD sources have recognized the need for the department to more explicitly consider the effects of longer-term electrical power disruptions on DOD’s critical assets. For example, the Department of Defense Energy Manager’s Handbook calls for DOD components to develop strategies for both short- and long-term energy disruptions, including electricity disruptions. Also, in its February 2008 report, the Defense Science Board Task Force on DOD Energy Strategy—which concluded that DOD’s critical national security and homeland defense missions were at an unacceptably high risk of failure from extended power disruptions—recommended that DOD consider the duration of electrical power disruptions, among other factors, in its risk management approach to reducing risks to critical missions from the loss of commercial electrical power. An update by the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics on DOD’s Energy Security Task Force also proposed a subgoal of “reduc[ing] the risk of loss of critical functions due to extended commercial grid power disruptions at fixed installations.” Without explicit guidance in the DCIP vulnerability assessment benchmarks for considering longer-term electrical power disruptions, future DCIP vulnerability assessments on other critical assets may be unable to identify vulnerabilities associated specifically with such electrical power disruptions.

DOD has taken some steps to assure the availability of its electrical power supplies by identifying and addressing the vulnerabilities and risks of its critical assets to electrical power disruptions. For example, from August 2005 through October 2008, DOD issued Defense Critical Infrastructure Program guidance for identifying critical assets, assessing their vulnerabilities, and making risk management decisions about those vulnerabilities. 
Also, as previously discussed, DOD has conducted DCIP vulnerability assessments on 14 of the 34 most critical assets and has scheduled assessments for 13 of the remaining assets, but it has not yet scheduled assessments for the 5 non-DOD-owned most critical assets. The DCIP vulnerability assessments conducted so far have identified specific electrical power–related vulnerabilities affecting some of the critical assets, including vulnerabilities associated with the reliability of the assets’ supporting commercial electrical power grid, the availability of backup electrical power supplies, and single points of failure in electrical power systems supporting the assets. Addressing the risks associated with these vulnerabilities—by remediating, mitigating, or accepting those risks—can help DOD assure the availability of electrical power to the critical assets. For example, at all 6 of the most critical assets we visited, the DOD asset owners have installed diesel-based electrical power generators as backup sources of electricity during electrical power disruptions. Other (non-DCIP) DOD mission assurance programs also have the potential to help DOD assure the availability of electrical power supplies to its most critical assets. For example, we found that Joint Staff Integrated Vulnerability Assessments and similar vulnerability assessments from the military services, which have been conducted for antiterrorism and force protection purposes on some of the installations with critical assets, have also identified vulnerabilities related to electrical power. Furthermore, DOD has taken steps to coordinate with other federal agencies, including DOE and DHS, as well as with electrical power industry organizations, and these steps may help to assure the supply of electricity to its critical assets. For example, to represent its concerns and interests on electricity, DOD participates in the Energy Government Coordinating Council. 
The council provides DOD and other federal agencies with a forum for sharing their concerns, comments, and questions on energy-related matters—including critical infrastructure protection—with DOE, which chairs the group. In another effort involving DOE, several DOD combatant commands—including U.S. European Command and U.S. Africa Command—have recently agreed to accept DOE departmental representatives to serve as energy attachés to the commands. These DOE representatives will provide energy-related expertise to their respective commands, particularly with respect to the commands’ energy-related planning activities and the security and reliability of the commands’ energy infrastructure. DOD has also partnered with various federal agencies and industry organizations to further increase the assurance of electrical power. For example, DOD serves as co-chair of the federal Task Force on Electric Grid Vulnerability of the National Science and Technology Council’s Committee on Homeland and National Security, which was established in January 2009 to identify research and development needs for electrical grid vulnerabilities and to coordinate with other federal agencies to address those needs. In addition, DOD officials are collaborating with the Energy Security Partnership Group, a working group established by the Edison Electric Institute in early 2009. The group focuses on improving communications between DOD and its utilities and on identifying and removing barriers to the development of comprehensive energy security programs at DOD installations. 
Also, in July 2009, DOD participated in Secure Grid 2009, an interagency electric grid tabletop exercise cosponsored by DHS, DOE, and DOD, in which officials from DOD, DOE, DHS, the Federal Energy Regulatory Commission, the North American Electric Reliability Corporation, and the Edison Electric Institute, among others, jointly developed recommendations and potential responses to two scenarios involving hypothetical physical and cyber-related attacks on U.S. electrical power grids.

Our survey results confirm that some steps are being taken at various levels within DOD to improve the assurance of electrical power supplies to its most critical assets. For example, according to the survey and reports we reviewed, DOD conducted vulnerability and risk assessments involving electrical power on 24 of the most critical assets through a variety of DOD mission assurance reviews, including DCIP assessments, Joint Staff Integrated Vulnerability Assessments, combatant command assessments, DOD agency assessments, and local installation assessments. The survey results also indicate that secondary sources of electricity—such as uninterruptible power supply systems and diesel generators—provide some backup electrical power capabilities to almost all of the critical assets. In addition, according to the survey, asset owners and host installations for some of the critical assets whose vulnerabilities have been assessed have taken specific measures to address those vulnerabilities, such as eliminating single points of failure, developing electrical power disruption contingency plans, installing emergency electrical power generators, and increasing physical security measures around electrical power facilities. 
DOD has not yet established a mechanism for systematically tracking the implementation of future DCIP risk management decisions, which are intended to address vulnerabilities (including those involving electrical power) that have been identified for the most critical assets. Such tracking could help DOD ensure that DCIP stakeholders are developing and implementing measures to address the most critical assets’ identified vulnerabilities to electrical power disruptions and thereby help assure the availability of electrical power to those assets. As previously discussed, DCIP’s risk management program involves the identification of DOD’s most critical assets; the assessment of those assets’ vulnerabilities through vulnerability assessments; and subsequent risk assessments, risk management decisions, and risk responses involving relevant DCIP stakeholders. DCIP guidance contained in DOD Instruction 3020.45 requires stakeholders to coordinate to make risk management decisions regarding whether and how to address identified vulnerabilities—through remediation or mitigation—or accept the risk posed by not addressing those vulnerabilities. Under DCIP, ASD(HD&ASA) has overall responsibility for overseeing the implementation of actions for the remediation, mitigation, or acceptance of risks to DOD critical assets, while owners of the critical assets are required to monitor the status and progress of the implementation of DCIP risk management decisions for their respective assets. ASD(HD&ASA) officials indicated to us that they do not systematically track the results of DCIP vulnerability assessments, asserting that they consider it more important to track the implementation of the subsequent DCIP risk management decisions and responses to be made concerning the vulnerabilities that are identified. 
The officials told us that these risk management decisions would reflect the consensus reached by relevant DCIP stakeholders (such as asset owners, mission owners, and defense infrastructure sector lead agents) on either remediating, mitigating, or accepting specific vulnerabilities—actions whose implementation may require the stakeholders to provide funding or other resources. However, the officials have not yet tracked any such decisions or responses, because none have yet been made in response to the 14 DCIP vulnerability assessments conducted so far. According to these officials, because of the number of stakeholders and potential resources involved, risk management decisions can take several months to coordinate following a DCIP vulnerability assessment. These officials said that they plan to monitor the implementation of DCIP risk management decisions and responses, but they have not yet developed a mechanism by which to do so, such as a schedule for tracking the implementation status of those decisions and responses. Without systematic tracking of risk management decisions and responses, DOD may be unable to comprehensively determine whether asset owners and host installations are taking the steps agreed to by relevant DCIP stakeholders to address the vulnerabilities of the critical assets, including vulnerabilities related to electrical power disruptions.

DCIP guidance recognizes the importance of collaboration by encouraging coordination between DOD facilities with critical assets and their respective public utilities—including electricity providers—in order to remediate risks involving those utilities. 
According to this guidance, a DOD installation “should establish good communications with public service providers about service requirements,” and “that relationship does not have to wait for the identification of a vulnerability,” as “the remediation of risks posed by commercial dependency may be more complicated than that of DOD-owned infrastructure.” Similarly, in recognition of the important role that local utility providers play in supporting DOD installations with critical assets, the U.S. Army Corps of Engineers is requesting funds for a pilot program that would involve extensive collaboration with the local electricity providers at selected U.S. Army installations with critical assets. The pilot program is intended to analyze the reliability of community infrastructure in meeting the current and anticipated needs of the installations and their critical missions. As previously discussed, our survey indicated that 31 of DOD’s 34 most critical assets identified the commercial electrical power grid as their primary source of electrical power. Yet despite this overwhelming reliance, the host installations or owners of only 7 of the surveyed critical assets reported coordinating with their local electricity providers to either identify or address their assets’ vulnerabilities to electrical power disruptions. Furthermore, according to the survey and our analysis, none of the host installations or owners of the critical assets have developed any formal agreements with their local electricity providers to help manage the risks and vulnerabilities of those assets to electrical power disruptions. Survey respondents cited various reasons for not coordinating with local electricity providers, including the absence of a requirement for such coordination and the lack of a vulnerability assessment on the asset that would indicate the need to initiate such coordination. 
Coordinating with local electricity providers could usefully enhance DOD’s efforts to identify or address the vulnerabilities of critical assets to electrical power disruptions and thereby better assure the availability of electrical power to those assets. However, few host installations or owners of critical assets have coordinated with their local electrical power providers to help identify or address the assets’ vulnerabilities to electrical power disruptions. According to an electrical power industry association representative, local electricity providers may have technical expertise or be pursuing activities that could help DOD installations develop risk remediation or mitigation measures to address electrical power vulnerabilities affecting a critical asset. According to this representative, such coordination could, for example, lead to agreements in which local electricity providers would prioritize the restoration of electrical power to a DOD installation with a critical asset following an electrical power disruption. In addition, DOD installations could usefully coordinate with their respective electricity providers concerning an industry initiative called the Spare Transformer Equipment Program, in which electricity providers agree to share spare electrical power transformers—which are often foreign made and expensive and can take several years to order—in the event of an emergency. Without more extensive coordination between DOD DCIP stakeholders and local electricity providers, DOD may be limiting the risk remediation or mitigation options that it could consider for addressing the vulnerabilities of its critical assets to electrical power disruptions.

DOD relies on commercial electrical power grids for secure, uninterrupted electrical power supplies to support its most critical assets—those whose incapacitation or destruction would have a very serious, debilitating effect on the department’s ability to fulfill its missions. 
However, according to the Defense Science Board Task Force on DOD Energy Strategy, the commercial electrical power grids have become increasingly fragile and vulnerable to extended power disruptions that could severely impact DOD’s most critical assets, their supporting infrastructure, and the missions they support; indeed, such disruptions have already occurred. DOD’s most critical assets are vulnerable to disruptions in electrical power supplies, but DOD would benefit from additional information to determine the full extent of the risks and vulnerabilities these assets face. By completing DCIP vulnerability assessments on all of its most critical assets, DOD would have more information to determine the full extent of these assets’ risks and vulnerabilities to such disruptions. Similarly, with additional guidelines, an implementation plan, and a schedule for conducting DCIP vulnerability assessments on all non-DOD-owned most critical assets, particularly those located abroad, DOD could more accurately determine the full extent of those assets’ risks and vulnerabilities to such disruptions. Further, until the U.S. Army Corps of Engineers is able to complete the preliminary technical analyses of public works (including electrical power) infrastructure in support of the DCIP vulnerability assessments of the critical assets, DOD may be unable to identify all electrical power vulnerabilities to its most critical assets. Additionally, once DOD finalizes guidelines specifying how DCIP assessment criteria and processes should be coordinated with those of other DOD mission assurance programs, DOD could more systematically determine whether these programs may also be identifying electrical power vulnerabilities and risk management options for its most critical assets. 
Also, explicit guidelines for assessing critical assets’ vulnerabilities to long-term electrical power disruptions would further enhance DOD’s ability to manage the risks associated with such disruptions. While DOD has taken some steps toward assuring the availability of its electrical power supplies to its critical assets, additional DCIP measures could further enhance efforts to address these assets’ risks and vulnerabilities to electrical power disruptions. DOD could also improve its ability to leverage related mission assurance assessments and respond to future disruptions by developing a mechanism to systematically track the results of future risk management decisions and responses intended to address risks and vulnerabilities identified for the most critical assets. Additionally, DOD could expand its options for addressing disruptions in the commercial electrical power grid by encouraging greater collaboration between the owners or host installations of the most critical assets and their respective local electricity providers. With more comprehensive knowledge of DOD’s most critical assets’ risks and vulnerabilities to electrical power disruptions and more effective coordination with electricity providers, DOD can better avoid compromising crucial DOD-wide missions during electrical power disruptions. This additional information may also improve DOD’s ability to effectively prioritize funding needed to address identified risks and vulnerabilities of its most critical assets to electrical power disruptions. 
To ensure that DOD has sufficient information to determine the full extent of the risks and vulnerabilities to electrical power disruptions of its most critical assets, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs, in collaboration with the Joint Staff’s Directorate for Antiterrorism and Homeland Defense, combatant commands, military services, and other Defense Critical Infrastructure Program stakeholders, as appropriate, to take the following five actions:

- Complete Defense Critical Infrastructure Program vulnerability assessments, as required by DOD Instruction 3020.45, on all of DOD’s most critical assets by October 2011.

- Develop additional guidelines, an implementation plan, and a schedule for conducting Defense Critical Infrastructure Program vulnerability assessments on all non-DOD-owned most critical assets located in the United States and abroad in conjunction with other federal agencies, as appropriate, that have a capability to implement the plan.

- Establish a time frame for the military services to provide the infrastructure data required for the Public Works Defense Infrastructure Sector Lead Agent—the U.S. Army Corps of Engineers—to complete its preliminary technical analysis of public works (including electrical system) infrastructure at DOD installations that support DOD’s most critical assets.

- Finalize guidelines currently being developed to coordinate Defense Critical Infrastructure Program assessment criteria and processes more systematically with those of other DOD mission assurance programs.

- Develop explicit Defense Critical Infrastructure Program guidelines for assessing the critical assets’ vulnerabilities to long-term electrical power disruptions. 
To enhance DOD’s efforts to mitigate these assets’ risks and vulnerabilities to electrical power disruptions and leverage previous assessments and multiple asset owners’ information, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs, in collaboration with the Joint Staff’s Directorate for Antiterrorism and Homeland Defense, combatant commands, military services, and other Defense Critical Infrastructure Program stakeholders, as appropriate, to take the following two actions:

- Develop a mechanism to systematically track the implementation of future Defense Critical Infrastructure Program risk management decisions and responses intended to address electrical power–related risks and vulnerabilities to DOD’s most critical assets.

- Ensure for DOD-owned most critical assets, and facilitate for non-DOD-owned most critical assets, that asset owners or host installations of the most critical assets, where appropriate, reach out to local electricity providers in an effort to coordinate and help remediate or mitigate risks and vulnerabilities to electrical power disruptions that may be identified for DOD’s most critical assets.

In written comments on a draft of this report, DOD concurred with all of our recommendations and provided technical comments, which we incorporated in the report where appropriate. DOD’s comments are reprinted in appendix VI. Due to the sensitivity of DOD’s most critical assets and its concerns about the classification and dissemination of the initial draft report, as well as the focus of the recommendations on DOD’s program, we did not request agency comments on the full draft report from DOE, DHS, and the Federal Energy Regulatory Commission. However, we did seek technical comments from these entities on sections of the initial draft report that pertained to their roles and responsibilities, which we also incorporated in the report where appropriate. 
DOD concurred with our five recommendations to ensure that DOD has sufficient information to determine the full extent of the risks and vulnerabilities to electrical power disruptions of its most critical assets. Based on DOD’s comments, we modified our original recommendation concerning the establishment of a time frame for the military services to provide the infrastructure data required for preliminary technical analysis of public works (including electrical system) infrastructure at DOD installations that support DOD’s most critical assets.

First, DOD concurred with our recommendation that the department complete DCIP vulnerability assessments on all of its most critical assets by October 2011, as required by DOD Instruction 3020.45. DOD noted that the Joint Staff, in coordination with ASD(HD&ASA), has already begun to conduct these assessments using an all-hazards and mission-assurance approach. As we reported, as of June 2009, DOD had conducted DCIP assessments on 14 of the 34 most critical assets.

Second, DOD concurred with our recommendation that the department develop additional guidelines, an implementation plan, and a schedule for conducting vulnerability assessments on all non-DOD-owned most critical assets located in the United States and abroad in conjunction with other federal agencies, as appropriate, that have a capability to implement the plan. DOD acknowledged that conducting vulnerability assessments on such assets, particularly those located abroad, presents significant challenges, as they require the agreement of the assets’ non-DOD owners. According to the department, the ASD(HD&ASA)/DCIP Office is coordinating with appropriate offices to examine the possibility of conducting “remote assessments” on these assets. We recognize the challenges faced by DOD in identifying the electrical power vulnerabilities of non-DOD-owned critical assets and support DOD’s efforts to coordinate with appropriate federal agencies in this area. 
We previously have reported on DOD’s efforts to coordinate with the Department of State on similar sensitive matters involving foreign governments’ support for DOD assets abroad, noting that such efforts have resulted in various types of agreements to help protect U.S. forces and facilities abroad. We also note that if DOD decides to conduct “remote” DCIP vulnerability assessments on the non-DOD-owned most critical assets, such assessments should rely on the same benchmarks used for conducting DCIP vulnerability assessments on DOD-owned most critical assets.

Third, DOD concurred with our recommendation that the department establish a time frame for the military services to provide the infrastructure data required for the Public Works Defense Infrastructure Sector Lead Agent—the U.S. Army Corps of Engineers—to complete its preliminary technical analysis of public works infrastructure at DOD installations that support DOD’s most critical assets. Based on comments from DOD that the Corps has already completed the technical analysis for public works infrastructure located outside of the installations, but is still waiting for the data required to complete the analysis on infrastructure located within the installations, we modified this recommendation to indicate that these data are required for completing—rather than conducting—the preliminary technical analysis. DOD acknowledged that such information is necessary for the proper characterization of its critical assets from a public works perspective. We believe that the establishment of specific time frames for the military services to provide this important information is necessary because, as of July 2009, only one of the military services—the U.S. Navy—had begun to gather the requested information. 
Fourth, DOD concurred with our recommendation that the department finalize guidelines currently being developed to coordinate DCIP assessment criteria and processes more systematically with those of other DOD mission assurance programs. While acknowledging the synergistic effect of complementary risk management program activities and security-related functions, DOD noted that such programs are subject to different directives and appropriations, and that critical infrastructure protection at the installation level is not yet mature. According to DOD, the Joint Staff is now overseeing a “way ahead” process to better synchronize these efforts. We encourage the Joint Staff to complete this initiative and identify specific ways for coordinating DCIP assessment criteria and processes more systematically with those of DOD’s other mission assurance programs.

Fifth, DOD concurred with our recommendation that the department develop explicit DCIP guidelines for assessing the most critical assets’ risks and vulnerabilities to long-term electrical power disruptions. According to DOD, the ASD(HD&ASA)/DCIP Office will review current vulnerability assessment criteria and standards and work with the Joint Staff to include considerations of long-term electrical power disruptions.

DOD also concurred with our two recommendations to enhance DOD’s efforts to mitigate its most critical assets’ risks and vulnerabilities to electrical power disruptions and leverage previous assessments and multiple asset owners’ information.

First, DOD concurred with our recommendation that the department develop a mechanism to systematically track the implementation of future DCIP risk management decisions and responses intended to address electrical power–related risks and vulnerabilities to DOD’s most critical assets. 
According to DOD, the ASD(HD&ASA)/DCIP Office has developed draft DOD Manual 3020.45, Volume 5, Defense Critical Infrastructure Program (DCIP) Coordination Timeline, which is being coordinated within the department. DOD notes that the manual’s purpose is to provide uniform procedures and timelines for DCIP stakeholders—that is, ASD(HD&ASA), the Joint Staff, military departments, combatant commands, defense agencies, and DISLAs—to execute DCIP activities and responsibilities, including those related to risk management decisions and responses. We encourage DOD to finalize this draft manual and ensure that it provides explicit guidance on tracking the implementation of DCIP risk management decisions and responses resulting from DCIP vulnerability assessments of DOD’s most critical assets. DOD also notes that the DCIP Office is developing an automated Critical Asset Identification Process Collaboration Tool that will document and track the status of DCIP stakeholders’ progress in the DCIP risk management process.

Second, DOD also concurred with our recommendation that the department ensure for DOD-owned most critical assets, and facilitate for non-DOD-owned most critical assets, that asset owners or host installations of the most critical assets, where appropriate, reach out to local electricity providers in an effort to coordinate and help remediate or mitigate risks and vulnerabilities to electrical power disruptions that may be identified for DOD’s most critical assets. 
DOD’s comments cited existing guidance that, among other things, (1) encourages government and private-sector decision makers to work with electricity providers to identify remedies to potential single points of failure and (2) advises DOD facility managers to establish good communications with public service providers about service requirements, and to review service-level agreements, acquisition programs, contracts, and operational processes for opportunities to address and include stronger resiliency language and requirements for future remediation efforts. According to DOD, this guidance will be reinforced at DCIP forums for collaboration, such as meetings of the Defense Critical Infrastructure Integration Staff, Operational Advisory Board, and Defense Infrastructure Sector Council. We encourage DOD to reinforce such guidance concerning collaboration with local electricity providers directly with asset owners or host installations for each of the most critical assets, as appropriate, in order to help mitigate the risks and vulnerabilities to electrical power disruptions that may be identified for those assets.

We are sending copies of this report to other interested congressional parties; the Secretary of Defense; the Chairman, Joint Chiefs of Staff; the Secretaries of the U.S. Army, the U.S. Navy, and the U.S. Air Force; the Commandant of the U.S. Marine Corps; and the Director, Office of Management and Budget. This report also is available at no charge on GAO’s Web site at http://www.gao.gov.

If you or your staff have any questions concerning this report, please contact me at (202) 512-5431 or dagostinod@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. 
To conduct our review of the assurance of electrical power supplies to Department of Defense (DOD) critical assets, we administered three structured written surveys to the owners of, or those with program responsibility for, 100 percent of DOD’s 34 most critical assets, which DOD identified through the Defense Critical Infrastructure Program (DCIP) as of October 2008. We administered one survey to the military services and DOD agencies that own or have program responsibility for the assets through DCIP to obtain information about the (1) assets’ degree of reliance on electrical power; (2) assets’ primary and backup sources of electrical power supplies; (3) number and type of unplanned electrical power disruptions to the assets; (4) DCIP and non-DCIP assessments of the assets’ risks and vulnerabilities to electrical power disruptions from January 2006 through December 2008; and (5) measures recommended, implemented, or planned to address or manage such risks and vulnerabilities. We administered another survey to the Joint Staff to obtain information about the missions supported by the assets. Finally, we administered the third survey to the Office of the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs (ASD(HD&ASA)) regarding coordination efforts with relevant DOD and non-DOD stakeholders. (These surveys are reproduced in full in apps. III, IV, and V, respectively.)

We limited our surveys to the universe of DOD’s most critical assets because of concerns over the reliability of DOD’s larger list of about 675 Tier 1 Task Critical Assets, which support critical DOD missions at the departmental, combatant command, and military service levels. We also conducted six follow-up site visits to a nonprobability sample of critical assets to verify and validate the surveys’ results and evaluate in depth the issues identified in the surveys’ responses. 
We selected the sites for visits judgmentally based on the survey responses regarding issues addressed in this report. We initially selected a random sample from DOD’s universe of about 675 Tier 1 Task Critical Assets to survey for this review. However, based on discussions with DOD officials and our own analysis, we found significant data reliability and validity problems with DOD’s Tier 1 Task Critical Asset list. We found that the use of disparate sets of guidance, including draft and nonbinding guidance, resulted in the selection and submission of assets to the Tier 1 Task Critical Asset list based on inconsistent criteria, thus limiting the usefulness of the Tier 1 Task Critical Asset list to DOD decision makers in determining DOD’s most critical assets and prioritizing funding to address identified vulnerabilities. As a result, we determined that for methodological purposes, DOD’s current Tier 1 Task Critical Asset list did not represent a meaningful universe from which we should select our survey sample or to which we should project our survey results. Because the universe of critical assets did not represent an accurate, comprehensive list of DOD Tier 1 Task Critical Assets, we determined that this issue in and of itself warranted further analysis. Therefore, we issued a separate report, with recommendations, on issues relating specifically to the Tier 1 Task Critical Asset list to enable DOD to take timely actions in the fall of 2009 to update and improve its list of Defense Critical Assets and prioritize funding.

In conducting our work, we obtained relevant documentation and interviewed officials from the following DOD organizations:

Defense Information Systems Agency
Defense Infrastructure Sector Lead Agents
  Headquarters, U.S. Army Corps of Engineers, DCIP Public Works
  Defense Threat Reduction Agency, Combat Support Branch Chief
Director of Defense Research and Engineering
  Defense Science Board, Task Force on DOD Energy Strategy
Mission Assurance Division, Naval Surface Warfare Center at Dahlgren
Selected DOD critical assets at U.S. military installations

To become more familiar with efforts currently taking place to assure the nation’s electrical power grid, we met with officials from federal agencies, electrical power industry associations, and other private-sector entities to determine their roles and responsibilities, ongoing initiatives, and the extent of their coordination with DOD in assuring the reliability of the nation’s electrical power grid. We obtained relevant documentation and interviewed officials from the following organizations:

Department of Homeland Security (DHS)
  National Protection and Programs Directorate
    Office of Infrastructure Protection
      Infrastructure Information Collection Division
      Partnership and Outreach Division
      Protective Security Coordination Division
    Office of Cybersecurity and Communications
Department of Energy (DOE), Office of Electricity Delivery and Energy Reliability
Federal Energy Regulatory Commission, Office of Electric Reliability
North American Electric Reliability Corporation
CACI International, Inc.
Edison Electric Institute
Pareto Energy, Inc.
Talisman International, LLC

We did not request agency comments from DOE, DHS, and the Federal Energy Regulatory Commission on the full draft report, which at the time was classified as SECRET, because of (1) DOD’s concerns about the classification and dissemination of the report and (2) the focus of the recommendations on DOD’s program. We did seek technical comments from these entities on sections of the initial draft report that pertained to their roles and responsibilities, which we incorporated in the report where appropriate. We also shared sections of the initial draft report that discussed the 2008 Report of the Defense Science Board Task Force on DOD Energy Strategy, “More Fight—Less Fuel,” and the entities either agreed with or did not take issue with the conclusions of that report. 
To learn more about the assurance of electrical power supplies to DOD critical assets, we developed three electronic surveys covering DOD critical assets, their missions, and coordination efforts regarding the assets. We asked respondents about (1) missions supported by the assets; (2) assets’ degree of reliance on electrical power; (3) assets’ primary and backup sources of electrical power supplies; (4) number and type of unplanned electrical power disruptions to the assets; (5) DCIP and non-DCIP assessments of the assets’ risks and vulnerabilities to electrical power disruptions; and (6) measures recommended, implemented, or planned to address or manage such risks and vulnerabilities, including coordination efforts with relevant DOD and non-DOD stakeholders.

We conducted our surveys from May 2009 through August 2009, using self-administered electronic surveys. We sent a questionnaire on DOD critical assets to the owners and operators of DOD-owned critical assets. We sent a second questionnaire on DOD critical asset missions to the Joint Staff (J-34). We sent a third questionnaire on coordination efforts for DOD critical assets to the ASD(HD&ASA)/DCIP Office. We sent the questionnaires by SIPRNet in an attached Microsoft Word form that respondents could return electronically via SIPRNet after marking check boxes or entering responses up to the SECRET classification level into open answer boxes. We also made provisions for receiving completed questionnaires at the TOP SECRET classification level, if needed, via a GAO Joint Worldwide Intelligence Communications System account, which was established for us at the DOD Office of the Inspector General.

We sent the original three electronic questionnaires in May and June 2009. We sent out reminder e-mail messages at different times to all nonrespondents in order to encourage a higher response rate. In addition, we made several courtesy telephone calls to nonrespondents to encourage their completion of the surveys. 
All questionnaires were returned by August 2009. In the end, we achieved a 100 percent response rate. Because this was not a sample survey, but rather a survey of the universe of respondents, it has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question, determining sources of information available to respondents, or entering data into a database or analyzing them can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, collecting the data, and analyzing them to minimize such nonsampling error. For example, design methodologists designed the questionnaire in collaboration with GAO staff who had subject matter expertise. In addition to an internal expert technical review by GAO’s Survey Coordination Group, we pretested the survey with U.S. Army, U.S. Navy, and U.S. Air Force officials representing three most critical asset sites as well as officials from the Joint Staff (J-34) and ASD(HD&ASA) to ensure that the questions were relevant, clearly stated, and easy to understand. Since there were relatively few changes based on the pretests and because we were conducting surveys with the universe of respondents, we did not find it necessary to conduct additional pretests. Instead, changes to the content and format of the questionnaire were made after the pretests based on the feedback we received. When we analyzed the data, an independent analyst checked all computer programs. All data were double keyed during the data entry process, and GAO staff traced and verified all of the resulting data to ensure accuracy. To verify and validate the survey recipients’ responses and evaluate in more detail issues identified in the surveys, we conducted six follow-up site visits to a nonprobability sample of surveyed assets. 
We selected the sites for visits judgmentally based on the survey responses regarding issues addressed in this report. During these site visits, we spoke with installation personnel, including asset owners and operators, about their reliance on supporting electrical infrastructure and electricity providers. While findings from our site visits are not generalizable to all 34 most critical assets, we obtained follow-up survey information from installation personnel for critical assets and visited those assets to validate the survey responses, as applicable. We clarified respondents’ interpretation of the survey questions, discussed their responses in detail, and visited the critical assets and their supporting infrastructure to better understand each asset’s unique situation. Finally, we reviewed documentation and guidance related to those critical assets, including vulnerability assessments.

We conducted this performance audit from October 2008 through October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Typical Electrical Power Vulnerabilities and Remediation Measures

The following are examples of electrical power vulnerabilities and corresponding remediation measures.

Vulnerability: Both the primary and backup power supply systems exist in the same room for convenience of maintenance.
Remediation measures:
- Implement physical diversity in the location of backup support.
- Ensure fire suppression systems support continued operation of non-affected systems.
- Increase security on the location during higher threat periods.

Vulnerability: A single transformer provides both commercial and backup power to a critical asset. 
Alternate paths that supply electrical power converge at a single component (i.e., a transformer) and represent a potential point of failure if the common component fails.
Remediation measures:
- Have independent commercial and backup power paths.
- Identify an alternate location to relocate critical operations.
- Use portable generators and uninterruptible power supplies to provide power in case of a single component failure.
- Arrange for immediate emergency maintenance response to restore the component capability.

Vulnerability: Access to buildings that house electrical power supplies to critical assets.
Remediation measures:
- Establish strict access control procedures for buildings and areas housing important system components.
- Relocate important system components to secured areas.
- Bury electric power lines or protect poles with anti-ram barriers.

Vulnerability: Power lines share a right-of-way with other key utilities. Bridges, tunnels, and trenches often involve shared rights-of-way for electrical power that may contain other key utilities.
Remediation measures:
- Establish mitigation options, such as backup power or transferring the mission to another location, based on loss of the right-of-way.
- Establish agreements with the local community to increase security or patrols for these locations during increased threat periods.
- Be aware of maintenance or repair activities for other utilities in these locations.

Vulnerability: Generators and uninterruptible power supplies are not large enough to support the critical asset in case of primary power loss, or the location does not stockpile sufficient fuel to support the operational time frame during an electrical power loss.
Remediation measures:
- Determine critical asset needs and purchase backup generators accordingly.
- Maintain at least minimum operational requirements for consumables (i.e., fuel).
- Distribute critical asset operations to other backup power–supplied locations.

The U.S. Government Accountability Office (GAO) is an independent, non-partisan legislative branch agency that assists Congress in evaluating how the federal government spends taxpayer dollars. 
GAO supports the Congress in meeting its constitutional responsibilities and helps improve the performance and ensure the accountability of the federal government for the benefit of the American people. GAO provides Congress with timely information that is objective, fact-based, nonpartisan, nonideological, fair, and balanced.

You can answer most of the questions easily by checking boxes or filling in blanks. A few questions request narrative answers. Please note that the space provided will expand to accommodate your answer. You may write additional comments at the end of the survey. We request that you provide the most recent information from no earlier than January 1, 2006.

Please use your mouse to navigate throughout the survey by clicking on the field or check box you wish to fill in. Do not use the “Tab” or “Enter” keys, as doing so may cause formatting problems. To select or deselect a check box, simply click or double click on the box.

Please enter asset here:

Please indicate the classification of narrative responses by writing (U) for “unclassified” or (S) for “SECRET” at the beginning of each entry or paragraph, as appropriate. Please limit your responses to Task Critical Asset information classified no higher than “SECRET” in accordance with the Defense Critical Infrastructure Program (DCIP) Security Classification Guide, May 2007.

Since you have been identified as a subject matter expert for this asset, we ask that you coordinate the completion of this survey with other officials as necessary and return one consolidated survey for this asset. After we receive your reply, we may call you to schedule a follow-up telephone interview if we need to clarify some answers in the survey.

To assist us, we ask that you complete and return this survey by June 5, 2009, via SIPRNet to ArtadiD@gao.sgov.gov. Please return the completed survey by e-mail. Simply save this file to your classified computer desktop, hard drive, or disk and attach it to your e-mail. 
If you have any questions about the survey, please contact: Lt Col Norman Worthen Phone: (703) 693-7542 SIPRNet: Norman.Worthen@js.pentagon.smil.mil Thank you for your help. 1. Although several people may participate in the completion of this survey, we ask that you provide contact information below for the person coordinating the completion of the survey in case we need to follow up with additional questions. Name: Rank: Title: Unit Name: Base/Organization: Commercial Phone #: ( ) - E-mail: SIPRNet: Section A. Reliance on Electrical Power Again, please enter the name of the asset for which this survey is being completed. 2. Does this asset require electrical power in order to function and support its military mission(s)? (Mark only one response) Yes No SKIP TO QUESTION #62 3. To what extent does this critical asset require electrical power to function? (Mark only one response) All of the time (continuous/constant) Most of the time About half of the time Less than half of the time None of the time SKIP TO QUESTION #62 Please explain if necessary: 4. Does this critical asset require supporting infrastructure, such as water; natural gas; heating, ventilating, and air conditioning (HVAC); or any other supporting utility to function? (Mark only one response) Yes No SKIP TO QUESTION #6 5. Does this critical asset’s supporting infrastructure require electrical power to function? (Mark only one response) 6. From what source does this asset generally receive its primary electrical power supply? (Mark only one response) Non-DOD electricity provider(s) or utility(ies) (e.g., the commercial power grid) Name of provider(s) or utility(ies): DOD-generated electricity supply based on fossil fuels (e.g., diesel-powered generators) DOD-generated electricity supply based on solar energy DOD-generated electricity supply based on geothermal energy DOD-generated electricity supply based on wind energy DOD-generated electricity supply based on biomass energy DOD-generated electricity supply based on nuclear energy 7. 
Does this asset rely on an intermediate or transitional uninterruptible power supply (UPS) (i.e., a battery backup) to provide power in the event of an electrical power disruption? (Mark only one response) Yes How many minutes is the UPS expected to provide electrical power to the critical asset? No Why not? 8. Does this asset have a back-up power source, other than UPS, in the event of an electrical power disruption from any of the following sources? (Mark one response for each row) a. Batteries or fuel cells (other than UPS) b. Non-DOD electricity provider(s)/utility(ies) (e.g., the commercial power grid) Name of provider(s) or utility(ies): c. DOD-generated electricity supply based on fossil fuels (e.g., diesel-powered generators) d. DOD-generated electricity supply based on solar energy e. DOD-generated electricity supply based on geothermal energy f. DOD-generated electricity supply based on wind energy g. DOD-generated electricity supply based on biomass energy h. DOD-generated electricity supply based on nuclear energy 9. How long, collectively, can back-up electrical power sources identified in question 8 provide electricity to the critical asset? (Mark only one response) Less than 24 hours Between 1 and 3 days (72 hours) More than 3 days up to 1 week Between 1 and 2 weeks Over 2 weeks Indefinitely (as long as fuel is available) 10. Do the back-up electrical power sources identified in question 8 involve electrical power generators? (Mark only one response) Yes No SKIP TO QUESTION #25 11. Are back-up generators dedicated to the critical asset or shared with other critical assets or infrastructure? (Mark only one response) Dedicated to the critical asset Shared with other critical assets or infrastructure 12. Is the back-up generator(s) sufficient to maintain the critical asset and meet its mission(s) requirements? Yes No SKIP TO QUESTION #25 13. 
How many days can these generators function before requiring replenishing energy supplies (e.g., diesel fuel, natural gas, JP-8, etc.)? 14. How many days would the energy supply that is currently stored at the installation or location of the critical asset be able to support these generators? 15. How many days can these generators function before requiring preventive maintenance? 16. How many days can these generators function before requiring corrective maintenance? 17. Do you have another back-up generator that could be utilized while performing preventive or corrective maintenance on the primary generator? 18. How frequently are the generators identified in question 10 above subject to inspection and preventive maintenance to ensure that they function as intended? 19. Do you conduct inspections and preventive maintenance to these generators as prescribed by schedule requirements? 20. How frequently are these generators subject to routine testing to ensure that they function as intended? 21. What plans, if any, do you have to obtain additional energy supplies for these generators once currently stocked supplies run out? 22. What size (in terms of electricity production capacity, such as kilowatts) are these generators? 23. What are the electrical requirements (such as kilowatts) for the critical asset? 24. When was this electrical requirement last validated? (date) Section C. Unplanned Disruptions to Electrical Power 25. How many unplanned disruptions, if any, to this asset’s primary electrical power sources have occurred between January 1, 2006, and December 31, 2008? (Mark only one response) Zero 1 to 5 6 to 10 More than 10 Unknown 26. When did the disruption(s) occur? (List date(s) for each disruption) 27. How long did each of these disruptions last? 28. Do you know the cause(s) for each disruption? Yes No SKIP TO QUESTION #31 29. What were the causes of each disruption? 30. What trends, if any, did you identify regarding causes of the disruptions? 31. 
How, if at all, did the disruption(s) affect the asset’s mission(s)? 32. What actions, if any, did you take to mitigate the impact of the disruption(s) on the asset’s mission(s)? 33. Is this asset incorporated into its electricity provider’s/utility’s reconstitution or restoration planning? (Mark only one response) 34. Have any cyber or computer-based attacks or probes occurred that have negatively affected the delivery of electrical power to the asset or its supporting infrastructure? (Mark only one response) Yes No SKIP TO QUESTION #37 Unknown SKIP TO QUESTION #37 35. How did you determine that such cyber or computer-based attacks or probes occurred? (Mark only one response) 36. Who did you inform, if anyone, about the cyber or computer-based attacks or probes? (Mark only one response) 37. Were any assessments conducted between January 1, 2006, and December 31, 2008, that specifically examined (1) the vulnerabilities of this asset to electrical power disruptions and/or (2) the risks of electrical power disruptions to this asset? (Mark only one response) Yes No SKIP TO QUESTION #54 38. What organization(s) conducted the assessment(s)? 39. What were the date(s) of the assessment(s)? 40. Did the assessment(s) consider vulnerabilities or risks up to one node (electrical power substation) nearest to the installation or location of the critical asset (i.e., “one node beyond the fence”)? (Mark only one response) 41. Did the assessment(s) consider vulnerabilities or risks beyond one node (electrical power substation) nearest to the installation or location of the critical asset (i.e., more than “one node beyond the fence”)? (Mark only one response) 42. Which of the following vulnerabilities or risks listed below were identified from the assessments? (Mark one response for each row) a. The reliability and resiliency of a commercial or DOD installation’s power grid. b. The physical security of commercial and DOD electrical power infrastructures. c. 
The cyber-security of commercial and DOD electrical power infrastructures. d. The lack of back-up electrical generation capabilities (maintenance, testing, fuel supplies, etc.). e. Single points of failure within commercial/DOD electrical power infrastructures. f. The lack of contingency plans for addressing electrical power disruptions to critical assets. g. Other vulnerability or risk Please describe: 43. What detail was provided about each vulnerability or risk identified in question #42 above? 44. Were measures proposed or recommended to address or manage these vulnerabilities or risks? (Mark only one response) Yes No SKIP TO QUESTION #54 45. What measures were proposed or recommended to address or manage these vulnerabilities or risks? 46. At what level within DOD was the decision made to implement the recommended measure(s) or not implement the measure(s) and accept the risks? 47. What criteria, if any, were used in determining which measure(s) would be taken to address, manage, or accept vulnerabilities or risks (e.g., asset criticality, costs, staffing, technology, funding availability, time constraints, prior Base Realignment and Closure decisions, etc.)? 48. Was the decision made to implement the recommended measure(s) or not implement the measure(s) and accept the vulnerabilities or risks? Yes, implement recommended measure(s) No, decided not to implement the recommended measure(s) and accept the vulnerabilities 49. Were measures selected for implementation? Yes No SKIP TO QUESTION #54 50. What were the estimated costs for implementing these measures? 51. Have these measures been implemented, scheduled for implementation, or not scheduled for implementation at this time? (Mark for all that apply). Been implemented Please identify measure(s): Been scheduled for implementation Please identify measure(s): Not scheduled for implementation at this time Please identify measure(s): 52. 
Which DOD major budget category was (or is being) used to implement these measures? (Mark all that apply.) Operations and Maintenance Military Personnel Procurement Research and Development Other (Please specify) Unknown 53. What DOD organizational level implemented (or is implementing) these measures? (Mark all that apply.) Host installation Higher headquarters Major command Combatant command Other (Please specify) Unknown Section F. Coordination with Other Entities 54. Is this asset located within the United States? Yes No SKIP TO QUESTION #57 55. To what extent, if at all, did you or the host installation of this asset coordinate with U.S. electricity provider(s) to identify or address potential vulnerabilities or risks identified in question 42 above? (Mark only one response) Not at all SKIP TO QUESTION #62 Some extent Moderate extent Great extent 56. What was the nature of the coordination with U.S. electricity providers? 57. Is this asset located outside the United States? Yes No SKIP TO QUESTION #62 58. Have there been any efforts to coordinate with host-nation governments and/or foreign-owned electricity providers to identify or address potential vulnerabilities or risks identified in question #42 above? Yes No SKIP TO QUESTION #62 Unknown SKIP TO QUESTION #62 59. What was the nature of the coordination with the host-nation governments and/or foreign-owned electricity provider(s)? 60. Did you or the host installation of this asset coordinate with any other organizations or entities (other than U.S. electricity providers or host-nation governments and/or foreign-owned electricity provider(s)) to identify or address potential vulnerabilities or risks? (Mark only one response) Yes No SKIP TO QUESTION #62 61. With whom did you or the host installation of this asset coordinate? 62. 
Please provide any additional information about efforts to identify, assess, or address the vulnerabilities and risks associated with electrical power disruptions to this asset that may not have been addressed through the previous questions. You can answer most of the questions easily by checking boxes or filling in blanks. A few questions request narrative answers. Please note that the space provided will expand to accommodate your answer. You may write additional comments at the end of the survey. We request that you provide the most recent information from no earlier than January 1, 2006. In response to a congressional mandate in the House Report on the National Defense Authorization Act for Fiscal Year 2009, Title XXVIII, Defense Critical Infrastructure Program, Report 110-652 (May 16, 2008), GAO is conducting a review of the Assurance of Electrical Power Supplies to DOD Critical Assets (GAO code 351266). The following critical asset was selected for this survey: Please use your mouse to navigate throughout the survey by clicking on the field or check box you wish to fill in. Do not use the “Tab” or “Enter” keys as doing so may cause formatting problems. To select or deselect a check box, simply click or double click on the box. 
Please enter asset here: Since the Joint Staff (J-34) has agreed to respond to the mission-related questions for this asset, we ask that the Joint Staff (J-34) coordinate the completion of this survey with other officials as necessary and return one consolidated survey for this asset. Please mark narrative responses by writing (U) for “unclassified,” (S) for “SECRET,” or (TS) for “TOP SECRET” at the beginning of each entry or paragraph, as appropriate. However, please try to limit your responses to Task Critical Asset information classified no higher than “SECRET” in accordance with the Defense Critical Infrastructure Program (DCIP) Security Classification Guide, May 2007. After we receive your reply, we may call you to schedule a follow-up telephone interview if we need to clarify some answers in the survey. Thanks in advance for taking the time to complete this survey. If you have any questions about the survey, please contact: David Artadi, GAO Analyst-in-Charge Phone: (404) 679-1989 SIPRNet: ArtadiD@gao.sgov.gov. To assist us, we ask that you complete and return this survey by June 26, 2009, to David Artadi via SIPRNet at ArtadiD@gao.sgov.gov or to Mark Pross via JWICS at igproma@dodig.ic.gov, as appropriate. Please return the completed survey by e-mail. Simply save this file to your classified computer desktop, hard drive, or disk and attach it to your e-mail. Thank you for your help. Again, please enter the name of the asset for which this survey is being completed. 1. Within which DCIP defense sector(s), as identified in DOD Directive 3020.40, Defense Critical Infrastructure Program (DCIP), is this asset? (Mark all that apply.) Defense Industrial Base (DIB) Financial Services Global Information Grid (GIG) Health Affairs Intelligence, Surveillance, and Reconnaissance (ISR) Logistics Personnel Public Works Space Transportation Unknown 2. Where is this asset physically located? 
(Mark only one response) At a military installation please specify name of installation: At a commercial facility please specify name of facility: At an industrial site please specify name of industrial site: At a stand-alone facility please specify name of facility: 3. What is the nearest city (and U.S. state or country) to this installation, facility, or site? a. City: b. State (only if in the U.S.): c. Country (only if outside the U.S.): 4. Who owns the asset? (Mark only one response) DOD military service please specify: DOD combatant command please specify: Other DOD organization please specify: Other (non-DOD) U.S. government organization (federal, state, or local) please specify: U.S. private organization please specify: Foreign military organization please specify: Foreign government (nonmilitary) please specify: Foreign private company please specify: Other please specify: 5. Who primarily operates the asset during normal operational status? (Mark all that apply.) DOD military department please specify: DOD combatant command please specify: Other DOD organization please specify: Other (non-DOD) U.S. government organization (federal, state, or local) please specify: U.S. private organization please specify: Foreign military please specify: Foreign government (nonmilitary) please specify: Foreign private company please specify: Other please specify: Section B. Mission(s), Combatant Command(s), and Military Service(s) Supported by Asset 6. Which military mission(s) does this asset support within DOD during normal operational status other than those missions already described in the document that the Joint Staff (J-34) provided to GAO about the surveyed assets on November 19, 2008? (Please list and describe the mission(s) based on the “mission impact statements” and “mission essential tasks”—as defined in DOD Manual 3020.45, Vol. I, DOD Critical Asset Identification Process (Oct. 24, 2008)—that were used to designate this asset at its current DCIP critical asset classification.) 7. 
For the military missions identified in question #6, which DOD Unified Combatant Command(s) with regional responsibilities, if any, does this asset support? (Mark all that apply) United States Africa Command (USAFRICOM) United States Central Command (USCENTCOM) United States European Command (USEUCOM) United States Northern Command (USNORTHCOM) United States Pacific Command (USPACOM) United States Southern Command (USSOUTHCOM) 8. For the military missions identified in question #6, which DOD Unified Combatant Command(s) with functional responsibilities, if any, does this asset support? (Mark all that apply) United States Joint Forces Command (USJFCOM) United States Special Operations Command (USSOCOM) United States Strategic Command (USSTRATCOM) United States Transportation Command (USTRANSCOM) 9. For the military missions identified in question #6, which DOD military service(s), if any, does this asset support? (Mark all that apply) United States Army United States Air Force United States Navy United States Marine Corps 10. For the military missions identified in question #6, which other DOD agencies or organizations, if any, does this asset support? 11. Which non-DOD mission(s), if any, does this asset support during normal operational status? (Please include the names of the non-DOD organizations whose missions are supported by the asset.) 12. Please provide any additional information regarding the missions, combatant commands, and military services supported by the asset that may not have been addressed through the previous questions. 
You can answer most of the questions easily by checking boxes or filling in blanks. A few questions request narrative answers. Please note that the space provided will expand to accommodate your answer. You may write additional comments at the end of the survey. We request that you provide the most recent information from no earlier than January 1, 2006. Please use your mouse to navigate throughout the survey by clicking on the field or check box you wish to fill in. Do not use the “Tab” or “Enter” keys as doing so may cause formatting problems. To select or deselect a check box, simply click or double click on the box. Please enter asset here: Since ASD(HD&ASA)/DCIP Office has agreed to respond to the coordination-related questions for this asset, we ask that ASD(HD&ASA)/DCIP Office coordinate the completion of this survey with other officials as necessary and return one consolidated survey for this asset. Please mark narrative responses by writing (U) for “unclassified,” (S) for “SECRET,” or (TS) for “TOP SECRET” at the beginning of each entry or paragraph, as appropriate. However, please try to limit your responses to Task Critical Asset information classified no higher than “SECRET” in accordance with the Defense Critical Infrastructure Program (DCIP) Security Classification Guide, May 2007. After we receive your reply, we may call you to schedule a follow-up telephone interview if we need to clarify some answers in the survey. If you have any questions about the survey, please contact: David Artadi, GAO Analyst-in-Charge Phone: (404) 679-1989 SIPRNet: ArtadiD@gao.sgov.gov. To assist us, we ask that you complete and return this survey by June 26, 2009, to David Artadi via SIPRNet at ArtadiD@gao.sgov.gov or to Mark Pross via JWICS at igproma@dodig.ic.gov, as appropriate. Please return the completed survey by e-mail. 
Simply save this file to your classified computer desktop, hard drive, or disk and attach it to your e-mail. Thank you for your help. Section A. Coordination with DOD DCIP Stakeholders Again, please enter the name of the asset for which this survey is being completed. 1. To what extent has coordination taken place between the owner/custodian/operator of this asset and the following DOD DCIP stakeholders to identify and/or address potential vulnerabilities or risks involving electrical power disruptions? (Mark one response for each row) a. Military service(s) (Specify service(s): ) b. Combatant command(s) (Specify command(s): ) c. Defense Infrastructure Sector Lead Agent(s) (Specify Agent(s): ) d. ASD(HD&ASA)/DCIP Office e. Joint Staff (J-34) f. Mission Assurance Division/Dahlgren, VA g. Defense Threat Reduction Agency (DTRA) h. Other DOD DCIP stakeholder(s) (Specify other stakeholder(s): ) NOTE: If you answered “Not At All” to Question #1, skip to Question #4. Otherwise, continue to Question #2. 2. What was the nature of the coordination with these DOD DCIP stakeholders? 3. What impact, if any, did this coordination with these DOD DCIP stakeholders have on identifying and/or addressing potential vulnerabilities or risks to the asset? Section B. Coordination with Non-DOD Entities 4. To what extent has coordination taken place between DOD stakeholders and the U.S. Department of Homeland Security to identify and/or address potential vulnerabilities or risks involving electrical power disruptions to the asset? Not at all SKIP TO QUESTION #8 Some extent Moderate extent Great extent 5. Which DOD stakeholder(s) were involved in these coordination efforts with the U.S. Department of Homeland Security? 6. What was the nature of the coordination with the U.S. Department of Homeland Security? 7. What impact, if any, did this coordination with the U.S. 
Department of Homeland Security have on identifying and/or addressing potential vulnerabilities or risks involving electrical power disruptions to the asset? 8. To what extent has coordination taken place between DOD stakeholders and the U.S. Department of Energy to identify and/or address potential vulnerabilities or risks involving electrical power disruptions to the asset? Not at all SKIP TO QUESTION #12 Some extent Moderate extent Great extent 9. Which DOD stakeholder(s) were involved in these coordination efforts with the U.S. Department of Energy? 10. What was the nature of the coordination with the U.S. Department of Energy? 11. What impact, if any, did this coordination with the U.S. Department of Energy have on identifying and/or addressing potential vulnerabilities or risks to the asset? 12. To what extent has coordination taken place between DOD stakeholders and the U.S. Federal Energy Regulatory Commission (FERC) to identify and/or address potential vulnerabilities or risks involving electrical power disruptions to the asset? Not at all SKIP TO QUESTION #16 Some extent Moderate extent Great extent 13. Which DOD stakeholder(s) were involved in these coordination efforts with the FERC? 14. What was the nature of the coordination with the FERC? 15. What impact, if any, did this coordination with the FERC have on identifying and/or addressing potential vulnerabilities or risks to the asset? 16. To what extent has coordination taken place between DOD stakeholders and the North American Electric Reliability Corporation (NERC) to identify and/or address potential vulnerabilities or risks involving electrical power disruptions to the asset? Not at all SKIP TO QUESTION #20 Some extent Moderate extent Great extent 17. Which DOD stakeholder(s) were involved in these coordination efforts with the NERC? 18. What was the nature of the coordination with the NERC? 19. 
What impact, if any, did this coordination with the NERC have on identifying and/or addressing potential vulnerabilities or risks to the asset? 20. To what extent has coordination taken place between DOD stakeholders and DOE national laboratories to identify and/or address potential vulnerabilities or risks involving electrical power disruptions to the asset? Not at all SKIP TO QUESTION #24 Some extent (Specify laboratory(ies): ) Moderate extent (Specify laboratory(ies): ) Great extent (Specify laboratory(ies): ) 21. Which DOD stakeholder(s) were involved in these coordination efforts with DOE national laboratories? 22. What was the nature of the coordination with DOE national laboratories? 23. What impact, if any, did this coordination with DOE national laboratories have on identifying and/or addressing potential vulnerabilities or risks to the asset? 24. To what extent has coordination taken place between DOD stakeholders and the U.S. Department of State to identify and/or address potential vulnerabilities or risks involving electrical power disruptions to the asset? Not at all SKIP TO QUESTION #28 Some extent Moderate extent Great extent 25. Which DOD stakeholder(s) were involved in these coordination efforts with the U.S. Department of State? 26. What was the nature of the coordination with the U.S. Department of State? 27. What impact, if any, did this coordination with the U.S. Department of State have on identifying and/or addressing potential vulnerabilities or risks to the asset? 28. To what extent has coordination taken place between DOD stakeholders and electrical power industry associations to identify and/or address potential vulnerabilities or risks involving electrical power disruptions to the asset? Not at all SKIP TO QUESTION #32 Some extent (Specify association(s): ) Moderate extent (Specify association(s): ) Great extent (Specify association(s): ) 29. 
Which DOD stakeholder(s) were involved in these coordination efforts with electrical power industry associations? 30. What was the nature of the coordination with electrical power industry associations? 31. What impact, if any, did this coordination have on identifying and/or addressing potential vulnerabilities or risks to the asset? 32. To what extent has coordination taken place between DOD stakeholders and any other organizations not mentioned above to identify and/or address potential vulnerabilities or risks involving electrical power disruptions to the asset? Not at all SKIP TO QUESTION #36 Some extent (Specify other organization(s): ) Moderate extent (Specify other organization(s): ) Great extent (Specify other organization(s): ) 33. Which DOD stakeholder(s) were involved in these coordination efforts with these other organizations? 34. What was the nature of the coordination with these other organizations? 35. What impact, if any, did this coordination with these other organizations have on identifying and/or addressing potential vulnerabilities or risks to the asset? 36. Please provide any additional information regarding coordination with DOD or non-DOD organizations to identify and/or address potential vulnerabilities or risks involving electrical power disruptions to the asset that may not have been addressed through the previous questions. In addition to the contact named above, Mark A. Pross, Assistant Director; David G. Artadi; James D. Ashley; Yecenia C. Camarillo; Gina M. Flacco; Brian K. Howell; Katherine S. Lenane; Greg A. Marchand; Michael S. Pose; Terry L. Richardson; John W. Van Schaik; Marc J. Schwartz; and Cheryl A. Weissman made key contributions to this report. Defense Critical Infrastructure: Actions Needed to Improve the Consistency, Reliability, and Usefulness of DOD’s Tier 1 Task Critical Asset List. GAO-09-740R. Washington, D.C.: July 17, 2009. 
The Department of Defense (DOD) relies on a global network of defense critical infrastructure so essential that the incapacitation, exploitation, or destruction of an asset within this network could severely affect DOD's ability to deploy, support, and sustain its forces and operations worldwide and to implement its core missions, including those in Iraq and Afghanistan as well as its homeland defense and strategic missions. In October 2008, DOD identified its 34 most critical assets in this network--assets of such extraordinary importance to DOD operations that according to DOD, their incapacitation or destruction would have a very serious, debilitating effect on the ability of the department to fulfill its missions. Located both within the United States and abroad, DOD's most critical assets include both DOD- and non-DOD-owned assets. DOD relies overwhelmingly on commercial electrical power grids for secure, uninterrupted electrical power supplies to support its critical assets. DOD is the single largest consumer of energy in the United States, as we have noted in previous work. According to a 2008 report by the Defense Science Board Task Force on DOD's Energy Strategy, DOD has traditionally assumed that commercial electrical power grids are highly reliable and subject to only infrequent (generally weather-related), short-term disruptions. For backup supplies of electricity, DOD has depended primarily on diesel generators with short-term fuel supplies. In 2008, however, the Defense Science Board reported that "[c]ritical national security and homeland defense missions are at an unacceptably high risk of extended outage from failure of the [commercial electrical power] grid" upon which DOD overwhelmingly relies for its electrical power supplies. 
Specifically, the reliability and security of commercial electrical power grids are increasingly threatened by a convergence of challenges, including increased user demand, an aging electrical power infrastructure, increased reliance on automated control systems that are susceptible to cyberattack, the attractiveness of electrical power infrastructure for terrorist attacks, long lead times for replacing key electrical power equipment, and more frequent interruptions in fuel supplies to electricity-generating plants. As a result, commercial electrical power grids have become increasingly fragile and vulnerable to extended disruptions that could severely impact DOD's most critical assets, their supporting infrastructure, and ultimately the missions they support. DOD's most critical assets are vulnerable to disruptions in electrical power supplies, but DOD lacks sufficient information to determine the full extent of the risks and vulnerabilities these assets face. All 34 of these most critical assets require electricity continuously to support their military missions, and 31 of them rely on commercial power grids--which the Defense Science Board Task Force on DOD Energy Strategy has characterized as increasingly fragile and vulnerable--as their primary source of electricity. DOD Instruction 3020.45 requires DOD to conduct vulnerability assessments on all its most critical assets at least once every 3 years. Also, the Office of the Assistant Secretary of Defense for Homeland Defense and Americas' Security Affairs (ASD(HD&ASA)) has requested the U.S. Army Corps of Engineers--which serves as the Defense Critical Infrastructure Program's Defense Infrastructure Sector Lead Agent for Public Works--to conduct preliminary technical analyses of DOD installation infrastructure (including electrical power infrastructure) to support the teams conducting Defense Critical Infrastructure Program vulnerability assessments on the most critical assets.
(1) As of June 2009, and according to ASD(HD&ASA) and the Joint Staff, DOD had conducted Defense Critical Infrastructure Program vulnerability assessments on 14 of the 34 most critical assets. DOD has not conducted the remaining assessments because it did not identify the most critical assets until October 2008. To comply with the instruction, DOD would have to complete Defense Critical Infrastructure Program vulnerability assessments on all most critical assets by October 2011.

(2) DOD has neither conducted, nor developed additional guidelines and time frames for conducting, these vulnerability assessments on any of the five non-DOD-owned most critical assets located in the United States or foreign countries, citing security concerns and political sensitivities.

(3) The U.S. Army Corps of Engineers has not completed the preliminary technical analyses requested because it has not yet received infrastructure-related information regarding the networks, assets, points of service, and inter- and intradependencies related to electrical power systems that it requires from the military services.

(4) Although DOD is in the process of developing guidelines, it does not systematically coordinate Defense Critical Infrastructure Program vulnerability assessment processes and guidelines with those of other, complementary DOD mission assurance programs--including force protection; antiterrorism; information assurance; continuity of operations; chemical, biological, radiological, nuclear, and high-explosive defense; readiness; and installation preparedness--that also examine electrical power vulnerabilities of the most critical assets, because DOD has not established specific guidelines for such systematic coordination.
(5) The 10 Defense Critical Infrastructure Program vulnerability assessments we reviewed did not explicitly consider assets' vulnerabilities to longer-term (i.e., of up to several weeks' duration) electrical power disruptions on a mission-specific basis, as DOD has not developed explicit Defense Critical Infrastructure Program benchmarks for assessing electrical power vulnerabilities associated with longer-term electrical power disruptions. With more comprehensive knowledge of the most critical assets' risks and vulnerabilities to electrical power disruptions, DOD can better avoid compromising crucial DOD-wide missions during electrical power disruptions. This additional information may also improve DOD's ability to effectively prioritize funding needed to address identified risks and vulnerabilities of its most critical assets to electrical power disruptions.
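The compliance arithmetic behind these figures is simple; the sketch below (with the dates and counts taken from the text, and a nominal day of month assumed) reproduces the October 2011 deadline and the number of assessments still outstanding as of June 2009.

```python
from datetime import date

# Figures stated in the report: DOD identified its 34 most critical
# assets in October 2008; DOD Instruction 3020.45 requires a
# vulnerability assessment of each asset at least once every 3 years;
# 14 assessments were complete as of June 2009.
identified = date(2008, 10, 1)  # day of month is an assumption
cycle_years = 3
total_assets = 34
assessed = 14

# Deadline to assess every asset once within the first 3-year cycle.
deadline = identified.replace(year=identified.year + cycle_years)
remaining = total_assets - assessed

print(f"All assets must be assessed by {deadline:%B %Y}")  # October 2011
print(f"Assessments remaining: {remaining}")               # 20
```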
As shown in figure 1, each of the 16 parts we purchased was either suspect counterfeit or bogus. Specifically, all 12 of the parts we received after requesting authentic part numbers (either with valid or invalid date codes) were suspect counterfeit, according to SMT Corp. In addition, vendors provided us with 4 bogus parts after we requested invalid part numbers, which demonstrates their willingness to sell parts that do not technically exist. The following sections detail our findings for each of the three categories of parts we purchased. Under our selection methodology, the 16 parts we purchased were provided by 13 vendors in China. After submitting requests for quotes on both platforms, we received responses from 396 vendors, of which 334 were located in China; 25 in the United States; and 37 in other countries, including the United Kingdom and Japan. All 40 of the responses we received for the bogus part numbers were from vendors located in China (6 of these vendors also offered to sell us parts for the authentic part numbers we requested). We selected the first of any vendor among those offering the lowest prices that provided enough information to purchase a given part, generally within 2 weeks. As such, 3 vendors each supplied 2 parts and 10 vendors each supplied 1 part. We sent 13 payments to Shenzhen, 2 payments to Shantou, and 1 payment to Beijing. Despite operating under different company names, 2 vendors provided us with identical information for sending payment (name of representative and contact information). There could be a number of explanations for this, ranging from legitimate (the vendors handle payments through the same banker or accountant) to potentially deceptive (same individuals representing themselves as multiple companies). Thirteen parts were then shipped from Shenzhen and 3 from Hong Kong. All seven of the obsolete or rare parts that SMT Corp. tested were suspected counterfeits.
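The vendor-selection rule in the methodology above (the first responsive vendor among those offering the lowest prices) can be sketched as follows; the vendors, prices, and response details here are hypothetical, not drawn from the actual quotes GAO received.

```python
# Hypothetical quotes: (vendor name, unit price in USD, whether the
# vendor supplied enough information to complete a purchase).
quotes = [
    ("Vendor A", 4.10, False),  # lowest price, but unusable response
    ("Vendor B", 4.25, True),
    ("Vendor C", 4.25, True),   # same price as B, responded later
    ("Vendor D", 9.80, True),
]

def select_vendor(quotes):
    """Return the first vendor, in ascending price order, that provided
    enough information to purchase the part; None if no vendor did."""
    # sorted() is stable, so price ties keep their original order.
    for vendor, _price, purchasable in sorted(quotes, key=lambda q: q[1]):
        if purchasable:
            return vendor
    return None

print(select_vendor(quotes))  # Vendor B
```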
Each part failed multiple component authentication analyses, including visual, chemical, X-ray, and microscopic testing. The parts were purchased from five different vendors. Figure 2 provides photos and detailed test results for each part. DAA6 (two parts purchased). Both purchases made using part number DAA6 contained samples that failed multiple authentication analyses, leading SMT Corp. to conclude that the parts were suspect counterfeit. Both parts were purchased from different vendors using the same part number, but were not identical, as shown in figure 2. An authentic part with this part number is an operational amplifier that may be commonly found in the Army and Air Force’s Joint Surveillance and Target Attack Radar System; the Air Force’s F-15 Eagle fighter plane; and the Air Force, Navy, and Marine Corps’s Maverick AGM-65A missile. If authentic, this part converts input voltages into output voltages that can be hundreds to thousands of times larger. Failure can lead to unreliable operation of several components (e.g., integrated circuits) in the system and poses risks to the function of the system where the parts reside. The part we received from one vendor failed four of seven authentication analyses. Visual inspection found inconsistencies, including different or missing markings and scratches, which suggested that samples were re-marked. Scanning electron microscopy (SEM) analysis revealed further evidence of re-marking. X-ray fluorescence (XRF) testing of the samples revealed that the leads contain no lead (Pb) instead of the 3 percent lead (Pb) required by military specifications. Five samples were chosen for delidding, which exposes parts’ die, because of their side marking inconsistencies. While all five samples had the same die, the die markings were inconsistent. According to SMT Corp., die markings in components manufactured within the same date and lot code should be consistent.
Finally, the devices found in the first lot tested went into “last time buy” status in 2001, meaning that the parts were misrepresented as newer than they actually were. The manufacturer confirmed this status and added that the part marking did not match its marking scheme, meaning that the date code marked on the samples would not be possible. The part received from the second vendor failed five of seven authentication analyses. Visual inspection again found inconsistencies, including additional markings on about half the samples. Further, scratches and reconditioned leads indicated that the parts were removed from a working environment—that is, not new as we requested. SEM analysis corroborated these findings. As with the other DAA6 part, XRF testing revealed that the leads contain no lead (Pb). X-rays revealed different sized die, and delidding revealed that the die were differently marked. IHH1 (one part purchased). The purchase made using part number IHH1 contained samples that failed five of nine authentication analyses, leading SMT Corp. to conclude that the part was suspect counterfeit. An authentic part with this part number is a multiplexer, which allows electronic signals from several different sources to be checked at one location. It has been used in at least 63 different DOD weapon systems, including the Air Force Special Operations Forces’ AC-130H Gunship aircraft, the Air Force’s B-2B aircraft, and the Navy’s E-2C Hawkeye aircraft. If at least one of the specific signals is critical to the successful operation of the system, then failure could pose a risk to the system overall. Visual inspection revealed numerous issues, including color differences in the top and bottom of the part’s surfaces, suggesting resurfacing and re-marking. Large amounts of scuffs and scratches, foreign debris, and substandard leads were also found. The part also failed resistance to solvents (RTS) testing when it resulted in removal of resurfacing material.
Further, Dynasolve testing (additional RTS testing) revealed remnants of a completely different manufacturer and part number. SEM showed evidence of lapping, which is the precise removal of a part’s material to produce the desired dimensions, finish, or shape. Finally, delidding showed die that were similar but insufficiently marked to determine whether they matched the authentic part number. However, because of the failure of the Dynasolve testing, the die cannot be correct. MLL1 (two parts purchased). Both purchases made using part number MLL1 contained a number of samples that failed three of seven authentication analyses, leading SMT Corp. to conclude that the parts were suspect counterfeit. Both parts were purchased from different vendors using the same part number, but were not identical, as shown in figure 2. An authentic part with this number is a voltage regulator that may be commonly found in military systems such as the Air Force’s KC-130 Hercules aircraft, the Navy’s F/A-18E Super Hornet fighter plane, the Marine Corps’s V-22 Osprey aircraft, and the Navy’s SSN-688 Los Angeles Class nuclear-powered attack submarine. If authentic, these parts provide accurate power voltage to segments of the system they serve. Failure can lead to unreliable operation of several components (e.g., integrated circuits) in the system and poses risks to the function of the system where the parts reside. The parts received from both vendors failed the same authentication analyses. Visual inspection was performed on all evidence samples from both purchases. Different color epoxy seals were noted within both lots, according to SMT Corp., which is common in suspect counterfeit devices because many date and lot codes are re-marked to create a uniform appearance. Moreover, XRF testing of the samples revealed that the leads contain no lead (Pb); according to military performance standards, leads should be alloyed with at least 3 percent of lead (Pb). 
Further, XRF data between the top and bottom of the lead revealed inconsistencies in chemical composition, leading SMT Corp. to conclude that the leads were extended with the intention to deceive. Microscopic inspection revealed that different revision numbers of the die and differences in various die markings were found even though the samples were advertised to be from the same lot and date code. Commonly, components manufactured within the same date and lot code will have the same die revisions. According to SMT Corp.’s report, the manufacturer also stated that “it is very unusual to have two die runs in a common assembly lot. This is suspicious.” Finally, the devices found in the first lot tested went into “last time buy” status—an end-of-life designation—on September 4, 2001, meaning that the parts were misrepresented as newer than they actually were. The manufacturer confirmed this status and added that the part marking did not match its marking scheme, meaning that the date code marked on the samples would not be possible. YCC7 (two parts purchased). Both purchases made using part number YCC7 contained samples that failed several authentication analyses, leading SMT Corp. to conclude that the parts were suspect counterfeit. Both parts were purchased from different vendors using the same part number. An authentic part with this part number is a memory chip that has been used in at least 41 different DOD weapons systems, including the ballistic missile early warning system, the Air Force’s Peacekeeper missile and B-1B aircraft, the Navy’s Trident submarine and Arleigh Burke class of guided missile destroyer, and the Marine Corps’s Harrier aircraft. Failure of the chip, if not redundant, could pose risk to the overall system. The part we received from one vendor failed four of seven authentication analyses. Visual inspection identified numerous issues, including bent or misshapen leads and lead ends and deformed, less-detailed logos of the claimed manufacturer. 
X-ray analysis revealed that various parts in the samples contained different sized die. SEM analysis showed that surface material had been precisely removed to allow for re-marking. Finally, delidding of two samples revealed die that were marked from a competitor manufacturer with a different part number than the one we requested. In addition, one die was marked with a 1986 copyright, while the other was labeled 1992. The part received from the second vendor failed four of nine authentication analyses. Visual inspection showed evidence of re-marking, with the color of the top surfaces of samples not matching the color of the bottom surfaces. Some samples displayed faded markings while others were blank and had heavy scuff marks to suggest resurfacing. The markings were also not as clear and consistently placed as manufacturer-etched markings would be. Leads were substandard in quality, had been refurbished, and were not as thick as specified. Further, SEM showed evidence of lapping. Finally, the samples responded inconsistently to Dynasolve testing. Similarly, all five of the parts we received and tested after requesting legitimate part numbers but specifying postproduction date codes were also suspected counterfeit, according to SMT Corp. By fulfilling our requests, the four vendors that provided these parts represented them as several years newer than the date the parts were last manufactured, as verified by the part manufacturers. Figure 3 provides photos and detailed test results. DAA6 (one part purchased). The purchase made using part number DAA6 contained samples that failed four of seven authentication analyses, leading SMT Corp. to conclude that the part was suspect counterfeit. Surfaces on the parts in the evidence lots were found to have scratches similar to suspect counterfeit devices that have been re-marked, as confirmed by both visual inspection and SEM analysis.
In addition, the quality of exterior markings, including inconsistencies in the manufacturer’s logo, was lower than would be expected for authentic devices. Tooling marks were also found on the bottom of all components within the evidence lot; these marks suggest that the components were pulled from a working environment. Further inspection led SMT Corp. to conclude that the refurbished leads on many samples had been extended with the intention to deceive. Moreover, XRF analysis revealed the leads contain no lead (Pb) instead of the 3 percent lead (Pb) required by military specifications. Delidding revealed that the die, while correct for this device, were inconsistent. As previously stated, multiple die runs are considered suspicious. Finally, some of the samples went into “last time buy” status in 2001, despite the fact that we requested parts from 2005 or later and the vendor agreed to provide parts from 2010 or later. IHH1 (one part purchased). The purchase made using part number IHH1 contained samples that failed seven of nine authentication analyses, leading SMT Corp. to conclude that the part was suspect counterfeit. The part we received was supplied by a different vendor than the one that supplied the IHH1 part shown in figure 2. Visual inspection revealed numerous issues, including mismatching surface colors, many scratches and scuffs, foreign debris, and leads that were not uniformly aligned. SEM also showed evidence of lapping. RTS testing resulted in removal of resurfacing material, and surfaces faded when exposed to Dynasolve, which should not occur. Further, samples did not solder properly. Finally, X-rays indicated that different die were used within the samples. This was confirmed in delidding, which revealed inconsistencies in size, shape, and date markings. Of the two types of die found in the sample, one does not match the authentic part number. MLL1 (one part purchased).
The purchase made using part number MLL1 contained samples that failed four of seven authentication analyses, leading SMT Corp. to conclude that the part was suspect counterfeit. The part we received was supplied by a different vendor than the ones who supplied the MLL1 parts shown in figure 2. Visual inspection revealed scuffs and scratches indicative of re-marking, which was also seen in SEM analysis. Different colored epoxy seals and variegated sizes and colors of the center mounting slug were also seen. Leads also showed evidence of being refurbished with the intent to deceive. XRF testing of the samples revealed that the leads contain no lead (Pb); according to military performance standards, leads should be alloyed with at least 3 percent of lead (Pb). Delidding revealed that die, though similar, had markings indicating different revisions, which is uncommon for die manufactured in the same date code. Finally, the devices went into “last time buy” status in 2001, whereas the tested parts showed a date code indicating they were made in 2008. The manufacturer confirmed this status. YCC7 (two parts purchased). The two purchases made from different vendors using part number YCC7 contained samples that failed several authentication analyses, leading SMT Corp. to conclude that they were suspect counterfeit. The part we received from one vendor failed three of eight authentication analyses. Visual inspection identified numerous issues, including different colored surfaces that suggest re-marking and unknown residues that indicate improper handling or storage. SEM analysis showed that surface material had been precisely removed to allow for re-marking, similarly to a YCC7 part with legitimate date codes tested above. Further, according to the manufacturer, the legitimate version of this part was last shipped in 2003, whereas the tested part showed a manufacturing date code of 2006. RTS testing resulted in removal of the part marking. 
The part received from the second vendor failed three of nine authentication analyses. Visual inspection detected numerous issues, including different colored surfaces that suggest re-marking. The markings were also substandard, lacking clarity and consistency in placement. RTS testing removed part markings, further suggesting re-marking. SEM showed evidence of lapping. Delidding revealed die that were consistent with the authentic part, but the date code showed evidence of re-marking to make them appear as if they had come from a homogeneous lot. Finally, the manufacturer verified that it last shipped this part in 2003, whereas our samples were marked 2007, which according to SMT Corp., could not be possible. We received offers from 40 vendors in China to supply parts using invalid part numbers, and we purchased four parts from four vendors to determine whether they would in fact supply bogus parts. (See fig. 4.) These were different vendors than the ones that supplied us with the suspect counterfeit parts. The invalid numbers were based on actual part numbers, but certain portions that define a part’s performance specifications were changed. For example, one of our invalid numbers was based on an actual voltage regulator but specified bogus operating characteristics. None of the invalid part numbers were listed in DLA’s Federal Logistics Information System and, according to selected manufacturers, none are associated with parts that have ever been manufactured. As such, we did not send the parts to SMT Corp. for authentication analysis. We received the four bogus parts after requesting invalid part numbers DAA5, GDD4, and 3MM8. We made two orders using DAA5, one from each Internet purchasing platform, which were fulfilled by different vendors. The parts we received from each vendor appeared similar, as shown in figure 4. The similarity may be due to a number of factors.
For example, the vendors could have simply ignored the invalid portion of the part numbers we requested (they did not contact us to inform us that the numbers were invalid). Another possible explanation could be that the parts happened to be fulfilled by the same vendor operating under two different names. In furtherance of our investigation to determine the willingness of firms to provide us bogus parts, we created a totally fictitious part number that was not based on an actual part number and requested quotations over one Internet platform. We received an offer to supply the part from one vendor, but did not invest the resources to purchase the bogus part. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Acting Under Secretary of Defense for Acquisition, Technology, and Logistics, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact Richard Hillman at (202) 512-6722 or hillmanr@gao.gov or Timothy Persons at (202) 512-6522 or personst@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report are listed in appendix II. This appendix provides details on each of the tests that constitute the authentication analysis SMT Corp. conducted for the parts we purchased. Visual inspection: Visual inspection is performed on a predetermined number of samples (usually 100 percent) to look for legitimate nonconformance issues as well as any red flags commonly found within suspect counterfeit devices. 
Resistance to solvents (RTS): A mixture of mineral spirits and isopropyl alcohol is used to determine the part marking resistance and pure acetone is used to remove any resurface material.

X-ray fluorescence (XRF) elemental analysis: The XRF gathers and measures the elements within a target area. This is used specifically for testing components for RoHS or Hi-Rel conformance, which refer to dangerous substances such as lead (Pb), cadmium (Cd), and mercury (Hg) that are commonly used in electronics manufacturing. For suspect counterfeit devices, it helps determine if a component has the correct plating for the specification it is supposed to adhere to.

Package configuration and dimensions: This test measures key areas of the device to see if they fall within industry specifications.

Real-time X-ray analysis: X-ray analysis is performed on a predetermined number of samples (usually 100 percent). The internal construction of components is inspected (depending on the component package type) for legitimate issues such as broken/taut bond wires, electrostatic discharge damage, broken die, and so forth. For suspect counterfeit devices, the differences in die size/shape, lead frames, bond wire layout, and so forth are inspected.

Scanning electron microscopy: A scanning electron microscope is used to perform an exterior visual inspection—more in depth than the previous visual inspection. This is usually performed on a two-piece sample from the evidence lot. Depending on the package type, indications of suspect counterfeit devices are sought, including surface lapping, sandblasting, and sanding with regard to part marking removal.

Solderability: This test is usually for legitimate components to determine if they will solder properly when they are used in production.
Dynasolve: Dynasolve is a chemical used to break down epoxies in an effort to remove resurfacing material that is impervious to the standard RTS test. Decapsulation/delidding and die verification: The die of a component is exposed with either corrosive materials or a cutting apparatus. This is done to inspect the die or “brain” of a component to determine its legitimacy. This process is performed on numerous samples to look for differences between samples, such as die metallization layout, revisions, part numbers, and so forth—all of which are red flags for suspect counterfeit parts. Cindy Brown Barnes, Assistant Director; Gary Bianchi, Assistant Director; Virginia Chanley; Dennis Fauber; Barbara Lewis; Jeffery McDermott; Maria McMullen; Kimberly Perteet, Analyst in Charge; Ramon Rodriguez; and Timothy Walker made key contributions to this report.
Counterfeit parts—generally the misrepresentation of parts’ identity or pedigree—can seriously disrupt the Department of Defense (DOD) supply chain, harm weapon systems integrity, and endanger troops’ lives. In November 2011 testimony (GAO-12-213T), GAO summarized preliminary observations from its investigation into the purchase and authenticity testing of selected, military-grade electronic parts that may enter the DOD supply chain. As requested, this report presents GAO’s final findings on this issue. The results are based on a nongeneralizable sample and cannot be used to make inferences about the extent to which parts are being counterfeited. GAO created a fictitious company and gained membership to two Internet platforms providing access to vendors selling military-grade electronic parts. GAO requested quotes from numerous vendors to purchase a total of 16 parts from three categories: (1) authentic part numbers for obsolete and rare parts; (2) authentic part numbers with postproduction date codes (date code after the last date the part was manufactured); and (3) bogus, or fictitious, part numbers that are not associated with any authentic parts. To determine whether the parts received were counterfeit, GAO contracted with a qualified, independent testing lab for full component authentication analysis of the first two categories of parts, but not the third (bogus) category. Part numbers have been altered for reporting purposes. GAO is not making recommendations in this report. Suspect counterfeit and bogus—part numbers that are not associated with any authentic parts—military-grade electronic parts can be found on Internet purchasing platforms, as none of the 16 parts that vendors provided to GAO were legitimate. 
“Suspect counterfeit,” which applies to the first two categories of parts that were tested, is the strongest term used by an independent testing lab, signifying a potential violation of intellectual property rights, copyrights, or trademark laws, or misrepresentation to defraud or deceive. After submitting requests for quotes on both platforms, GAO received responses from 396 vendors, of which 334 were located in China; 25 in the United States; and 37 in other countries, including the United Kingdom and Japan. For each of the 16 parts purchased, vendors usually responded within a day. GAO selected the first vendor among those offering the lowest prices that provided enough information to purchase a given part, generally within 2 weeks. Under GAO’s selection methodology, all 16 parts were provided by vendors in China. Specifically, all 12 of the parts received after GAO requested rare part numbers or postproduction date codes were suspect counterfeit, according to the testing lab. Multiple authentication tests, ranging from inspection with electron microscopes to X-ray analysis, revealed that the parts had been re-marked to display the part numbers and manufacturer logos of authentic parts. Other features were found to be deficient from military standards, such as the metallic composition of certain pieces. For the parts requested using postproduction date codes, the vendors also altered date markings to represent the parts as newer than when they were last manufactured, as verified by the parts’ makers. Finally, after submitting requests for bogus parts using invalid part numbers, GAO purchased four parts from four vendors, which shows their willingness to supply parts that do not technically exist.
The enactment of MMA created and affected a number of activities that SSA identified as related to its responsibilities. Listed below are six provisions enumerated in MMA affecting SSA, and the Medicare-related functions and activities undertaken by SSA as a result. Prescription Drug Program (Part D) and Low Income Subsidy (LIS)–In addition to establishing a beneficiary outreach demonstration project for this provision, SSA is responsible for developing forms and procedures for LIS including a simplified application, conducting education and outreach activities, and processing LIS appeals. In addition, SSA will use computer matching data for verification of attestations, process subsidy-changing events, periodically redetermine LIS eligibility, and deduct Part D premiums when the beneficiary chooses to have his or her premium withheld from the Title II benefit payment and the Centers for Medicare and Medicaid Services (CMS) notifies SSA of this. SSA was also responsible for transferring premiums withheld from Title II benefit payments to CMS. Medicare Prescription Drug Discount Card Program–SSA will support CMS administration of this program by providing data from SSA’s records and data obtained from other federal agencies on potentially eligible Medicare beneficiaries for transitional assistance. TRICARE–SSA will be responsible for enrolling TRICARE beneficiaries into Medicare Part B, calculating their premiums, and refunding excess premiums paid. Medicare Part B Premium–SSA is tasked with implementing Medicare Part B income-based premium subsidy reductions for beneficiaries with income above a stipulated level. SSA will also collect the income-related monthly amount from the Title II benefit payment and transfer the premiums withheld from Title II benefit payments to CMS, and process appeals of the initial determination. 
Medicare Advantage (MA) Part C–SSA will compute and collect Part C premiums when the beneficiary chooses to have premiums deducted from his or her Title II benefit payment and transfer premiums withheld from Title II benefit payments to CMS. Health Savings Accounts–SSA will obtain information from employer reports, record the information on SSA’s records, and pass the information to the Internal Revenue Service. As a result of the enactment of MMA, SSA conducts outreach efforts to identify individuals entitled to benefits or enrolled under the Medicare program under Title 18 of the Social Security Act, who may be eligible for transitional assistance under the Medicare Prescription Drug Discount Card Program and premium and cost-sharing subsidies under the Prescription Drug Card Part D Program. SSA continues to have a role in the outreach to low-income Medicare beneficiaries for payment of Medicare cost-sharing under the Medicaid program. SSA is also required to verify the eligibility of applicants for the subsidy under MMA who self-certify their income, resources, and family size. To determine whether a Medicare beneficiary is eligible for a subsidy, SSA collects information on whether the individual has income up to 150 percent of the federal poverty guidelines. SSA has established a database to maintain the information it collects and shares information on those eligible and ineligible for subsidies with CMS. To implement the new responsibilities under MMA, SSA established a Medicare Prescription Drug Planning and Implementation Task Force in December 2003. The objectives of the task force included identifying the potentially eligible population, the number and locations of potential workloads and staff and material resource needs, and agreeing on specific responsibilities with other federal government agencies. 
SSA also identified the specific tasks to carry out the implementation of the activities for each of the provisions under MMA, including designing and managing the planning and implementation processes; issuing regulations; and developing and implementing communication strategies, budget, appeals process, subsidy-changing event process, redetermination process, and strategies for service delivery. Under MMA, the Congress provided SSA with a $500 million appropriation to fund SSA’s start-up administrative costs to implement MMA, during fiscal years 2004 and 2005, but later extended this budget authority to fiscal year 2006. SSA reported that the $500 million for these administrative costs was exhausted in January 2006, and MMA costs are now funded using the LAE. LAE is SSA’s basic administrative account and is an annual appropriation financed from the Social Security and Medicare trust funds. The total amount of SSA administrative costs covered by the Medicare Trust Funds to fund SSA’s Medicare responsibilities has increased with the enactment of MMA. Prior to the establishment of Part D under MMA, Medicare did not generally pay for outpatient prescription drugs, but it did provide health insurance to individuals who are either 65 or older or disabled. Table 1 reflects SSA’s reported administrative cost outlays covered by the Medicare Trust Funds for implementing MMA activities and other Medicare activities. SSA reported spending the $500 million MMA funds from December 2003 through January 2006 on activities to implement the provisions specified in MMA. SSA’s financial reports showed that almost all of the funding reported was used for personnel-related expenses, contractors, and indirect costs (see table 2). More than half of the funds were spent on personnel-related expenses for staff hours used on MMA activities at SSA’s headquarters and field offices. Once the $500 million was spent, MMA costs were funded by SSA’s LAE appropriation. 
SSA used its financial accounting and reporting system, SSOARS, and its cost analysis system (CAS) to track overall costs related to the implementation of MMA legislation. SSA did not separately track the administrative costs incurred to implement the individual provisions under MMA legislation because the act did not specifically require SSA to do so and it was not cost effective to do so. SSA reported that it spent approximately $261 million on personnel-related expenses, which consisted of salaries and related benefits for both newly hired and existing SSA employees. As a result of MMA, SSA hired and trained more than 2,200 new employees at its field offices and 500 at headquarters to handle the additional workload created by SSA’s new responsibilities under MMA. Personnel-related expenses included salaries for current SSA employees who were also involved in activities related to implementing MMA, including the new Medicare Part D responsibilities. Many of these employees may have been engaged in work on other SSA programs during the same time. SSA used CAS to prorate these employees’ salaries and related expenses based on the amount of time employees charged to MMA and various other SSA responsibilities. SSA reported indirect costs of approximately $117 million for MMA implementation. During each year, SSA incurred administrative costs in support of the various programs. For example, SSA makes rental payments for most of the approximately 1,300 regional field offices it has located around the country and staff in these offices perform duties related to all of the programs administered by SSA, including MMA. In order to allocate these administrative costs to each of its programs, SSA used its cost analysis system, CAS, to charge certain types of costs either proportionally or in full against the MMA appropriation. 
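The proration approach described above can be illustrated with a brief sketch (a hypothetical illustration only, not SSA's actual CAS logic; the program names, salary, and hour figures are invented):

```python
# Hypothetical sketch of how a cost analysis system might prorate an
# employee's salary across programs based on hours charged to each.
# (Illustrative only -- not SSA's actual CAS implementation.)

def prorate_salary(salary, hours_by_program):
    """Allocate a salary across programs in proportion to hours charged."""
    total_hours = sum(hours_by_program.values())
    return {
        program: salary * hours / total_hours
        for program, hours in hours_by_program.items()
    }

# An employee who charged 30 of 160 hours in a pay period to MMA work:
allocation = prorate_salary(8000.00, {"MMA": 30, "OASDI": 100, "SSI": 30})
print(allocation["MMA"])  # 1500.0, i.e., 30/160 of the $8,000 salary
```

The same proportional logic applies whether the shared cost is salary, rent, or other overhead: the allocation keys differ, but each program's share is driven by the fraction of effort charged to it.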
SSA charges both direct and indirect costs to its programs either by directly charging specific program-related amounts to the affected program in SSOARS or using CAS to allocate personnel-related and general administrative costs that apply to more than one SSA program. CAS accounts for work-years and costs for each program administered by SSA by specific subfunctions within the SSA programs. It is a centralized, computer-based system that uses data from the financial reporting system to break out costs at SSA by program and major functions. The main objective of CAS is to distribute costs equitably across programs and among the various trust funds and general funds. However, SSA stated that systems modifications to enable tracking MMA administrative costs by each of the MMA provisions were not required by the legislation and that it would not have been cost effective to modify the system. SSA reported that $119.6 million in MMA funds went to contractors, vendors, and other government agencies that provided various goods and services necessary for SSA to meet its responsibilities under MMA. Some of the largest reported expenditures included $34.2 million paid to one contractor for software systems development; $23 million paid to one contractor for telephone-based beneficiary outreach and information distribution; $18.6 million to the United States Postal Service for mass mailings and other paper-based information distribution; $11.4 million to one contractor for computer hardware and software; and $6.5 million to the Government Printing Office for the design and production of informational mailings, posters, and other printed materials. The remaining expenditures to other contractors, vendors, and government agencies for goods and services charged to the MMA implementation appropriation included additional computer hardware, software development, and information systems support, as well as installation and reconfiguration of MMA service center workstations. 
As of February 2007, SSA reported it had completed 16 of the 22 tasks for implementing six provisions of MMA. SSA is continuing its implementation of the remaining six tasks using LAE funding. Table 3 provides a breakdown of the 22 tasks by major MMA provision. SSA had agencywide policies and procedures in place over its cost tracking and allocation, asset accountability, and invoice review and approval processes. SSA also established specific guidance to charge and allocate its costs to implement MMA. However, those policies and procedures were not always complied with consistently. We found that SSA did not effectively communicate the specific MMA-related guidance to all relevant staff. This ineffective communication resulted in millions of dollars of costs being misallocated to MMA. Some of these misallocations were subsequently detected by SSA and corrected during SSA’s review process. In the area of purchase card transactions, which represented 0.5 percent of the $500 million, we found some instances where credit card purchases had not yet been correctly allocated to MMA. In addition, we found that some purchases made with credit cards were not properly supported or reviewed and may not have been a proper use of MMA funds. Finally, noncompliance with SSA policies and procedures over asset accountability resulted in inadequately tracked accountable assets that were purchased with MMA funds. In order to track costs associated with the development and implementation of SSA’s MMA-related activities, SSA used existing processes and applications, such as CAS, expanded existing processes, such as establishing unique common accounting numbers (CAN) for MMA- related costs, and implemented new processes, such as the online time recording system for MMA-related time spent by administrative staff. 
In addition, SSA developed specific cost accounting principles for each of its major offices which, when appropriately applied, would enable the offices to allocate nonpersonnel costs among MMA-related activities and across other SSA operating activities. However, the lack of a formal process to ensure that this critical information was communicated to the appropriate level within the SSA offices resulted in misallocation of costs to MMA. In January 2004, SSA initiated a process to expand on its cost accounting process to track and report the cost of implementing the MMA-related activities. The Deputy Commissioner for Finance, Assessment, and Management issued a series of three memoranda to senior officials containing policies and procedures for reporting time spent and updating cost accounting principles associated with MMA planning and implementation efforts. These memoranda included accounting codes and procedures for tracking costs specific to implementing MMA activities, reporting formats for MMA-related costs, and updated cost accounting principles for LAE and MMA allocations. In May 2005 the Deputy Commissioner for Finance, Assessment, and Management issued a memorandum to the deputy commissioners and other key officers for all SSA offices, which reemphasized the need to properly account for MMA-related costs, provided updated cost accounting principles for costs associated with the planning and implementation of MMA, and requested SSA-wide assistance in accurately applying these principles. The updated cost accounting principles were included in a table that was attached to the memorandum. The table provided specific guidance for each office on the cost principle methodology to apply for specific types of costs in order to allocate the costs between the component’s regular resource allocations and the MMA funding. 
In addition, the memorandum requested that each component identify an individual who would aid in ensuring these principles were appropriately applied. We met with SSA staff to discuss the policies and procedures in place to disseminate these critical memoranda within SSA. We found that not all staff responsible for MMA activities were aware of the guidance. We were told that there was no specific guidance related to the dissemination of key management memoranda. The May 2005 memorandum was addressed to the deputy commissioners of each of SSA’s major offices, and it clearly stated the importance of applying the principles described. We obtained information and documentation from the individuals identified as the contact employee for each component to aid in the effort. We found that there was no mechanism in place to help ensure that all memoranda were disseminated to all relevant staff at SSA’s headquarters and field offices. Timely and thorough communication of operational procedures is critical in ensuring that an agency is able to perform its responsibilities effectively. Our Standards for Internal Control in the Federal Government state that for an entity to run and control its operations, it must have relevant, reliable, and timely communications relating to internal as well as external events. Information is needed throughout the agency to achieve its internal control objectives. Operating information is also needed to determine whether the agency is achieving its compliance requirements under various laws and regulations. Pertinent information should be identified, captured, and distributed in a form and time frame that permits people to perform their duties efficiently. Effective communications should occur in a broad sense with information flowing down, across, and up through the organization. As a result of the ineffective communication of MMA-related guidance, at least $4.6 million of costs were initially incorrectly allocated to MMA. 
SSA’s offices went through a process to review the allocation of the charges between MMA and LAE appropriation activities, and make appropriate adjustments. The offices identified numerous transactions and adjusted the transaction amounts to reflect the appropriate allocation of costs between MMA and the LAE appropriation. In total they identified transactions totaling more than $4.6 million that had not initially been properly allocated to LAE. However, SSA officials agreed that they had probably not identified all of the transactions that had not been properly allocated and should have been adjusted, such as purchase card purchases. In addition, during our review of the supporting documentation of MMA purchase card transactions, we found 48 purchases totaling $375,313 that had been charged entirely to MMA when a portion of those costs should have been allocated to other SSA programs. The purchases included more than 160 digital projectors, furniture, and other IT equipment such as routers, servers, and tape libraries. While some of these items were initially purchased to carry out MMA-related activities, such as beneficiary outreach, SSA realized that they would also be used for SSA programs other than MMA in the future. Therefore, according to the guidance on accounting for MMA-related expenses, the offices should have charged only one-sixth of the costs for equipment to MMA, with the remainder of the cost charged to the LAE appropriation account to be further allocated across other SSA programs. However, these costs were not allocated as described above and, as a result, SSA over-allocated these costs against the MMA appropriation by approximately $313,000. SSA had policies and procedures for purchasing assets and for maintaining accountability for those assets. Included were definitions of the types of assets for which SSA required the requestor to affix bar codes for identification and record in SSA’s asset inventory system. 
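The one-sixth rule and the resulting over-allocation can be checked with simple arithmetic (a sketch based on the figures reported above; the function itself is our illustration, not SSA's accounting system):

```python
# Sketch of the one-sixth equipment-cost allocation described above:
# only 1/6 of a shared equipment purchase should be charged to MMA,
# with the remaining 5/6 charged to the LAE appropriation.
# (Illustrative only -- not SSA's accounting system.)

def split_equipment_cost(total_cost):
    """Return (mma_share, lae_share) under the one-sixth allocation rule."""
    mma_share = total_cost / 6
    lae_share = total_cost - mma_share
    return mma_share, lae_share

# The 48 purchases totaling $375,313 were charged entirely to MMA;
# the LAE share is therefore the amount over-allocated to MMA.
mma_share, lae_share = split_equipment_cost(375313.00)
print(round(lae_share))  # 312761, i.e., the "approximately $313,000" above
```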
In addition, SSA issued an acquisition alert on the purchase of accountable sensitive and personal government property, which reminded purchase card holders that they were required to provide information to the requestor to ensure that the purchases were reported to their property management or custodial officers so that the property was properly bar-coded and entered into SSA’s property system. However, we found in our review of accountable property purchases that for 21 of 36 transactions that we tested, the purchasers were not aware of their responsibility to provide the requestor of the property with information on the property purchased. The 36 transactions we reviewed included a total of 3,254 accountable property items with a total cost of approximately $4.2 million. As of May 25, 2007, SSA had not properly identified 317 of these items with bar codes or included these assets in the asset inventory system. These items included assets such as information technology network servers and switches, digital projectors, and other electronic equipment. These items had a total cost of approximately $1.3 million. As a result, hundreds of assets purchased with MMA funds were not properly accounted for and SSA was unable to provide us with bar codes or evidence of inclusion of those assets in SSA’s asset inventory system. SSA also has guidance for credit card purchases applicable to micropurchases and purchases made by contracting officers. According to SSA policy on micro-purchasing, credit card purchases are limited to $2,500, must have funds pre-approved, and may not be used to split purchases into more than one transaction to avoid purchase limits. In addition, all credit card purchases must be documented, including written requests, approvals, and proof of purchase and delivery, and maintained by the cardholders for 3 years. 
Over the last several years, inspectors general and we have reported that some federal agencies do not have adequate internal control over their purchase card programs. Without effective internal control, management does not have adequate assurance that fraudulent, improper, and abusive purchases are being prevented or, if occurring, are being promptly detected with appropriate corrective actions taken. Supervisory approval of purchase requests is a principal means of ensuring that only valid transactions are initiated or entered into by persons acting within the scope of their authority, and the proper amounts are paid to contractors and appropriately charged. A supervisory review of purchase requests is also critical because a supervisor or approving official may be the only person other than the purchaser who would be in a position to identify an inappropriate purchase. Therefore, the supervisor’s or approving official’s review is a critical internal control for ensuring that purchases are appropriate and comply with agency regulations. However, we identified invoices that were paid for questionable amounts without the appropriate supervisory review and approval. Of the 147 purchase card transactions we reviewed, we found 45 transactions totaling $63,828 that did not have proper approval or did not have adequate support for the propriety of the purchase. While SSA’s micro-purchase card policy requires the purchaser to receive an approved purchase request before acquiring goods or services, we noted instances in which the supervisory review or approval was inadequate. We identified the following 18 transactions totaling $31,914 that were initiated and completed by cardholders without proper prior approval. For 8 transactions totaling $17,454, the approvals on the request authorization form occurred after the items had already been purchased by the cardholders. 
For 2 transactions totaling $2,163, the authorizing signatures were provided on the request authorization form before the request was signed by the requestor. For 2 transactions totaling $3,984, SSA could not provide evidence that the electronic signatures on the purchase requests represented valid authorizations. For 6 transactions totaling $8,313, SSA did not provide evidence that the purchase requests, which authorize the purchase of goods to be made, were approved. We also found instances where the supporting documentation did not provide evidence to support that the costs were related to SSA’s implementation of MMA. We found the following 27 transactions totaling $31,914 for which sufficient supporting evidence was not provided. For 6 transactions totaling $7,077, SSA did not provide any documentation to support the purchases. For 21 transactions totaling $24,837, SSA could not provide sufficient evidence of any relationship between the goods and services purchased and implementation of MMA. The items purchased included five wireless headsets, one big-screen television, remote control devices for PowerPoint presentations, and engraved items. In addition to being unable to relate these purchases to the implementation of MMA, we found no evidence these items were necessary purchases for SSA. In addition, we found evidence that one cardholder circumvented the $2,500 per transaction purchase authority by submitting four purchase requests for the purchase of audio and video media (CDs, DVDs, and VHS tapes) from a single vendor on the same day. As a result, the cardholder ultimately paid a total of $4,365 for four invoices, which was $1,865 above the $2,500 purchase authority limit. SSA had existing policies and procedures in place to track and report the total costs it incurred to implement MMA provisions and to maintain accountability and control over its MMA-related activities. 
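A control of the kind suggested by the split-purchase finding above, flagging same-day, same-vendor card purchases whose combined total exceeds the micro-purchase limit, could be sketched as follows (the data structures and transaction records are our invented illustration, not SSA's actual review process):

```python
# Sketch of a split-purchase check: group card transactions by
# (cardholder, vendor, date) and flag groups whose combined total
# exceeds the micro-purchase limit, even when each individual
# transaction is under it. (Illustrative only; records are invented.)

from collections import defaultdict

MICRO_PURCHASE_LIMIT = 2500.00

def flag_split_purchases(transactions):
    """Return groups of same-day, same-vendor purchases over the limit."""
    groups = defaultdict(list)
    for t in transactions:
        groups[(t["cardholder"], t["vendor"], t["date"])].append(t["amount"])
    return {
        key: sum(amounts)
        for key, amounts in groups.items()
        if sum(amounts) > MICRO_PURCHASE_LIMIT
    }

# Four same-day invoices from one vendor, mirroring the case above:
txns = [
    {"cardholder": "A", "vendor": "MediaCo", "date": "2005-03-01", "amount": 1200.00},
    {"cardholder": "A", "vendor": "MediaCo", "date": "2005-03-01", "amount": 1100.00},
    {"cardholder": "A", "vendor": "MediaCo", "date": "2005-03-01", "amount": 1065.00},
    {"cardholder": "A", "vendor": "MediaCo", "date": "2005-03-01", "amount": 1000.00},
]
print(flag_split_purchases(txns))  # combined $4,365 exceeds the $2,500 limit
```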
However, procedures and controls over purchase card transactions and asset accountability could be improved. Although purchase card transactions and accountable asset purchases represented a small percentage of the total MMA administrative costs that were paid with MMA funds, having effective controls in place to ensure the proper approval, support, and accountability for these transactions is essential to reduce the risk of improper purchases and improperly accounted for assets. To enhance SSA’s (1) ability to track the costs of program activities including MMA administrative costs, (2) controls over its review and approval processes for purchase card payments, and (3) tracking of its accountable assets, we recommend that the Commissioner of Social Security establish procedures to ensure better dissemination of policies and procedures to all relevant offices and staff; establish additional detailed procedures for a purchase card supporting documentation review and approval process to help ensure that purchase card payments are properly supported, allowable, and allocated; and reinforce existing policies and procedures for the purchase of accountable assets to ensure that accountable assets are bar coded, recorded in SSA’s asset inventory system, and inventoried periodically. In written comments reprinted in appendix II, SSA generally agreed with two of our recommendations, but disagreed with one recommendation. SSA also stated its belief that our report title, SSA Policies and Procedures Were in Place over MMA Spending, but Some Instances of Noncompliance Occurred, did not accurately reflect the findings in the report since it believed there was compliance with its policies and procedures. 
SSA also believes our characterization of the cause of SSA’s misallocation of costs to MMA as ineffective communication needs to be modified, and pointed out that there was no mention of the remaining misallocated credit card transactions representing only 0.06 percent of the total amount appropriated. SSA suggested the change in the report title because SSA had identified and corrected the $4.6 million initially incorrectly allocated to MMA and the remaining uncorrected instances were insignificant to the total amount appropriated. However, the areas where policies and procedures were not complied with also included misallocated credit card purchases not corrected (which represented more than 10 percent of the dollar value of the credit card purchases charged to MMA), and maintaining accountability over assets purchased with MMA funds. Therefore, we continue to believe that the title of our report accurately characterizes our findings. SSA agreed in theory with our recommendation to establish procedures to better disseminate policies and procedures, but stated its belief that the recommendation was too broad and did not accurately reflect what needed to be done. SSA stated that it will provide more specific instructions for distribution of costs in future guidance. SSA also believed that it had sufficient dissemination methods for acquisition-related issues. Our recommendation was intentionally broad to provide SSA management flexibility to determine the most appropriate steps it should take to ensure the complete dissemination of future guidance. To that end, SSA’s inclusion of more specific instructions in future guidance memoranda would be a corrective action sufficient to address our recommendation. 
SSA disagreed with our recommendation to establish additional detailed procedures for reviewing and approving supporting documentation for credit card purchases to help ensure that purchase card payments are properly supported, allowable, and allocated. SSA stated its belief that our recommendation was too broad and its guidance for contracting officers and micro-purchasers is sufficient. While SSA stated its belief that its guidance is sufficient and that its contracting officers are already aware of the file documentation required for purchases, we found that 45 of 147 (30 percent) credit card purchases we reviewed did not have proper authorization or complete documentation. This is an unacceptable error rate. We agree that our recommendation is broad, but it is intended to allow SSA the flexibility to determine the most appropriate actions needed to help ensure that there is sufficient evidence available to determine that all credit card purchases are properly approved, supported, allowable, and allocable. SSA pointed out that its current “remote” reviews of micropurchases made in the regions do not include a full file review, and that SSA is considering changing this process to include such reviews. SSA agreed with our recommendation to reinforce existing policies and procedures for accountable asset purchases to help ensure that those assets are bar coded, recorded in SSA’s asset inventory system, and inventoried periodically. SSA also identified its plan to include an acquisition topics website on one of its intranet pages by September 2007 and listed several actions undertaken since December 2006 to reinforce existing policies and procedures and to implement an improvement work plan. SSA also provided additional technical comments, which have been included in the report as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. 
At that time, we will send copies of this report to the Commissioner of SSA and other interested parties. Copies will also be made available at no charge on GAO’s Web site at http://www.gao.gov. If you have questions concerning this report, please call me at (202) 512-9471. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix III. To review the costs of the Social Security Administration’s (SSA) implementation of MMA activities, we reviewed the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 and discussed its impact with SSA to obtain an understanding of SSA’s responsibilities under the act. We also reviewed the policies and regulations SSA established to track and report MMA-related costs and other information pertaining to its program activities, as well as additional guidance provided to SSA officials so that they could track the costs of new MMA-related activities. We obtained cost and other information on SSA’s implementation activities from SSA officials in the agency’s headquarters in Baltimore, Maryland. In addition, we discussed specific cost information with various officials and staff in headquarters and field offices who were responsible for specific transactions. To obtain specific cost and program information, as well as information related to specific financial statement issues, we reviewed our reports and reports from SSA’s independent financial statement auditors. To determine how SSA expended the MMA funds to implement MMA activities, we obtained annual schedules of amounts charged to MMA for fiscal years 2004 (starting in December 2003), 2005, and 2006. We analyzed the expenditure data, sorted it by object class, and segregated the amounts charged to the Limitation on Administrative Expenses appropriation after the $500 million of MMA funding had been used by SSA. 
We discussed our detailed analysis with SSA budget and finance officials. We also compared expenditure data to audited Social Security Online Accounting and Reporting System (SSOARS) data and determined that the data were sufficiently reliable for the purposes of this report. To determine what procedures SSA had in place over the MMA funds, we reviewed MMA and SSA policies, procedures, and other guidance and interviewed key SSA officials for information on the contract procurement, payroll, cost accounting, budget, and payment processes to obtain a thorough understanding of each process. We also conducted follow-up discussions to verify our understanding of all key processes related to spending MMA funds. We reviewed SSA’s independent financial statement auditors’ reports and audit documentation to determine the level of audit coverage provided in the payroll; property, plant, and equipment; and cost accounting areas, as well as any internal control weaknesses identified. On the basis of the clean audit opinion on SSA’s financial statements and the absence of related findings, we did not perform testing on the payroll and cost accounting areas. As a result, we focused our testing of transactions on contractor and vendor payments. To determine whether SSA’s contractor expenditures were properly supported as valid uses of MMA funds, we selected and tested a monetary unit sample of 59 transactions totaling $82.6 million from a population of 20,736 transactions totaling $123.5 million paid from January 2004 through February 2006. We found no exceptions during testing. We also used various nonstatistical methods (data mining, document analysis, and other forensic techniques) to select 208 transactions and test them for adequate supporting documentation of requests, authorization, evidence of purchase and receipt, and applicability to MMA. We discussed all testing exceptions with the appropriate SSA officials and staff involved with the specific transactions. 
We conducted our work in Washington, D.C., and Baltimore, Md., from March 2006 through April 2007 in accordance with generally accepted government auditing standards. Appendix II: Comments from the Social Security Administration COMMENTS ON THE GOVERNMENT ACCOUNTABILITY OFFICE (GAO) DRAFT REPORT, “SOCIAL SECURITY ADMINISTRATION: POLICIES AND PROCEDURES WERE IN PLACE OVER MMA SPENDING, BUT INSTANCES OF NONCOMPLIANCE OCCURRED” (GAO-07-986) Thank you for the opportunity to review and comment on the draft report. We feel that the title “Policies and Procedures Were in Place Over MMA Spending, but Instances of Noncompliance Occurred,” does not accurately reflect the findings in the report and would recommend that it be changed to “Policies and Procedures Were In Place Over MMA Spending And There Was Compliance”. Also, the summary page and pages 3, 12, and 13 of the report state that “SSA subsequently identified and corrected at least $4.6 million of amounts misallocated between MMA and other SSA program activities, but had not corrected approximately $313,000 misallocated credit card purchase transactions.” The fact is that $4.6 million was initially not correctly allocated to MMA. However, the original plan on allocation was to review the ongoing transactions to assure policy consistency. That being said, the $4.6 million was correctly allocated as planned. As a result, the statement on page 10 that the “ineffective communications resulted in millions of dollars of costs being misallocated to MMA” needs to be appropriately modified. Also, no mention is made that the $313,000 in misallocated credit card purchase transactions represents only 0.06 percent of the total amount appropriated. Our comments on the draft report recommendations, along with technical revisions to assist in the clarity of the report, are as follows: Establish procedures to ensure better dissemination of policies and procedures to all relevant offices and staff. We agree in theory. 
As the recommendation exists, we do not feel it accurately reflects what needs to be done and find it too broad. In reference to memoranda regarding accounting for costs, rather than establishing specific procedures, which would require interpretation by staff, future memoranda providing guidance similar to that issued for MMA implementation will contain specific instructions for distribution. With respect to acquisition policy, we believe that we have already established sufficient dissemination methods for acquisition-related issues. Our Office of Acquisition and Grants (OAG) uses Acquisition Alerts to disseminate policy related to micro-purchasers (and, when noted, to all persons with delegated acquisition authority). Additionally, we have established an email distribution list consisting of project officers, who are requestors for purchases at various dollar levels, in order to better disseminate policies and procedures applicable to them. Establish additional detailed procedures for the purchase card supporting documentation review and approval process to help ensure that purchase card payments are properly supported, allowable, and allocated. We disagree. With respect to the policies and procedures directed to micro-purchasers and contracting officers (not requestors), we do not concur with this recommendation and find it too broad. Contracting officers (COs) are already aware of the file documentation required for their purchases, whether paid with the purchase card or otherwise. We believe that our current guidance, “Micro-purchasing in SSA,” contains sufficient information for micro-purchasers and their approving officials regarding file documentation and retention. Micro-purchasers and approving officials must take this course prior to being appointed and, beginning in fiscal year 2008, they will be required to take refresher training every three years. 
Additionally, OAG and regional COs conduct acquisition management reviews (AMRs) of micro-purchase activity. When OAG conducts reviews of purchases made by micro-purchasers in Headquarters, and when regional COs conduct on-site AMRs within their regions, we review the purchase log and the file documentation associated with the purchases. OAG also conducts “remote” AMRs of purchases made in the regions. These remote reviews currently do not entail a review of an entire contract file. We are currently considering altering this process to request, for select cases, that the cardholder under review send us a copy of all the file documentation related to the purchase being reviewed. Regarding the issue of proper allocation of purchase card transactions, we will review this area with the appropriate component officials. Reinforce existing policies and procedures for accountable assets purchased to help ensure that accountable assets are bar coded, recorded in SSA’s asset inventory system, and inventoried periodically. We agree. As previously stated, we will add this policy to the “Acquisition Special Topics” webpage on the OAG intranet page. A notice and other cross-references to this permanent location of the policy will be disseminated via Acquisition Alerts and Acquisition Updates. We anticipate completing this by the end of September 2007. With regard to accountable assets and related policies, since December 2006, our Office of Publications and Logistics Management has undertaken actions to reinforce existing policies and procedures and to implement an improvement work plan. These include: Entering into collaborative agreements with our Operations and Systems components to establish mechanisms for correcting Property Management issues and policies; Developing a new User’s Guide for the Sunflower Asset System (SFA), our Property Management System. 
The new guide contains not only the “how” but also the “why” to give users a better understanding of Property Management; Reviewing and revising internal policy guides, Administrative Instructions Manual Systems (AIMS Guides); Developing stronger lines of communication with employees responsible for property management (e.g., establishing a Quarterly Property Management Teleconference, updating and distributing newsletters on Property Management, offering Sunflower training sessions, updating websites, etc.); Updating and clarifying the listing of “Items to be Bar-coded”; and Continuing to reinforce the importance of asset accountability and management. Finally, for the 317 assets noted on page 13 of the report, we have requested that GAO provide us specific asset details to assure our inventory system includes the required information. The following team members made key contributions to this report: Steven R. Haughton, Assistant Director; William (Ed) Brown; Sharon Byrd; Rich Cambosos; Marcia Carlsen; Lisa Crye; Leslie C. Jones; Brent J. LaPointe; Margaret Mills; and Robert Martin.
The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) created a voluntary outpatient prescription drug benefit as part of the Medicare program, and appropriated up to $500 million for the Social Security Administration (SSA) to fund the start-up administrative costs of meeting its responsibilities to implement MMA. SSA was given a great deal of discretion in how to use the funds, and the act provided little detail on how the funds were to be spent. You asked us to review SSA's costs for implementing MMA to determine (1) how the MMA funds were expended, (2) what procedures SSA has in place over the use of those funds, and (3) how SSA complied with those procedures related to contractor and vendor payments. SSA spent the $500 million in MMA funds from December 2003 through January 2006 to implement activities outlined in MMA. The majority of costs paid with MMA funds consisted of personnel-related expenses, contractors, and indirect costs. More than half of the funds were spent on payroll for staff hours used on MMA activities in SSA headquarters and field offices. Once the $500 million was spent, SSA began to use its general appropriation to fund the remaining costs of implementing MMA activities. SSA used its cost analysis system to track the total costs of its implementation of MMA activities. As of February 20, 2007, SSA had completed implementation of 16 of the 22 tasks for the six provisions under the act. SSA had agencywide policies and procedures in place for its cost tracking and allocation, asset accountability, and invoice review processes. It also established specific guidance to assign and better allocate SSA's costs in implementing MMA. There were some instances, however, in which SSA did not comply with these policies and procedures. SSA did not effectively communicate the specific MMA-related guidance to all affected staff. 
SSA subsequently identified and corrected at least $4.6 million of costs that initially were incorrectly allocated to MMA, but had not corrected approximately $313,000 in misallocated credit card purchase transactions. In addition, GAO found instances where accountable assets purchased with MMA funds, such as electronic and computer equipment, were not being properly tracked by SSA in accordance with its policies, and instances where purchase card transactions were not properly supported. Although purchase card transactions and accountable asset purchases represented a small percentage of total MMA costs, proper approval and support for these types of transactions is essential to reduce the risk of improper payments.
In 1996, the United Nations and Iraq established the Oil for Food program to address growing concerns about the humanitarian situation in Iraq after international sanctions were imposed in 1990. The program’s intent was to allow the Iraqi government to use the proceeds of its oil sales to pay for food, medicine, and infrastructure maintenance, and at the same time prevent the regime from obtaining goods for military purposes. From 1997 through 2002, Iraq sold more than $67 billion in oil through the program and issued $38 billion in letters of credit to purchase commodities. The Oil for Food program initially permitted Iraq to sell up to $1 billion worth of oil every 90 days to pay for humanitarian goods. Subsequent U.N. resolutions increased the amount of oil that could be sold and expanded the humanitarian goods that could be imported. In 1999, the Security Council removed all restrictions on the amount of oil Iraq could sell to purchase civilian goods. The United Nations and the Security Council monitored and screened contracts that the Iraqi government signed with commodity suppliers and oil purchasers, and Iraq’s oil revenue was placed in a U.N.-controlled escrow account. In May 2003, U.N. resolution 1483 requested that the U.N. Secretary General transfer the Oil for Food program to the Coalition Provisional Authority by November 2003. The United Nations allocated 59 percent of the oil revenue for the 15 central and southern governorates, which were controlled by the central government; 13 percent for the 3 northern Kurdish governorates; 25 percent for a war reparations fund for victims of Iraq’s invasion of Kuwait in 1990; and 3 percent for U.N. administrative costs, including the costs of weapons inspectors. In central and southern Iraq, the Iraqi government used the proceeds from its oil sales to purchase food, medicines, and infrastructure supplies and equipment. 
The Iraqi government negotiated directly with suppliers and distributed food in accordance with its Public Distribution System, a food ration basket for all Iraqis. In northern Iraq, nine U.N. agencies implemented the program, primarily through constructing or rehabilitating schools, health clinics, power generation facilities, and houses. Local authorities submitted project proposals to the United Nations for consideration and implementation. The Iraqi government in Baghdad procured bulk food and medicines for the northern region, but the World Food Program and the World Health Organization were responsible for ensuring the delivery of these items. From 1997 to 2002, the Oil for Food program was responsible for more than $67 billion of Iraq’s oil revenue. With a large portion of this revenue, the United Nations provided food, medicine, and services to 24 million people and helped the Iraqi government supply goods to 24 economic sectors. In February 2002, the United Nations reported that the Oil for Food program had considerable success in sectors such as agriculture, food, health, and nutrition by arresting the decline in living conditions and improving the nutritional status of the average Iraqi citizen. Prior to the creation of OIOS, the United States and other member states had expressed concern about the ability of the United Nations to conduct internal oversight. In 1994, the General Assembly established OIOS to conduct audits, evaluations, inspections, and investigations of U.N. programs and funds. Its mandate reflects many characteristics of U.S. inspector general offices in purpose, authority, and budget. Since its inception, OIOS has submitted its audit reports to the head of the unit being audited for action and only forwarded to the Secretary General those reports in which program officials disagreed with audit recommendations. It also provided certain reports to the General Assembly. 
However, in December 2004, the General Assembly passed a resolution requiring OIOS to publish the titles and summaries of all audit reports and provide member states with access to these reports on request. Before the OIOS was created in July 1994, the United States and other U.N. member states, the U.S. Congress, and the Government Accountability Office (GAO) had expressed concern about the United Nations’ management of its resources and had criticized the inadequacies of its internal oversight mechanisms. In response, the Secretary General established the Office for Inspections and Investigations in August 1993 under the leadership of an Assistant Secretary General. However, member states—primarily the United States—wanted a more autonomous oversight body with greater authority. In November 1993, the U.S. Permanent Representative to the United Nations proposed the establishment of an “office of inspector general” to the General Assembly. The office would be headed by an “inspector general” who, although an integral part of the Secretariat, would carry out his/her responsibilities independently of the Secretariat and all U.N. governing bodies. According to the proposal, the office would support member states and the Secretary General by providing independent advice based on an examination of all activities carried out at all U.N. headquarters and field locations financed by the regular budget, peacekeeping budgets, and voluntary contributions. At the same time, the new office would have external reporting responsibilities. In April 1994, Congress enacted Public Law 103-236, which required certain funds to be withheld from the United Nations until the President certified that it had established an independent office of inspector general to conduct and supervise objective audits, investigations, and inspections. The legislation stated, among other things, that the inspector general should have access to all records, documents, and offices related to U.N. 
programs and operations. The legislation also called for the United Nations to have procedures to (1) ensure compliance with the inspector general office’s recommendations and (2) protect the identity of, and prevent reprisals against, any staff members making a complaint, disclosing information, or cooperating in any investigation or inspection by the inspector general’s office. After a series of negotiations among member states, including the United States, a compromise was reached. The General Assembly, in July 1994, approved a resolution creating OIOS within the U.N. Secretariat. OIOS’ mandate reflects many of the characteristics of U.S. inspector general offices in purpose, authority, and budget. For example, OIOS staff have access to all records, documents, or other material assets necessary to fulfill their responsibilities. OIOS’ reporting mandate calls for it to submit reports to the Secretary General and the General Assembly. Since its inception, OIOS has generally submitted its reports to the head of the unit audited. If program officials disagreed with the report’s recommendations, the report was submitted to the Secretary General. However, beginning in 1997, OIOS began listing all its reports in its annual reports to the General Assembly and briefing representatives of member states interested in a particular report. It also provided certain reports of interest to the General Assembly. Further transparency over OIOS audit reports occurred in December 2004 when the General Assembly approved a resolution calling for OIOS to include in its annual and semi-annual reports the titles and brief summaries of all OIOS reports issued during the reporting period. OIOS was also directed to provide member states with access to original versions of OIOS reports upon request. As of June 2004, OIOS had 180 posts, including 124 professional staff and 56 general service staff. 
Staff work in four operational divisions: Internal Audit Divisions I and II; the Monitoring, Evaluation, and Consulting Division; and the Investigations Division. The 58 audit reports released on January 9, 2005, reflect the work of Internal Audit Division I, which contained a separate unit for Iraq-related work. For 2004, OIOS’ resources totaled $23.5 million. OIOS generally conducts four types of activities: audits, evaluations, inspections, and investigations. Audits determine if internal controls provide reasonable assurance of the integrity of financial and operational information and whether rules are followed and resources are safeguarded. Audits also identify ways to improve the efficient use of resources and the effectiveness of program management. OIOS’ internal audit divisions adhere to the Standards for the Professional Practice of Internal Auditing in the United Nations. These standards regulate issues related to independence, objectivity, proficiency, management, and the code of ethics and rules of conduct for auditors. Inspections address mandates, management issues, or areas of high risk, make recommendations, and are generally submitted through the Secretary General to the General Assembly. Evaluations assess the relevance, efficiency, effectiveness, and impact of a program’s outputs and activities against its objectives. These reports are addressed to the intergovernmental body—normally the Committee for Program and Coordination or the General Assembly—that requested the evaluation. Investigations staff follow up on reports of possible violations of rules or regulations, mismanagement, misconduct, waste of resources, or abuses of authority. OIOS also monitors program performance and prepares the Program Performance Report of the Secretary General, which is submitted to the General Assembly every 2 years. The complexity and diversity of the U.N. Oil for Food program and associated risks called for adequate oversight coverage. 
In 2000, OIOS established the Iraq Program Audit Section within the Internal Audit Division. The Independent Inquiry Committee report stated that the number of auditors assigned to Oil for Food audits increased from 2 in 1996 to 6 in 2002 and 2003. OIOS’ audit responsibilities extended to the following entities involved in Iraq operations: Office of the Iraq Program (OIP) in New York; U.N. Office of the Humanitarian Coordinator in Iraq; U.N. Compensation Commission (UNCC); U.N. Monitoring, Verification, and Inspection Commission; U.N. Human Settlement Program (U.N.-Habitat) Settlement Rehabilitation Program in northern Iraq; U.N. Guards Contingent in Iraq; and U.N. Department of Management. The OIOS audits revealed a number of deficiencies in the management of the Oil for Food program and its assets and made numerous recommendations to correct these deficiencies. The audits focused primarily on Oil for Food activities in northern Iraq and at the U.N. Compensation Commission. OIOS also conducted audits of the three U.N. contracts for inspecting commodities coming into Iraq and for independent experts to monitor Iraq’s oil exports. We identified a total of 702 findings contained in the reports across numerous programs and sectors. Weaknesses and irregularities were common in planning and coordination, procurement, and asset and cash management. Appendix I contains the summary data of our analysis and a description of our scope and methodology. Our summary below focuses on key findings for the areas that received the most audit coverage— activities in northern Iraq and the U.N. Compensation Commission. We also highlight findings from the audits of the inspections contracts. The OIOS audits that reviewed U.N. activities in northern Iraq found problems with planning and coordination, procurement, and asset and cash management. In 2004, OIOS reported that U.N.-Habitat had not adequately coordinated with other U.N. 
agencies in providing essential services for its housing projects. For example, U.N.-Habitat provided high-capacity generators but had not contacted the U.N. Development Program—the entity responsible for the power sector—to provide electric power connections. OIOS also found that about 3,200 houses were unoccupied for extended periods due to a lack of coordination with agencies providing complementary services. An August 2000 report noted a lack of planning that resulted in the questionable viability of some Oil for Food projects in northern Iraq. For example, six diesel generators were procured in an area where diesel fuel was not readily available. In addition, local authorities would not accept a newly constructed health facility subject to flooding. A December 2000 report also noted that highways and a sports stadium were built in violation of criteria established by the Security Council and the Iraqi government. In November 2002, OIOS reported that almost $38 million in procurement of equipment for the U.N.-Habitat program was not based on a needs assessment. As a result, 51 generators went unused from September 2000 to March 2002, and 12 generators meant for project-related activities were converted to office use. In addition, OIOS reported that 11 purchase orders totaling almost $14 million showed no documentary evidence supporting the requisitions. In 2002, OIOS found that the U.N.-Habitat program lacked a proper asset inventory system and that no policies and procedures governing asset management were evident. As a result, the value of assets was not readily available. In one case, $1.6 million in excess construction material remained after most projects were complete. OIOS also reported that a lack of effective cash management policies meant that project funds were misused or put at risk. In a March 2000 audit, OIOS reported that the U.N. 
Development Program’s country office used $500,000 in project funds for office expenses without authorization or proper documentation. A February 2002 audit found that the office in Erbil put at risk $600,000 to $800,000 in cash due to a lack of cash management policies. The U.N. Compensation Commission (UNCC), a subsidiary unit of the Security Council, was established in 1991 to process claims and provide compensation for losses resulting from Iraq’s invasion and occupation of Kuwait. Compensation is payable from a special fund that initially received 30 percent of the proceeds from Iraqi oil sales. The claims are resolved by panels, each of which is made up of three commissioners who are experts in law, accounting, loss adjustment, assessment of environmental damage, and engineering, according to UNCC. The UNCC received more than 2.6 million claims for death, injury, loss of or damage to property, commercial claims, and claims for environmental damage resulting from Iraq’s invasion of Kuwait in 1990. As of December 2004, all but about 25,000 of these claims had been resolved, and almost $19 billion had been paid in compensation, according to UNCC. In a July 2002 risk assessment of UNCC, OIOS found that controls to prevent employee fraud were marginal, operations required close monitoring to prevent possible collusion, possibilities existed for illegal activities, and payment processing controls were inadequate. The report concluded that the overcompensation of claims and irregular or fraudulent activities could lead to significant financial risks. OIOS audits identified weaknesses in UNCC’s management of claims processing and payments, resulting in recommended downward adjustments of more than $500 million. For example, in a September 2002 audit, OIOS found potential overpayments of $419 million in compensation awarded to Kuwait. 
OIOS identified duplicate payments, calculation errors, insufficient evidence to support losses, and inconsistent application of claims methodology. In a December 2004 audit, OIOS found that using the exchange rate against the U.S. dollar on the date of the claimed loss, rather than the rate on the date of payment, as U.N. financial rules and regulations require, had resulted in substantial overpayments. OIOS estimated that the likely overpayments were about $510 million. Earlier, in 2002, UNCC had challenged OIOS’ audit authority. In a legal opinion on OIOS’ authority requested by UNCC, the U.N. Office of Legal Affairs noted that the audit authority extended to computing the amounts of compensation but did not extend to reviewing those aspects of the panels’ work that constitute a legal process. However, OIOS disputed the legal opinion, noting that its mandate was to review and appraise the use of financial resources of the United Nations. OIOS believed that the opinion would effectively restrict any meaningful audit of the claims process. As a result of the legal opinion, UNCC did not respond to many OIOS observations and recommendations, considering them beyond the scope of an audit. According to OIOS, UNCC accepted about $3.3 million of the more than $500 million in recommended claims reductions. On the audit of $419 million in potential overpayments to Kuwait, OIOS noted that it received the workpapers to conduct the audit 8 days after the award was made. To help ensure that the proceeds of Iraq’s oil sales were used for humanitarian and administrative purposes, the United Nations contracted with companies to monitor Iraq’s oil exports and commodity imports. OIOS audits of these contracts revealed procurement problems and poor contract management and oversight by OIP. The United Nations contracted with Saybolt Eastern Hemisphere B.V. to oversee the export of oil and oil products from Iraq through approved export points. 
At the time of the audit report in July 2002, the estimated total value of the contract was $21.3 million, with an annual contract value of $5.3 million. OIOS found that OIP had made no inspection visits to Iraq and posted no contract management staff in Iraq. However, OIP had certified that Saybolt’s compliance with the contract was satisfactory and approved extensions to the contract. In addition, OIOS estimated that the United Nations paid $1 million more than was necessary because equipment costs were already built into the inspectors’ daily fee structure. OIOS asserted that these costs should have been charged as a one-time expenditure. OIOS recommended that OIP recover the $1 million paid for equipment and that future contracts provide for equipment purchases as one-time expenditures. OIP did not respond to the auditors’ first recommendation and did not agree with the second recommendation. The first contract for inspecting imported commodities was with Lloyd’s Register Inspection Ltd.; the initial 6-month contract was for $4.5 million, and the total value of the contract increased to more than $25 million by July 1999. Lloyd’s agents were to monitor, verify, inspect, test, and authenticate humanitarian supplies imported into Iraq at three entry points. In July 1999, OIOS found deficiencies in OIP’s oversight of Lloyd’s contract. OIP had certified Lloyd’s invoices for payment without any on-site verification or inspection reports. OIOS reported that Lloyd’s used suppliers’ manifests to authenticate the weight of bulk cargo and did not independently test the quality of medicines and vaccines supplied. In responding to the audit’s findings, OIP rejected the call for on-site inspections and stated that any dissatisfaction with Lloyd’s services should come from the suppliers or their home countries. OIP awarded a new contract to Cotecna Inspection S.A. 
Similar to Lloyd’s, Cotecna was to verify that the description, value, quantity, and quality of supplies arriving in Iraq were in accordance with the criteria established by the sanctions committee. In April 2003, OIOS cited concerns about procurement issues and amendments and extensions to Cotecna’s original $4.9 million contract. Specifically, OIOS found that, 4 days after the contract was signed, OIP increased Cotecna’s contract by $356,000. The amendment included additional costs for communication equipment and operations that OIOS asserted were included in the original contract. OIP agreed to amend future contracts to ensure that procurement documents include all requirements, thus eliminating the need to amend contracts. OIOS’ audits and summary reports revealed a number of deficiencies in the management and internal controls of the Oil for Food program, particularly in northern Iraq. The reports also identified problems in UNCC’s claims processing resulting in significant overpayments. However, OIOS did not examine certain headquarters functions responsible for overseeing the humanitarian commodity contracts for central and southern Iraq. Limitations on OIOS’ resources and reporting hampered its coverage of the Oil for Food program and its effectiveness as an oversight tool. OIOS did not examine certain headquarters functions—particularly OIP’s oversight of the contracts for central and southern Iraq that accounted for 59 percent or almost $40 billion in Oil for Food proceeds. The Iraqi government used these funds to purchase goods and equipment for central and southern Iraq and food and medical supplies for the entire country. As we reported in 2004, the Iraqi government’s ability to negotiate contracts directly with the suppliers of commodities was an important factor in enabling Iraq to levy illegal commissions. OIP was responsible for examining contracts for price and value at its New York headquarters. In addition, the U.N. 
sanctions committee reviewed contracts primarily to remove dual-use items that Iraq could use in its weapons programs. However, it remains unclear which U.N. entity reviewed Iraq contracts for price reasonableness. OIOS did not assess the humanitarian contracts or OIP’s roles and responsibilities and its relationship with the sanctions committee. OIOS believed that these contracts were outside its purview because the sanctions committee was responsible for their approval. OIP management also steered OIOS toward program activities in Iraq rather than headquarters functions where OIP reviewed the humanitarian contracts. Even when OIOS requested funds to conduct an assessment of OIP operations, the funds were denied. For example, in May 2002, OIP’s executive director did not approve a request to conduct a risk assessment of OIP’s Program Management Division, citing financial reasons. The Committee also noted that the practice of allowing the heads of programs to fund internal audit activities can lead to high-risk areas being excluded from internal audit examination. The Committee therefore recommended that the Internal Audit Division’s budgets and staffing levels for all activities be submitted directly to the General Assembly. In addition, OIOS assigned only 2 to 6 auditors to cover the Oil for Food program. The Committee found that this level of staffing was low compared to OIOS’ oversight of peacekeeping operations. In addition, the U.N. Board of Auditors indicated that 12 auditors were needed for every $1 billion in U.N. expenditures. The Committee concluded that the Oil for Food program should therefore have had more than 160 auditors at its height in 2000. However, the Committee found no instances in which OIOS communicated broad concerns about insufficient staff levels to U.N. management. OIOS also encountered problems in its efforts to widen the distribution of its reporting beyond the head of the agency audited.
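The Committee’s staffing comparison follows from simple arithmetic using the Board of Auditors’ benchmark. The sketch below uses only the figures reported above (12 auditors per $1 billion in expenditures; a conclusion of more than 160 auditors); the implied expenditure level is derived from those two figures, not reported in the audits.

```python
# Benchmark cited by the U.N. Board of Auditors: 12 auditors are needed for
# every $1 billion in U.N. expenditures. OIOS assigned only 2 to 6 auditors
# to the Oil for Food program.
AUDITORS_PER_BILLION = 12

def auditors_needed(expenditures_billion):
    """Auditors warranted under the Board of Auditors benchmark."""
    return AUDITORS_PER_BILLION * expenditures_billion

# The Committee's conclusion that more than 160 auditors were warranted at the
# program's height in 2000 implies program expenditures of at least 160 / 12,
# i.e., roughly $13.3 billion that year (a derived figure, not a reported one).
implied_expenditures_billion = 160 / AUDITORS_PER_BILLION
```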
In August 2000, OIOS proposed to send its reports to the Security Council. However, the Committee reported that the OIP director opposed this proposal, stating that it would compromise the division of responsibility between internal and external audit. In addition, the U.N. Deputy Secretary-General denied the request, and OIOS subsequently abandoned any efforts to report directly to the Security Council. The internal audits provide important information on the management of the Oil for Food program, particularly in the north, and on the management of the commission that pays claims for war damages with proceeds from Iraq’s oil sales—two areas that have received little public attention. The reports also broaden the findings of the Independent Inquiry Committee’s report, particularly with respect to the inadequacies in the award of the oil and customs inspections contracts. However, many unanswered questions remain about the management and failings of the Oil for Food program, particularly the oversight roles of OIP and the Security Council’s sanctions committee. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or the other Subcommittee members may have. We reviewed the 58 reports released by the Independent Inquiry Committee to determine the scope of the audits and the issues addressed in the reports’ findings and recommendations. We created a database of information from 50 reports to identify the program elements that the audits reviewed, the findings of each audit, and the recommendations for improvement. To identify audit scope, we identified the extent to which the audits addressed Oil for Food headquarters operations, U.N. Secretariat Treasury operations in New York, U.N. operations in northern Iraq, and the U.N. Compensation Commission, which disburses claims for damage caused by the 1991 Persian Gulf War.
To determine the range of issues addressed by the audits, we identified the kinds of issues raised by the findings and determined that the audits addressed the following issues: (1) procurement and contract management and oversight; (2) financial management, including financial controls, management of funds, and procedures for payments; (3) asset management, including inventory and the management of fixed assets such as vehicles, buildings, and supplies; (4) personnel and staffing; (5) project planning, coordination, and oversight; (6) security; and (7) information technology. We established a protocol to identify findings for data input, and we identified specific recommendations in the audit reports. To ensure consistency of data input, a database manager reviewed all input, and all data input was independently validated. Table 1 presents the summary of overall findings and recommendations in OIOS reports. Table 2 presents these findings by area of U.N. operation.
The Oil for Food program was established by the United Nations and Iraq in 1996 to address concerns about the humanitarian situation after international sanctions were imposed in 1990. The program allowed the Iraqi government to use the proceeds of its oil sales to pay for food, medicine, and infrastructure maintenance. Allegations of fraud and corruption have plagued the Oil for Food program. As we have testified and others have reported, the former regime gained illicit revenues through smuggling and through illegal surcharges and commissions on Oil for Food contracts. The United Nations' Independent Inquiry Committee was established in April 2004 to investigate allegations of corruption and misconduct within the Oil for Food program and its overall management of the humanitarian program. In January 2005, the Committee publicly released 58 internal audit reports conducted by the United Nations' Office of Internal Oversight Services (OIOS). GAO (1) provides information on OIOS' background, structure, and resources; (2) highlights the findings of the internal audit reports; and (3) discusses limitations on the audits' coverage. Before the United Nations established OIOS, the United States and other member states had criticized its lack of internal oversight mechanisms. In 1993, the United States proposed the establishment of an inspector general position within the United Nations and withheld U.S. funds until such an office was established. In 1994, the General Assembly created OIOS and tasked it with conducting audits, investigations, inspections, and evaluations of U.N. programs and funds. OIOS has generally provided audit reports to the head of the U.N. agency or program subject to the audit but also provided certain reports of interest to the General Assembly. However, this limited distribution hampered member states' efforts to oversee important U.N. programs. 
In December 2004, the General Assembly directed OIOS to publish the titles and summaries of all audit reports and provide member states with access to these reports on request. The audit reports released in January 2005 found deficiencies in the management of the Oil for Food program and made numerous recommendations. We identified 702 findings in these reports. Most reports focused on U.N. activities in northern Iraq, the operations of the U.N. Compensation Commission, and the implementation of U.N. inspection contracts. In the north, OIOS audits found problems with coordination, planning, procurement, asset management, and cash management. For example, U.N. agencies had purchased diesel generators in an area where diesel fuel was not readily available and constructed a health facility subject to frequent flooding. An audit of U.N.-Habitat found $1.6 million in excess construction material on hand after most projects were complete. OIOS audits of the U.N. Compensation Commission found poor internal controls and recommended downward adjustments totaling more than $500 million. The United Nations asserted that OIOS had limited audit authority over the Commission. Finally, OIOS audits of the contractors inspecting Iraq's oil exports and commodity imports found procurement irregularities and limited U.N. oversight. OIOS' audits and summary reports revealed deficiencies in the management and internal controls of the Oil for Food program. However, OIOS did not examine certain headquarters functions--particularly OIP's oversight of the contracts for central and southern Iraq that accounted for 59 percent or almost $40 billion in Oil for Food proceeds. The Independent Inquiry Committee noted several factors that limited OIOS' scope and authority. First, OIOS did not believe it had purview over the humanitarian contracts because the sanctions committee approved the contracts. Second, the U.N. 
Office of the Iraq Program steered OIOS toward programs in the field rather than at headquarters. Third, the Office of the Iraq Program refused to fund an OIOS risk assessment of its program management division. Finally, U.N. management and the Office of the Iraq Program prevented OIOS from reporting its audit results directly to the Security Council.
The JCP and the GPO have prominent roles in federal government printing. The oldest joint committee in Congress, the JCP was established in 1846 and is composed of five Representatives and five Senators. It oversees the operation of GPO, which by law is the principal printing organization for federal agencies. The JCP exercises oversight over government printing and is authorized to “use any measure it considers necessary to remedy neglect, delay, duplication or waste in the public printing and binding, and the distribution of Government publications.” To assist in carrying out its responsibilities, in 1990 the JCP updated the Government Printing and Binding Regulations, which require agencies to report semi-annually to the JCP any in-house printing and any printing that exceeds 5,000 production units of a single page or 25,000 production units in the aggregate of multiple pages. GPO was established in 1861 to print government documents and disseminate them to the public. Title 44 of the U.S. Code, Public Printing and Documents, provides that all printing for Congress, the executive branch, and the judiciary (except the Supreme Court) is to be done by or contracted by GPO, unless otherwise exempted. GPO prints at its in-house plant in Washington, D.C., and at one other secure facility, used only for passports and smart cards, but it contracts with private printers to produce the majority of printing for the federal government. At its in-house plant, GPO prints primarily congressional documents, such as the Congressional Record. GPO offers different programs and services such as GPO Express, which allows agencies to print directly to FedEx Office and other private sector vendors, and the GPO Simplified Purchase Agreement Program, which provides a list of pre-approved private printers and stated prices for federal agencies to use when selecting a printer.
GPO receives funding through direct appropriations ($126 million in fiscal year 2012), collection of an approved fee-for-service from other federal agencies for print procurement, and the sale of publications to the public. GPO’s activities also include providing public access to official government documents through the FDLP and GPO’s Federal Digital System (FDsys) website. The Superintendent of Documents, who heads GPO’s Information Dissemination division, is responsible for collecting government products and disseminating them to the public through a network of approximately 1,200 depository libraries and online catalogues. GPO evaluates documents to identify those that contain information on U.S. government activities or are important reference publications, and should therefore be disseminated to a depository library. Title 44 requires that federal agencies make their publications available to the Superintendent of Documents for cataloging and distribution through the FDLP. With the onset of digital publishing, the FDLP has been transformed into a primarily electronic program, obtaining and distributing federal documents digitally. GPO’s FDsys website, which offers an online catalogue of official government digital documents, also aids in the collection and preservation of government publications and provides access to the public. Federal government printing definitions are outlined in statute and regulation. Title 44 at Section 501 Note defines printing as the processes of composition, platemaking, presswork, duplicating, silkscreen processes, and binding. The 1990 JCP Regulations include the definitions of key printing terms applicable at that time. Prior to 1994, Title 44 did not include duplicating in the definition of printing, but in 1994, the Title 44 definition was updated to include duplicating (i.e., printing done on high-speed duplication machines) as a printing process. However, the 1990 JCP Regulations were not updated to include duplicating.
In our review, we identified two main categories of agency printing used today—ink-based, and toner-based or ink-jet-based: Ink-Based Printing (also referred to as conventional printing) is a water and ink-based process that uses machines called printing presses to produce material such as publications and other documents. Presses use plates to transfer images onto a final paper document. Conventional presses are relatively costly to set up, making the first impression of a document expensive, but costs decline as the volume of copies increases, which makes it cost-effective for high-volume print jobs. Ink-based printing uses offset or digital printing presses. Offset printing presses typically have many open areas where the machinery can be manually adjusted, and can take up a large amount of space (see fig. 1). A digital printing press also uses plates to transfer images and ink, but the process is computerized and the press is mostly closed because it does not need the same level of manual adjustments (see fig. 2). Toner-Based Printing (also referred to as duplication) is a process in which machines use heat-fused toner or ink-jet technology to transfer an image to paper. Toner-based printing is typically done on one type of equipment, a high-speed duplication machine, also referred to as a high-speed copier. This refers to a high-capacity toner-based machine, typically capable of 100 or more black and white images per minute with some finishing capabilities (e.g., staples, collating, limited binding, etc.) (see fig. 3). These machines typically have a higher printing speed and capacity than typical “walk-up” office copiers. Since the JCP Regulations were updated in 1990, agencies’ printing operations have changed in scale and type. Printing industry and government data suggest that the total volume of printed material has been declining for at least the past 10 years.
A major factor in this decline is the use of electronic media options, such as digital publishing. As such, federal agencies publish more documents directly to the Internet where the public can access them, bypassing the need for the agency to print hard copies. At the same time that digital publishing was increasing, digital printer/copier technology was developing. In circumstances where agencies still needed hard copies to be printed, digital printers and copiers allowed federal agencies to produce documents themselves that formerly would have required professional printing expertise from outside vendors such as GPO or private printers. Based on definitions we developed in conjunction with GPO and the Interagency Council on Printing and Publication Services, among the agencies in our survey universe, we identified 64 percent fewer in-house printing plants than the number included in the 1990 JCP Regulations. The 1990 JCP Regulations listed 231 authorized in-house printing plants; agencies we surveyed reported operating 84 in-house plants. Although there was an overall decline in the number of printing plants, there was not a decline across all agencies—9 of the surveyed agencies reported a decrease in the number of plants, 14 agencies reported no change in the number of in-house plants, and 8 agencies reported an increase in the number of in-house plants. DOD accounted for the greatest decline in in-house printing plants. For example, the 1990 JCP Regulations listed 142 in-house printing plants for the armed services and the Defense Logistics Agency (DLA), which currently manages the majority of DOD’s printing infrastructure. DLA officials reported managing 17 in-house plants in our survey (see fig. 4). Other agencies that reported declines showed less dramatic reductions, such as the Department of Energy, which had 18 plants on the 1990 list, and reported 5 plants in our 2013 survey.
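As a check, the 64 percent figure follows directly from the two plant counts reported above:

```python
# Plant counts from the review: 231 in-house plants authorized in the 1990 JCP
# Regulations versus 84 in-house plants reported by surveyed agencies in 2013.
plants_1990 = 231
plants_2013 = 84

# Percentage decline, rounded to the nearest whole percent.
decline_pct = round((plants_1990 - plants_2013) / plants_1990 * 100)  # 64
```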
A decline in the number of in-house plants could be due to some agencies reducing their printing and focusing more on digital publications. One factor that could have influenced the increased emphasis on digital publications is a November 9, 2011, executive order promoting efficient spending. This executive order included a provision encouraging agencies “to limit the publication and printing of hard copy documents and to presume that information should be provided in an electronic form, whenever practicable, permitted by law, and consistent with applicable records retention requirements.” The vast majority of agencies that currently operate in-house printing plants reported operating duplication equipment. That is, most agencies reported operating high-speed duplication machines, and fewer reported operating conventional printing presses. Of the 32 agencies operating in-house printing plants, 17 reported that all of their in-house printing was duplication, and another 14 agencies reported operating some duplication equipment in addition to ink-based conventional printing presses. The remaining agency did not report its type of equipment (see fig. 5). No agency reported having only ink-based conventional printing presses at its in-house plants. In addition to agencies operating fewer printing plants and conducting more duplication, interviews with selected agencies showed declines in printing volumes in recent years. We interviewed six agencies that, according to OMB budget data, accounted for about 80 percent of total federal printing and reproduction obligations. Of those six agencies, five reported declines in total printing volumes between fiscal year 2009 and fiscal year 2011, ranging from 6 percent to 59 percent. One agency reported an 11 percent increase in printing volumes and told us it was due in part to opening an additional in-house plant in 2009.
All six agencies estimated a decrease in total spending on printing and reproduction between fiscal year 2009 and fiscal year 2011. Decreases ranged from as little as 0.1 percent to over 90 percent. All six agencies also reported greater spending on printing sent to GPO and private printers than on in-house printing or duplication. For example, in fiscal year 2011, USPS reported spending $5.8 million on in-house printing and spending $108.3 million on printing sent to private printers. Similarly, HHS reported spending $0.4 million on in-house printing, and $50.2 million on printing sent to GPO or private printers in fiscal year 2011. Agencies told us that they use a number of factors such as sensitivity, cost, volume, turnaround time, and in-house capabilities to determine if they will print in-house or externally through GPO or private printers. For example, DLA officials told us DLA produces some documents, such as those printed for the President, at in-house facilities to accommodate the short turnaround time and possible sensitive nature of the documents. VA officials said they send large volume print orders or documents requiring special finishing to private printers. For example, VA officials told us they used GPO and its approved private printers to print 2.9 million copies of its 2011 federal benefits booklet for veterans, survivors, and dependents because private printers could better handle this quantity than VA could in-house. Even with these changes in agencies’ printing, the definition of “printing” has not been updated in the 1990 JCP Regulations, which outline printing-related definitions and list authorized agency in-house printing plants. 
In 1994, Title 44 was updated to include “duplicating” in the definition of “printing,” so that it read “… ‘printing’ includes the processes of composition, platemaking, presswork, duplicating, silk screen processes, binding, microform, and the end items of such processes.” However, the 1990 JCP Regulations have not been updated to include duplicating in the definition of printing, and duplicating is not separately defined in either authority. For example, Title 44 does not separately define duplicating, and the 1990 JCP Regulations include a definition for “duplicating/copying,” such that “duplicating” is not distinguished from “copying.” Some agencies had difficulty using the definitions we created or identifying the number of their in-house facilities that performed duplication and qualified as printing plants. As mentioned above, to facilitate consistent data gathering from federal agencies on their printing activities, we worked with GPO and the Interagency Council on Printing and Publication Services to establish working definitions that included the Title 44 definition of printing and an updated definition of duplicating. In our survey, three agencies told us their definition of printing differed from ours. Two of those reported using the 1990 JCP Regulations printing definition, which does not include “duplicating.” Similarly, VA officials believed their agency operated one in-house printing plant, but in our discussion officials noted that the facility does not typically produce work that exceeds the volume limits of “duplication” and as such would not qualify as a printing plant. Postal Service officials reported that the agency’s 67 district offices do not track printing volumes, and thus could not determine if those offices’ printing operations would qualify as printing plants. We excluded these VA and Postal Service facilities from our tally. Some agencies also had difficulty providing volumes and spending data under these definitions.
For instance, printing officials from Commerce’s bureaus told us that they each tracked printing differently—some tracked pages while others tracked the number of jobs. Regarding spending, some printing officials told us they combined in-house printing and printing through GPO and its contractors in their spending estimates while others were able to report these amounts separately. For example, DLA officials said that they do not typically track volumes by our duplication volume limits, and as such would not track spending at facilities (i.e., printing plants) that may exceed those volumes. However, at our request, DLA officials reviewed production at the agency’s in-house facilities and identified plants that met our definitions, and then estimated spending at these plants. Other agencies we interviewed, such as the Postal Service and the State Department, provided total spending for all jobs printed at their in-house printing plants. As such, this data could include jobs printed at volumes below those in the provided duplication definition. Agencies may also have had difficulty providing volumes and spending data because of decentralized printing practices. For instance, five agencies reported that their printing operations were dispersed across the agency, which meant that officials responsible for printing operations did not have readily available information on agency-wide printing operations. One was the Department of Commerce, which houses 12 bureaus and agencies that operate independently and differently. Commerce officials told us that because printing is dispersed, there was not a single official who could report on printing across the agency. Health and Human Services’ four divisions with printing operations also reported operating with their own distinct printing officials and practices. In these cases, each division provided its own volume and spending information, which could lead to inconsistencies in data across bureaus or sub-agencies. 
In addition, the 1990 JCP Regulations provide that federal agencies report on their print operations to the JCP, but the requirements do not address duplication. The 1990 JCP Regulations state that agencies shall report printing operations semi-annually to the JCP, on information such as the total cost of printing and an inventory of plant equipment at in- house printing plants. Officials from four of the six agencies we interviewed reported that they did not recently send reports on their printing operations to the JCP, and JCP staff told us they received only a few reports in the last 5 years. Discussions with officials from one agency suggested a reason for this may be that some agency officials believed they are only required to report on printing plants with offset or conventional printing equipment, not those with duplication equipment. Since the reporting requirements date back to 1990, before Title 44 was updated to include duplicating in the definition of printing, neither the 1990 JCP Regulations nor the JCP report forms specify that agencies are required to submit reports for printing plants that agencies may consider duplication plants. Officials from one agency told us they did not report to JCP because they did not have any in-house facilities, and officials from another agency noted that they were exempt from the reporting requirements. Additionally, officials from a bureau at one agency told us they send JCP reports to an in-house printing officer, suggesting some confusion regarding reporting requirements. JCP staff told us they recognize the 1990 JCP Regulations do not include duplication in the definition of printing, and are working to revise this guidance. Specifically, JCP staff are in the process of developing printing definitions that more closely reflect current printing practices, particularly for “duplication” activities. JCP staff also told us that they are working to streamline reporting requirements for agency printing officials. 
The majority of government documents are published digitally (electronically produced and then disseminated over the Internet); however, the provisions in the law that require agencies to submit documents to the Federal Depository Library Program (FDLP) do not reference digital publishing. GPO estimates that more than 90 percent of all government information is published digitally. Title 44 outlines the types of documents that are required to be submitted to the FDLP, upon request, but does not reference digital documents explicitly. Currently, Title 44 defines a “government publication” as “… informational matter which is published as an individual document at Government expense, or as required by law.” Title 44 does not specify if “published” includes digitally published documents. Officials from selected agencies told us they do not submit digital documents to FDLP, and two reported that they do not have any policies or procedures for submitting documents to the FDLP regardless of whether the documents were printed at their in-house printing plants or published electronically. For documents printed through GPO, agencies are not required to make a determination about whether a document should be submitted to FDLP, as GPO typically identifies documents that could be of interest to the FDLP and then, if the FDLP requests them, sends them directly to the FDLP. Some agencies’ officials told us they rely on GPO for this service, and do not submit digital documents to the FDLP. FDLP staff have taken a number of steps to address this challenge, and to obtain digital documents from agencies. FDLP staff told us they focus on educating agencies about document submission requirements, and searching for and locating possible “fugitive documents”—documents that agencies published and FDLP staff believe should have been submitted to the FDLP.
In a January 2013 report, the National Academy of Public Administration (NAPA) recommended that GPO work with depository libraries and other library groups to develop a comprehensive plan for preserving the print collection of government documents, including a process for ingesting digitized copies into FDsys, GPO’s online system that provides free access to government publications. In response to NAPA’s recommendation, GPO noted that GPO and the depository library community have long recognized the need to catalog and preserve the tangible collections that GPO has distributed to federal depository libraries since 1861, and that implementing this recommendation will be incorporated into GPO’s national plan for the future of the FDLP. Because NAPA’s review was ongoing and GPO was outside the scope of our review, we do not make recommendations regarding FDLP submissions to GPO in this report (see app. I for more information on our scope and methodology). We provided copies of our draft report to GPO and DOD for their review and comment. GPO generally agreed with our findings and provided a letter and technical comments, which we incorporated as appropriate. DOD did not provide comments. We are sending copies of this report to the appropriate congressional committees. The report will also be available at no charge on the GAO website at https://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-2834 or stjamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To address our objectives, we developed working definitions of key printing terms—“printing”, “duplication”, and “printing plant”—to use when surveying and interviewing agencies on their current printing practices.
To do this we used existing statutory language for “printing” and “printing plant” from Title 44 of U.S. Code, Public Printing and Documents, and the Joint Committee on Printing’s 1990 Printing and Binding Regulations (1990 JCP Regulations). To define “duplication” we worked with GPO and the Interagency Council on Printing and Publication Services to update the definition of duplication in the 1990 JCP Regulations. We considered volumes above 500 copies of a single page and 2,500 copies in the aggregate of multiple pages “duplication,” and volumes below that “copying”. This allowed us to identify agencies’ printing operations that qualified as printing plants rather than lower volume copying operations, sometimes referred to as “copy centers”. In interviews with selected agencies, printing officials expressed confusion over this definition of “duplication”, in part due to the volume limits distinguishing duplication from copying. The volume limits we used were similar but not the same as limits outlined in the 1990 JCP Regulations on the number of pages agencies are permitted to produce in-house without prior authority of the JCP (5,000 copies of a single page or 25,000 in the aggregate of multiple pages). We clarified our definitions when possible, but it is possible that agency officials’ confusion could have affected the number of printing plants reported in our survey. See table below for a list of how each printing term was defined in the 1990 JCP Regulations, Title 44, and by GAO with assistance from GPO and the Interagency Council on Printing and Publication Services. To identify agencies that could be operating in-house printing plants, we developed a universe consisting of agencies authorized in the 1990 JCP Regulations and agencies that had obligated funds to the “printing and reproduction” object class in at least 2 years between fiscal years 2009 and 2011. Agencies report printing and related obligations in this object class. 
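The working thresholds described above can be expressed as a simple decision rule. The function below is only an illustrative sketch of the definitions used in this review, reading the two volume limits as alternative triggers; it is not an official JCP or GPO definition.

```python
def classify_run(single_page_copies=0, multipage_aggregate_copies=0):
    """Classify an in-house production run under this review's working
    definitions: more than 500 copies of a single page, or more than 2,500
    copies in the aggregate of multiple pages, counts as "duplication";
    anything smaller is "copying". (The 1990 JCP Regulations set separate,
    higher limits -- 5,000 and 25,000 -- on what agencies may print in-house
    without prior JCP authority.)"""
    if single_page_copies > 500 or multipage_aggregate_copies > 2500:
        return "duplication"
    return "copying"
```

Under this rule, a facility whose runs never exceed the thresholds would be treated as a copy center rather than a printing plant.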
Data in the various volumes of the President's budget request, including object class data, undergo rigorous review by the Office of Management and Budget (OMB) and, accordingly, are generally considered sufficiently reliable for most of GAO's purposes, including using these data to select agencies to survey. For the purposes of our review, we made a number of adjustments to the 1990 JCP Regulations list of 36 agencies. First, we combined the armed forces and their plants, which were listed separately in the 1990 JCP Regulations, under the Defense Logistics Agency (DLA), since DLA took over all printing plants for the armed forces. Second, we excluded the Administrative Office of U.S. Courts, due to its being outside the executive branch; GPO, as it was outside the scope of our review; the National Academy of Public Administration, since it had ongoing related work on GPO; and the Panama Canal Commission, as it was decommissioned in 1999. Finally, we separated the Social Security Administration from Health and Human Services. In 1990, the Social Security Administration was a sub-agency with an in-house printing plant within Health and Human Services, but it became an independent agency in 1994. With these exceptions, 31 of the agencies from the 1990 list remained in our universe. Through budget obligations data, we identified 16 additional agencies with at least 2 years of printing and reproduction obligations for fiscal years 2009 through 2011. This resulted in a total universe of 47 agencies with possible in-house printing plants. To describe how federal printing regulations and statutes reflect current printing practices, we reviewed legal documents such as Title 44 of the U.S. Code and the 1990 JCP Regulations, administered a survey to the universe of agencies described above, and interviewed officials from the printing industry, GPO, and other federal agencies.
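The universe construction described above is, in effect, a set union with filters: the adjusted 1990 JCP list, minus exclusions, plus agencies meeting the obligations criterion. A minimal sketch follows; the agency names and data shapes are illustrative placeholders of our own, not GAO's actual data:

```python
def build_universe(jcp_1990_agencies, excluded, obligation_years_by_agency):
    """Return the survey universe: agencies from the (adjusted) 1990 JCP
    Regulations list, minus exclusions, plus any agency that obligated
    funds to the "printing and reproduction" object class in at least
    2 of fiscal years 2009 through 2011.

    obligation_years_by_agency maps each agency to the set of fiscal
    years in which it reported such obligations."""
    universe = set(jcp_1990_agencies) - set(excluded)
    for agency, years in obligation_years_by_agency.items():
        if len(years) >= 2:  # obligations in at least 2 of the 3 years
            universe.add(agency)
    return universe
```

Applied to GAO's actual inputs, this logic yields the 31 agencies retained from the 1990 list plus the 16 identified through obligations data, for the total universe of 47.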
To describe agencies' printing volumes and spending, we interviewed the six agencies from our survey universe whose printing and reproduction obligations in fiscal years 2009, 2010, and 2011 together constituted the majority—roughly 80 percent—of total federal obligations to printing and reproduction. Those agencies were the Departments of Commerce, Defense, Health and Human Services, State, and Veterans Affairs, and the U.S. Postal Service. We interviewed printing and budget officials from these agencies and obtained information on their printing operations, including data on printing volumes and spending. Findings from our interviews with selected agencies cannot be generalized to the entire population. To describe agencies' in-house printing spending, we relied on data provided by selected agencies, since federal printing and reproduction obligations may not reliably describe agencies' in-house printing spending. This is partially due to challenges with obligations data that we have previously reported, such as the fact that obligations categories are not mutually exclusive (e.g., it is possible for some printing and duplication obligations to be categorized under a different object class, and vice versa). We also found that the printing and reproduction object class includes printing done through GPO or external private printers as well as that done at agencies' in-house printing plants. As such, we relied on interviews with agencies to describe their in-house printing operations and collected spending information from agencies for analysis. We also asked agencies to provide spending information on the printing that they sent out to GPO and private printers.
Some agencies provided printing expenses and others provided obligations, but for the purposes of this report, we refer to both as "spending." Obligations and expenditures capture different aspects of the budgeting process: obligations are the legal commitment to pay for a good or service, while expenditures are the actual disbursement of money. For the purposes of this report, we assume that there is a relatively short time lag between obligation and expenditure for printing and reproduction activities, and therefore that the difference is not material. As such, we refer to both obligations and expenditures as a single "spending" category. To describe agencies' current printing practices, such as operation of in-house printing plants, we administered the survey described above to the 47 agencies in our universe from January through March 2013. Results from that survey are presented in this report, and the questions asked in our survey instrument can be found in appendix III. We received a 100 percent response rate on the survey and analyzed the information obtained. We developed a questionnaire using Microsoft Excel to obtain information about federal agencies' in-house printing operations. We identified potential survey recipients from a list of agency print contacts provided by GPO and the Interagency Council on Printing and Publication Services. We tested the questionnaire with print officials from five agencies included on our list of potential respondents. We selected these agencies to represent different printing and reproduction obligations and to provide a mix of agencies we identified through obligations data and those listed in the 1990 JCP Regulations. We conducted these survey pretests to determine if the questions were understandable and measured what we intended, and to ensure that the survey was not overly burdensome. On the basis of feedback from the pretests, we modified the questionnaire as appropriate.
In late January 2013, we sent an email alerting agency contacts to the upcoming survey; the survey was delivered to recipients via email message a few days later. Using the questionnaire, we requested contact information for print officials; feedback on the definitions used in the survey; each agency's cited authority to operate in-house printing plants; and information on in-house printing plants, such as location and equipment used. We did not independently verify the accuracy of the information in the surveys. For agencies in the 1990 JCP Regulations, we provided a list of plants that were authorized in 1990 and asked them to verify whether each listed plant was operational or not operational. We also asked that the agencies identify any additional plants that were not included in the 1990 list. For agencies that were not included in the 1990 list but were added to our survey population using obligations data, we asked that they provide plant information. To help increase our response rate, we sent two follow-up emails and called agency officials from January through March 2013. The practical difficulties of conducting any survey may introduce some types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted or the sources of information available to respondents can introduce unwanted variability into the survey results. As we discussed previously, some of our survey respondents reported having difficulty with the definitions of printing, duplication, and printing plant that we included in our survey. We included steps in the design of the questionnaire, such as testing the instrument, and followed up with agencies to clarify responses during data collection for the purpose of minimizing such nonsampling errors.
We took the following steps to increase the response rate: pretesting the questionnaires with agency officials knowledgeable about print operations, conducting multiple follow-up calls and emails to encourage responses to the survey, and contacting respondents to clarify unclear responses. We conducted this performance audit from August 2012 through July 2013, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. GAO surveyed 47 agencies to determine if they operated in-house printing plants. To identify agencies that could be operating in-house printing plants, we developed a universe consisting of agencies authorized in the 1990 Government Printing and Binding Regulations and agencies that had obligated funds to the "printing and reproduction" object class in at least 2 years between fiscal years 2009 and 2011. The tables below contain the agencies in this universe and the number of plants authorized in the 1990 Government Printing and Binding Regulations and/or plants reported in 2013 in the GAO Government Printing Survey. We administered a survey focused on government printing operations to print officials at 47 agencies we determined could be operating in-house printing plants. We provided questions to identified agency print officials using Microsoft Excel to obtain information about federal agencies' in-house printing operations. This appendix contains the questions from the survey questionnaire.
We provided the following definitions for agencies to use when answering our questions:

Printing includes the processes of composition, platemaking, presswork, duplicating, silkscreen processes, binding, microform, and the end items of such processes. Duplicating includes high-speed duplicating. (Title 44 Sec. 503 Note)

Duplication: (a) by automatic copy-processing or copier-duplicating machines, producing copies by electrostatic, thermal, or other copying processes, and using exclusively toner or ink-jet type inks instead of ink; and (b) in volumes that exceed either 500 copies of a single page or 2,500 production units in the aggregate of multiple pages, per job.

We asked officials from all surveyed agencies the following questions regarding the definitions of key print terms:

1. Does your agency have official definitions of "printing", "duplication," or "printing plant" that differ from any of the three definitions above?

2. If yes, please provide your agency's definitions below:

3. What authority are the definition(s) from? (Please include statute or other citation(s).)

We asked agency officials the following questions, with specific instruction, to determine how many in-house printing plants each agency operates:

4. What is the total number of in-house printing plants (including duplication plants) operated by your agency, across all departments and bureaus within the agency?

When completing your answer, please include the following types of facilities:
o Any plant that produces printing (including duplicating), owned or
o Duplication plants and high speed duplication plants that use toner or ink-jet machines and typically produce more than 500 copies of a single page or 2,500 production units in the aggregate, per job.

When completing your answer, please exclude the following types of facilities:
o Any small plants or "copy centers" that do NOT produce more than 500 single/2,500 in aggregate production units, per job.

5. Which of the following is the source of your statutory authority to operate in-house printing plants (including duplication plants)?
o 1990 Government Printing and Binding Regulations
o Some other statutory authority
o Don't know

a. If you selected "Some other statutory authority" in Q5, please specify that authority in the space provided here.

We asked only print officials from agencies authorized to operate in-house plants listed in the 1990 JCP Government Printing and Binding Regulations the following questions to determine the operating status of, and additional information on, those listed in-house plants:

6. The table provided contains information on printing plants operated by your agency, as indicated in the 1990 Government Printing and Binding Regulations. Please indicate the current status of each plant (Q6a), and complete the requested information for all operational plants (Q6b - Q6f).

a. What is the current status of this plant?
o Operational
o Not Operational

b. Does this plant produce the following types of products? (Yes, No, Don't Know)
o Toner or Ink Jet on Paper:
o Ink on Paper:
o Other products (i.e., other substrates):
If you answered "yes" to other products, please provide two examples of other products below:

c. Does this plant contain any of the following types of equipment? (Yes, No, Don't Know) If yes, how many?
o Offset printing press (ink):
o Digital printing press (ink):
o High speed duplication machine (ink-jet or toner):

d. Approximately how much of this plant's printing volume is offset printing? (All, More than half, About half, Less than half, None)

e. Approximately how much of this plant's printing volume is digital printing? (All, More than half, About half, Less than half, None)

f. Approximately how much of this plant's printing volume is duplication? (All, More than half, About half, Less than half, None)

We asked all survey participants the following questions about any in-house printing plants not listed in the 1990 JCP Printing and Binding Regulations:

7. If your agency operates any in-house printing plants (including duplication plants) that were NOT included in the previous question (Q6), please list the name and location of the plant and complete the requested information using the table below. If you do not have any in-house printing plants (including duplication plants) to enter on this tab, please check this box and proceed to the next tab:

a. Name of Plant:

b. Location of Plant (City and State):

c. Does this plant produce the following types of products? (Yes, No, Don't Know)
o Toner or Ink Jet on Paper:
o Ink on Paper:
o Other products (i.e., other substrates):
If you answered "yes" to other products, please provide two examples of other products below:

d. Does this plant contain any of the following types of equipment? (Yes, No, Don't Know)
o Offset printing press (ink):
o Digital printing press (ink):
o High speed duplication machine (ink-jet or toner):

e. Approximately how much of this plant's printing volume is offset printing? (All, More than half, About half, Less than half, None)

f. Approximately how much of this plant's printing volume is digital printing? (All, More than half, About half, Less than half, None)

g. Approximately how much of this plant's printing volume is duplication? (All, More than half, About half, Less than half, None)

In addition to the contact named above, Sharon Silas (Assistant Director), Melissa Bodeau, Leia Dickerson, Sarah Farkas, Kathleen Gilhooly, Leigh Ann Haydon, Carol Henn, Hannah Laufe, John Mingus, Jr., Betsey Ward-Jenks, Elizabeth Wood, and William T. Woods made significant contributions to this report.
Federal law requires that, with limited exceptions, all federal printing be performed by or through GPO. The JCP authorizes exemptions to specific agencies to operate in-house printing plants. In its 1990 JCP Regulations, the JCP included a list of authorized federal in-house printing plants. Some agency documents, once published, are required to be submitted to the FDLP, a GPO program designed to preserve government documents and make them available to the public. GAO was asked to examine how federal printing practices had changed since the JCP Regulations were updated in 1990. This report describes (1) agencies' current printing practices--including the number of in-house printing plants and selected agencies' volumes and spending--and (2) how agencies' current printing practices are reflected in federal printing regulations and statutes. GAO surveyed agencies that might be operating in-house printing plants, interviewed GPO and agency officials, analyzed agency data on printing volumes and spending, and reviewed printing regulations and statutes. GAO also selected six agencies from those surveyed to interview, based on their printing and reproduction obligations. Findings from these interviews cannot be generalized to other federal agencies. GAO is not making recommendations in this report. GAO provided copies of the draft report to GPO and DOD for review and comment. GPO generally agreed with the findings and provided a letter and technical comments, which were incorporated as appropriate. DOD did not provide comments. Agencies GAO surveyed reported operating fewer in-house printing plants than in 1990. Specifically, surveyed agencies reported operating 64 percent fewer plants than the number listed in the Congress's Joint Committee on Printing's (JCP) Government Printing and Binding Regulations, updated in 1990 (1990 JCP Regulations). The Department of Defense (DOD) accounted for the greatest decline in in-house printing plants.
The 1990 JCP Regulations listed 142 DOD printing plants; however, the Defense Logistics Agency, which currently manages the majority of DOD's printing infrastructure, reported 17 in-house printing plants in GAO's survey. In addition, most agencies reported operating toner-based high-speed duplication machines, and fewer reported operating ink-based conventional printing presses. Of the 32 agencies operating in-house printing plants, 17 reported that all of their in-house printing was conducted on high-speed duplication machines; another 14 agencies reported operating some duplication equipment in addition to conventional printing presses (the remaining agency did not report its type of equipment). No agency reported having only ink-based conventional printing presses at its in-house plants. In addition, interviews with selected agencies showed declines in printing volumes and total spending, and suggested that agencies spent more on printing sent to the Government Printing Office (GPO) and its contracted private printers than on printing done at in-house printing plants. Agencies' printing practices have changed, but existing authorities have not been updated. For example, in 1994, Title 44 of the U.S. Code was updated to include "duplicating" in the definition of "printing," but the 1990 JCP Regulations do not include this definition. According to JCP staff, the Committee is aware that the 1990 JCP Regulations do not include duplicating in the definition of printing, and the Committee is working to revise the guidance. Also, the majority of government documents are now published digitally, but provisions in Title 44 that require agencies to submit documents to the Federal Depository Library Program (FDLP) do not reference digital publishing. Selected agencies GAO interviewed reported that they do not submit digital documents to FDLP. FDLP staff have taken a number of steps to address this, including educating agencies about FDLP requirements.
In addition, the National Academy of Public Administration recently recommended that GPO develop a plan to preserve and collect government documents, and include a process for ingesting digitized copies into GPO's online government publications system, and GPO reported that it would incorporate this recommendation into its national plan for the future of the FDLP.
Advocates of biennial budgeting often point to the experience of individual states. In looking to the states, it is necessary to disaggregate them into several categories. First, 8 states have biennial legislative cycles and hence necessarily have biennial budget cycles. Second, as the table below shows, the 42 states with annual legislative cycles present a mixed picture in terms of budget cycles: 27 describe their budget cycles as annual, 12 describe their budget cycles as biennial, and 3 describe their budget cycles as mixed. The National Association of State Budget Officers (NASBO) reports that those states that describe their system as "mixed" have divided the budget into two categories: that for which budgeting is annual and that for which it is biennial. Connecticut has changed its budget cycle from biennial to annual and back to biennial. In the last 3 decades, 17 other states have changed their budget cycles: 11 from biennial to annual, 3 from annual to mixed, and 3 from annual to biennial. Translating state budget laws, practices, and experiences to the federal level is always difficult. As we noted in our review of state balanced budget practices, state budgets fill a different role, may be sensitive to different outside pressures, and are otherwise not directly comparable. In addition, governors often have more unilateral power over spending than the President does. However, even with those caveats, the state experience may offer some insights for your deliberations. Perhaps most significant is the fact that most states that describe their budget cycles as biennial or mixed are small and medium sized. Of the 10 largest states in terms of general fund expenditures, Ohio is the only one with an annual legislative cycle and a biennial budget. According to a State of Ohio official, every biennium two annual budgets are enacted, and agencies are prohibited from moving funds across years.
In addition, the Ohio legislature typically passes a “budget corrections bill.” A few preliminary observations can be made from looking at the explicit design of those states which describe their budget cycle as “mixed” and the practice of those which describe their budget cycle as “biennial.” Different items are treated differently. For example, in Missouri the operating budget is on an annual cycle while the capital budget is biennial. In Arizona “major budget units”—the agencies with the largest budgets—submit annual requests; these budgets are also the most volatile and the most dependent on federal funding. In Kansas the 20 agencies that are on a biennial cycle are typically small, single-program or regulatory-type agencies that are funded by fees rather than general fund revenues. In general, budgeting for those items which are predictable is different than for those items subject to great volatility whether due to the economy or changes in federal policy. S. 1434, like a number of previous bills, proposes that the entire budget cycle be shifted from annual to biennial. Under this system, the President would submit budgets every 2 years. Authorizations would be for 2 years or longer. Budget resolutions would be adopted, and appropriations enacted, every 2 years. We believe that this need not be seen as an all-or-nothing proposal. Budget agreements, authorizations, budget resolutions, and appropriations need not cover the same time period. Multiyear fiscal policy agreements and multiyear authorizations make a great deal of sense, but they do not require changing the appropriations decision cycle from annual to biennial. While biennial appropriations could save time for agencies, they would result in a shift in congressional control and oversight. Proposals to change the process should be viewed partly in the context of their effect on the relative balance of power in this debate. 
We have previously supported the use of multiyear authorizations for federal programs. There seems to be little reason to reexamine and reauthorize programs more often than they might actually be changed. Furthermore, multiyear authorizations help both the Congress and the executive branch by providing a longer term perspective within which a program may operate and appropriations can be determined. This is the normal practice for most of the nondefense portion of the budget. We also agree that a 2-year budget resolution is worth considering. Especially in an era of multiyear spending caps and multiyear reconciliation instructions, a 2-year budget resolution may not be a major change. However, a way would have to be found to update the Congressional Budget Office’s (CBO) forecast and baseline against which legislative action is “scored.” As you know, CBO scores legislation on the economic assumptions in effect at the time of the budget resolution. Even under the current system there are years when this practice presents problems: in 1990 the economic slowdown was evident during the year, but consistent practice meant that bills reported in compliance with reconciliation instructions were scored on the assumptions in the budget resolution. If budget resolutions were biennial, this problem of outdated assumptions would be greater—some sort of update in the “off year” would be necessary. We have also said that we believe that at a time when major efforts are under way to reduce the deficit, there should be some way to look back and track progress against any multiyear fiscal policy plan. Such a formal “lookback” would be even more critical under a biennial budget resolution. 
Traditionally, biennial budgeting has been advocated as a way to advance several objectives: (1) to shift the allocation of agency officials’ time from the preparation of budgets and justifications to improved financial management and analysis of program effectiveness, (2) to reduce the time Members of the Congress must spend on seemingly repetitive votes, and hence permit increased oversight, and (3) to reduce uncertainty about longer-term funding levels and allocations and hence improve program management and results. However, shifting the entire cycle—authorizations, budget resolutions, and appropriations—to a biennial one may not be necessary to achieve these objectives. As I noted earlier, biennial appropriations can be considered separate from biennial budget resolutions because the two raise quite different questions. Let me turn now specifically to that issue. In considering whether the federal government should shift to a biennial budget, it is important to recognize the critical distinction between how often budget decisions are made and how long the money provided for agency use is available. That is the difference between the periodicity of decisions and the periodicity of funds. Biennial budgeting proposals seek to change the frequency with which decisions are made—from annual to biennial budget decisions. Too often, however, the idea is discussed as though it were necessary to change the frequency of decisions in order to change the length of time funds are available. However, as you know, this is a misconception. The federal budget today is not composed entirely of annually enacted appropriations of 1-year moneys. Not all funds expire on September 30 of each year. First, because budget decisions about mandatory programs and entitlements—which constitute nearly two-thirds of federal spending—are not made annually, the debate about annual versus biennial appropriations deals with less than half of the budget. 
Annually enacted appropriations apply to that portion of the budget known as discretionary spending—about 36 percent of federal outlays in fiscal year 1995. Even within that 36 percent of the budget on an annual appropriation cycle, not all appropriations were for 1-year funds. The Congress has routinely provided multiple-year or no-year appropriations for accounts or for projects within accounts when it seemed to make sense to do so. Indeed, about two-thirds of the accounts on an annual appropriation cycle contained some multiple-year or no-year funds. For these accounts, some prior year and/or current year authority was available for obligation beyond September 30, 1995, without further congressional action. To the extent that biennial budgeting is proposed as a way to ease a budget execution problem, the Congress has shown itself willing and able to meet that need under the current annual cycle. The federal government has had some experience with biennial budgets. The 1986 Defense Authorization Act directed the Department of Defense (DOD) to submit a biennial budget for fiscal years 1988 and 1989 and every 2 years thereafter. DOD submitted 2-year budgets for a number of fiscal years. However, the authorization committees have not approved a full 2-year budget, and thus the appropriation committees have not provided appropriations for the second year. We have previously reported that if the Congress decides to implement a 2-year budget at the appropriation account level, it should proceed cautiously by testing it on a limited basis. Good candidates for a limited test would be organizations or programs that are relatively stable and for which there are no obvious impediments. Impediments would be activities that hamper the forecasting of budgetary needs for the 2-year period, such as a major reorganization, major changes in financial management or information resources management (IRM) systems, or major changes in mission.
In its efforts to bring the budget into balance, the Congress is currently considering major changes in the scope and methods of delivering government services. The very magnitude of these changes raises questions about whether a shift to biennial appropriations could or should be made at the same time. For agency officials—both agency budget officers and program managers—the arguments for biennial budgeting may seem quite strong. Currently, agency budget officers spend several months every year preparing a “from-the-ground-up” budget with voluminous written justifications. Much of this work is repetitious. In contrast, requests for supplemental appropriations are handled on an exception basis. Only those agencies requesting supplemental appropriations prepare and present justifications, and those justifications are less complex than for the annual budget. If, under a biennial appropriations process, the “off-year” updates, amendments, or adjustments were treated like supplemental appropriations, the savings in agency time could be significant, even if the Congress required—as seems reasonable—that agencies submit audited financial and spending reports every year. Would agency time and energy be shifted to improved financial management or better program evaluation? I suspect that would depend on the President’s and the agency’s leadership and on what the Congress demanded of the agencies. For agency program managers, the interest in biennial budgets is slightly different. Although preparation and analysis for the annual budget preparation and submission process is time-consuming and burdensome for program managers, they are likely to have a greater interest in how long money is available for use. Especially in some programs, such as defense procurement and education programs, multiyear appropriations tend to smooth program functioning. 
However, as noted above, the Congress has already addressed this budget execution problem for many of these programs by giving them some multiyear funding. While a shift of the entire cycle would ease planning and increase predictability for all program managers, multiyear or advance funding can be provided for those programs for which 1-year money seriously impairs program effectiveness without that shift. Regardless of the potential benefits to agencies, the decision on biennial budgeting will depend on how the Congress chooses to exercise its constitutional authority over appropriations and its oversight functions. Annually enacted appropriations have long been a basic means of exerting and enforcing congressional policy. Oversight has often been conducted in the context of agency requests for funds. A 2-year appropriation cycle could lessen congressional influence or control over program and spending matters, largely because the process would afford fewer scheduled opportunities to affect agency programs and budgets. Although it could be argued that the existence of fixed-dollar caps on discretionary spending means that multiyear decisions have already been made, that is so only at the aggregate level. The Congress has retained the right to rearrange priorities within those caps. A shift to a biennial appropriations cycle could lessen that flexibility. We have long advocated regular and rigorous congressional oversight of federal programs. Such oversight should examine both the design and effectiveness of federal programs and the efficiency and skill with which they are managed. Through the Chief Financial Officers Act and the Government Performance and Results Act, the Congress has put in place the building blocks for improved accountability—both for the taxpayer's dollar and for results. Congressional involvement in reviewing agency strategic plans and in developing performance indicators will be critical to the success of these efforts.
However, it is not necessary to change the budget and appropriations cycle to have effective congressional oversight. Indeed, as I mentioned before, the regular appearance before Appropriations committees historically has provided one vehicle for oversight. This brings me back to my main point: the decision on whether the budget and appropriations cycle should be annual or biennial is fundamentally a decision about the form and forum the Congress wishes to use to affect agency programs and operations. We believe that multiyear fiscal policy agreements and multiyear authorizations make a great deal of sense, but they do not require changing the appropriations decision cycle from annual to biennial. While biennial appropriations could save time for agencies, they would also result in a shift in congressional control and oversight. Proposals to change the process should be viewed partly in the context of their effect on the relative balance of power in this debate. While budgeting always involves forecasting, which itself is uncertain, the longer the period of the forecast, the greater the uncertainty. Increased difficulty in forecasting was one of the primary reasons states gave for shifting from biennial to annual cycles. Dramatic changes in program design or agency structure, such as those the Congress is considering in many areas, will make budget forecasting more difficult. Moving from an annual to a biennial appropriations cycle at the same time may not be wise, given that there may be program changes which could in turn create the need for major budgeting changes in the second year of a biennium. If this happens, biennial budgeting would exist only in theory. Biennial appropriations would be neither the end of congressional control nor the solution to many budget problems. The questions for the Congress are, how does it wish to exercise its constitutional authority over appropriations and in what forum will it conduct its oversight responsibilities? Mr. 
Chairman, this concludes my prepared statement. I would be happy to answer any questions you or Members of the Subcommittee may have.

Budget Process: Evolution and Challenges (GAO/T-AIMD-96-129, July 11, 1996).
Correspondence to Chairman Horn, Information on Reprogramming Authority and Trust Funds (GAO/AIMD-96-102R, June 7, 1996).
Correspondence to Chairman Kasich, Budgeting for Federal Insurance (GAO/AIMD-96-73R, March 22, 1996).
Budget Process: Issues Concerning the Reconciliation Act (GAO/AIMD-95-3, October 7, 1995).
Budget Issues: Earmarking in the Federal Government (GAO/AIMD-95-216FS, August 1, 1995).
Budget Issues: History and Future Directions (GAO/T-AIMD-95-214, July 13, 1995).
Budget Structure: Providing an Investment Focus in the Federal Budget (GAO/T-AIMD-95-178, June 29, 1995).
Correspondence to Chairman Wolf, Transportation Trust Funds (GAO/AIMD-95-95R, March 15, 1995).
Budget Policy: Issues in Capping Mandatory Spending (GAO/AIMD-94-155, July 18, 1994).
Budget Process: Biennial Budgeting for the Federal Government (GAO/T-AIMD-94-112, April 28, 1994).
Budget Process: Some Reforms Offer Promise (GAO/T-AIMD-94-86, March 2, 1994).
Budget Policy: Investment Budgeting for the Federal Government (GAO/T-AIMD-94-54, November 9, 1993).
Budget Issues: Incorporating an Investment Component in the Federal Budget (GAO/AIMD-94-40, November 9, 1993).
Correspondence to Chairmen and Ranking Members of the House and Senate Committees on the Budget and the Chairman of the Former House Committee on Government Operations (B-247667, May 19, 1993).
GAO discussed several proposals to change the budget process from an annual to a biennial cycle. GAO noted that: (1) many congressional members believe a biennial budget cycle would streamline the budget process, provide longer-term funding levels, enhance agencies' ability to manage their programs, and provide more time for congressional oversight; (2) preparation and analysis for the annual budget process is time-consuming and burdensome for program managers; (3) although eight states have biennial budget cycles, state budgets fill a different role and are sensitive to different outside pressures; (4) the state agencies with the largest budgets submit annual budget requests, since these budgets are the most volatile and dependent on federal funding; (5) the state agencies that are on biennial budget cycles are typically small, single-program agencies that are funded by fees rather than general fund revenues; (6) budget agreements, authorizations, and budget resolutions do not have to cover the same time period; (7) Congress has routinely provided multiyear appropriations for those programs on the annual appropriation cycle; and (8) a 2-year budget cycle could lessen congressional control over program and spending matters.
As the economy begins to recover from the financial crisis, the extraordinary government interventions taken to stabilize the financial system will need to be withdrawn. The consequences of financial crises— specifically systemic bank-based crises—on economic activity have been well documented. As a result, governments and monetary authorities typically undertake interventions, even though the resulting actions raise concerns about moral hazard and can come at a significant expense to taxpayers. Given its severity and systemic nature, the recent global financial crisis prompted substantial interventions starting as early as September 2007, after the first signs of serious trouble in the subprime mortgage market surfaced (see app. II). In the early stages of the financial crisis, the observable policy responses were a Department of Housing and Urban Development (HUD)-initiated foreclosure prevention program, a Federal Reserve lending facility for depository institutions, and currency swap arrangements with various foreign central banks. As the crisis intensified, additional lending facilities were created, followed by separate actions by the Federal Reserve, Treasury, and others that dealt with financial sector issues on a case-by-case basis. These actions included facilitating JPMorgan Chase & Co.’s purchase of Bear Stearns Companies, Inc.; addressing problems at Fannie Mae and Freddie Mac by placing them into conservatorship; working with market participants to prepare for the failure of Lehman Brothers; and lending to American International Group (AIG) to allow it to sell some of its assets in an orderly manner. Although Treasury had begun to take a number of broader steps, including establishing a temporary guarantee program for money market funds in the United States, it decided that additional and comprehensive action was needed to address the root causes of the financial system’s stresses. 
The passage of EESA and authorization of TARP provided Treasury with the framework it needed to begin its more comprehensive and coordinated course of action that ultimately resulted in several programs. Some TARP funds were utilized to launch joint programs or to support efforts principally led by other regulators. Concurrent with the announcement of the first TARP program, the Federal Reserve and FDIC also announced other actions that were intended to stabilize financial markets and increase confidence in the U.S. financial system. This system-wide approach was also coordinated with a number of foreign governments as part of a global effort. The various initiatives under TARP are detailed below.

Capital Purchase Program (CPP). CPP was intended to restore confidence in the banking system by increasing the amount of capital in the system. Treasury provided capital to qualifying financial institutions by purchasing preferred shares and warrants or subordinated debentures.

Capital Assistance Program (CAP). CAP was designed to further improve confidence in the banking system by helping ensure that the nation’s largest banking institutions had sufficient capital to cushion themselves against larger than expected future losses, as determined by the Supervisory Capital Assessment Program (SCAP)—or “stress test”—conducted by federal regulators.

Consumer & Business Lending Initiative (CBLI). CBLI was designed to support new securitizations in consumer and business credit markets, especially for auto, student, and small business loans; credit cards; and new and legacy securitizations of commercial mortgages to increase credit availability in these markets and now includes small business lending programs as well. A portion of the CBLI funds were used to support the Federal Reserve’s Term Asset-Backed Securities Loan Facility (TALF). Under TALF, the Federal Reserve provided loans to private investors who pledged securitizations as collateral and Treasury provided a government backstop against certain losses.

Public Private Investment Program (PPIP). PPIP was designed to facilitate the purchase of “legacy assets” as part of Treasury’s efforts to facilitate price discovery in markets for these assets, repair balance sheets throughout the financial system, and increase the availability of credit to households and businesses. The legacy securities program, or “S-PPIP,” partnered Treasury and private sector equity funding leveraged by Treasury loans to purchase and hold legacy residential mortgage-backed securities (RMBS) and commercial mortgage-backed securities (CMBS). In the original plan, PPIP was to also include a partnership between Treasury and FDIC to purchase and hold legacy loans, through the legacy loans program, or “L-PPIP,” but it was never implemented as a joint venture using TARP funds.

Making Home Affordable Program (MHA). MHA was launched to offer assistance to homeowners through a loss-sharing arrangement with mortgage investors and an incentive-based system for borrowers and servicers in order to prevent avoidable foreclosures. Under MHA, Treasury developed the Home Affordable Modification Program (HAMP) as its cornerstone effort to meet EESA’s goal of protecting home values and preserving homeownership by helping at-risk homeowners avoid potential foreclosure, primarily by reducing their monthly mortgage payments.

Targeted Investment Program (TIP). The stated purpose of TIP was to foster market stability and thereby strengthen the economy by making case-by-case investments in institutions that Treasury deemed critical to the functioning of the financial system. TIP was designed to prevent a loss of confidence in financial institutions that could (1) result in significant market disruptions, (2) threaten the financial strength of similarly situated financial institutions, (3) impair broader financial markets, and (4) undermine the overall economy.

The AIG Investment Program. Formerly the Systemically Significant Failing Institutions program, the goal of the AIG Investment Program was to provide stability in financial markets and avoid disruptions to the markets from the failure of a systemically significant institution. Treasury has purchased preferred shares and warrants in AIG and provided a facility for additional investment as needed up to a limit.

Asset Guarantee Program (AGP). AGP provided government assurances for certain assets held by financial institutions that are viewed as critical to the functioning of the nation’s financial system. The goal of AGP was to encourage investors to keep funds in the institutions. According to Treasury, placing guarantees, or assurances, against distressed or illiquid assets was viewed as another way to help stabilize the financial system.

Automotive Industry Financing Program (AIFP). The goal of AIFP was to help stabilize the American automotive industry and avoid disruptions that would pose systemic risk to the nation’s economy. Under this program, Treasury has authorized TARP funds to help support automakers, automotive suppliers, consumers, and automobile finance companies. A sizeable amount of funding has been to support the restructuring of Chrysler Group LLC (Chrysler) and General Motors Company (GM).

Taken together, the concerted actions by Treasury and others have been credited by many market observers with averting a more severe financial crisis, although there are critics who believe that markets would have recovered without government support. 
Particular programs have been reported to have had the desired effects, especially if stabilizing the financial system and restoring confidence was considered to be the principal goal of the intervention. In our October 2009 and February 2010 reports we noted that some of the anticipated effects on credit markets and the economy had materialized while some securitization markets had experienced a tentative recovery. Yet, experience with past financial crises, coupled with analysis of the specifics of the current situation, has led the Congressional Budget Office to predict a modest recovery that will not be robust enough to appreciably improve weak labor markets through 2011. Full recovery will likely take some time given years of excesses, including imprudent use of leverage at financial institutions, overvalued asset prices, and major imbalances in the fiscal and household sectors. Negative shocks like the recent turmoil in international capital markets stemming from European sovereign debt issues have the potential to delay the recovery as well. Because markets have stabilized, private markets have reopened, and economic growth has resumed, the federal government has begun to move into the exit phase of its financial stabilization initiatives. The winding down of government support is made more pressing by the need to exit market distorting interventions as quickly as possible and to begin shifting focus from the financial crisis to stabilizing the government debt-to-gross domestic product ratio. Crisis-driven interventions are designed to be temporary because they distort the normal functioning of markets and involve public capital when, under normal conditions, private capital is more desirable. Moreover, as we have pointed out in previous reports, the U.S. government faces an unsustainable long-term fiscal path. 
While these fiscal imbalances predate the financial crisis, the government’s response to the crisis has exacerbated an already challenging fiscal environment. As a result, even as some programs have ramped up to address specific issues, many others have either expired or are already winding down—including those utilizing TARP funds (see app. II). Many programs were designed to wind down naturally, force financial institutions to raise private capital, or become unattractive to participants once markets recovered. Treasury’s authority under EESA to purchase, commit to purchase, or commit to guarantee troubled assets was set to expire on December 31, 2009, unless the Secretary submitted a written certification to Congress extending these authorities. In anticipation of the upcoming decisions on the future of TARP, the need to unwind the extraordinary federal support across the board, and the fragile state of the economy, we made recommendations to Treasury in our October 2009 report. Specifically, we suggested that any decision to extend TARP be made in coordination with relevant policymakers. We also suggested that Treasury make use of quantitative analysis wherever possible to support the rationale and communicate its determinations to Congress and the American people. We noted that without a robust analytic framework, Treasury may be challenged in effectively carrying out the next stages of its programs. Treasury responded that in deciding whether to extend TARP authority beyond December 31, 2009, the Secretary would “coordinate with appropriate officials to ensure that the determination is considered in a broad market context that takes account of relevant objectives, costs, and measures” and would communicate the rationale for the decision. On December 9, 2009, the Secretary announced that he was extending Treasury’s authority under EESA to purchase, commit to purchase, or commit to guarantee troubled assets until October 3, 2010 (TARP expiration date). 
After the expiration date, no TARP funds can be committed, but there may be expenditures to fund commitments entered into prior to the expiration date. The extension of TARP permits Treasury to reallocate existing commitments and make additional funds available for some programs. As is shown in table 1, according to Treasury, new commitments through October 3, 2010, will be limited to MHA and small business lending programs through CBLI. The funds allocated to MHA have not been increased beyond the initial $50 billion Treasury estimated would be committed under the TARP-funded program. At the time of the decision to extend, Treasury had committed $40 billion under existing MHA programs; however, according to Treasury, it had always contemplated additional MHA programs, such as programs to address negative equity. Treasury indicated that the extension of TARP gave it more time and flexibility to build out those programs as well as more time to decide how best to allocate the remaining $10 billion in order to prevent avoidable foreclosures. All other programs, including TIP, have closed or will close by June 30, 2010, and no additional funds will be committed under those programs. However, additional expenditures, which have already been apportioned and accounted for, could occur after the TARP termination date for TALF, PPIP, and the AIG Investment Program to fund commitments made prior to December 2009, and investments acquired through a variety of TARP actions remain under Treasury’s management. Nevertheless, the extension has formally moved TARP from a program with a heavy focus on capitalizing institutions and stabilizing securitization markets to one focused primarily on mitigating preventable foreclosures and improving financial conditions for small banks and small businesses. Treasury estimates that new commitments under MHA and CBLI could increase the costs of TARP by $25 billion. 
Even with these additional costs, Treasury expects that TARP will ultimately cost taxpayers $105.4 billion, more than $200 billion less than initially estimated. The Secretary also notified Congress that Treasury expected to use no more than $550 billion of the approximately $700 billion authorized by EESA but reserved the authority to use the remaining funds to respond “to an immediate and substantial” threat to the economy “stemming from financial instability.” In the absence of such threats, Treasury indicated that those resources would be used to pay down the federal debt over time. In his letter to Congress communicating the decision, the Secretary also expressed a desire to expedite both the liquidation of the equity investments and the repayment of funds extended to TARP recipients. As of June 7, 2010, total TARP repayments were roughly $195 billion. Pending legislation, if enacted, would require the Secretary to use any amounts repaid by financial institutions for debt reduction. The decision to extend TARP followed months of deliberation and internal discussions that began in August 2009. Treasury officials told us that while the decision to extend TARP could have been made earlier, it was not made until December to be certain that extension was necessary and so that the Secretary would be able to consider what conditions to place on the extension to balance the need to minimize the cost to taxpayers while ensuring that the program met its core objectives. According to Treasury officials, this decision was made at the highest levels within the agency. Discussions centered on how to phase out TARP and other government programs adopted in response to the financial crisis generally, as well as what limits to place on an extension, and what programs would not need to be continued beyond the original expiration date of December 31, 2009. 
Treasury officials indicated this discussion generally did not take place at the program level, but included a range of officials from various Treasury offices. Internal memos and briefing documents suggest considerable deliberation took place on the effectiveness of existing government actions as well as the likely effectiveness of potential policy options to address remaining threats to financial stability. Other programs operated by Treasury and other government agencies were important parts of these deliberations. According to Treasury, the modest pace of the economic recovery and concern about exiting TARP prematurely meant that the likelihood of not extending was low, but programs that were no longer needed were to be terminated. In addition, Treasury believed that while the decision could have been made at an earlier date, officials decided it was better to wait until closer to the certification deadline in order to have a more targeted response. Treasury also considered not extending TARP and instead making up front commitments to problem areas based on available information, but ultimately decided that the additional flexibility and better information that would come from the extension would be preferable. As part of a robust analytic framework for decision making, we recommended that the Secretary coordinate with the Federal Reserve and FDIC to help ensure that the decision to extend or terminate TARP was considered in a broader market context. Treasury officials said that the agency had external discussions and consultations in the months prior to the decision to help ensure that the decision-making process incorporated the actions of key financial regulators. Treasury officials also said that the Secretary had discussions with the Chairmen of the Federal Reserve and the FDIC regarding TARP and the status of crisis programs instituted at each respective agency. 
Treasury officials noted that EESA required additional coordination with the Federal Reserve because it required the Secretary to consult with the Chairman of the Federal Reserve in order to purchase financial instruments other than those related to residential and commercial real estate. This consultation, which included communication among principals and staff of the two agencies, is represented in several letters by the Chairman to the Secretary reflecting the required consultations prior to the initiation of several TARP programs unrelated to residential and commercial real estate. In addition, Federal Reserve officials stated that the Chairman and Vice Chairman of the Federal Reserve were broadly supportive of the decision to extend TARP. The officials said that the Chairman was consulted by the Secretary on multiple occasions. The Federal Reserve noted that there was consistent coordination at the staff level regarding the TALF program, primarily due to the joint nature of the program. Another forum for coordination around the decision to extend TARP was FinSOB. FinSOB meeting minutes detailed discussions of the decision to extend TARP and the general economic situation. While there was discussion of the decision, FinSOB did not, nor was it required to, authorize or approve the Secretary’s action. The Secretary also discussed the extension of TARP with the Chairman of FDIC. In particular, both agencies told us that they discussed the timing of FDIC’s exit from programs designed to support the banking system. According to Treasury officials, Treasury took into consideration the winding down of FDIC’s Temporary Liquidity Guarantee Program (TLGP), which was designed to support bank debt and transaction accounts, in deciding to extend TARP. At the time Treasury made the decision to extend TARP, TLGP was scheduled to end June 30, 2010. FDIC subsequently extended TLGP to December 31, 2010. 
As Treasury shifts into the exit phase of TARP, it faces upcoming decisions that would benefit from continued collaboration and communication with other agencies, including: decisions about allocating any additional funds to MHA and CBLI, decisions about scaling back various programs, and ongoing decisions related to the general exit strategy, including unwinding the equity investments held as a result of actions taken under TARP. Similar to the need for a coordinated course of action to stabilize the financial system and re-establish investor confidence, the general exit from the government interventions will require coordination to develop a unified disengagement strategy. As mentioned previously, TARP is one of many programs and activities the federal government has put in place over the past year to respond to the financial crisis (see also app. II). In general, the extent of coordination with the Federal Reserve was consistent with our recommendation and represented the type of collaboration necessary for the next stage of the government response to the crisis. However, the extent of Treasury’s coordination with FDIC, while sufficient for the decision to extend TARP, should be enhanced and formalized for any upcoming decisions that would benefit from interagency collaboration. FinSOB, which was established to help oversee TARP and other emergency authorities and facilities granted to the Secretary under EESA, is composed of the Secretary, the Chairman of the Board of Governors of the Federal Reserve, the Director of FHFA, the Chairman of the Securities and Exchange Commission, and the Secretary of HUD. Therefore, many of the regulators who led the federal response to the financial crisis are already part of a collaborative body. As a result, FinSOB has been a vehicle for formal consultations over TARP decisions among the agencies that are represented on FinSOB under EESA. 
By adding future program decisions to the agenda, including decisions on future TARP commitments, FinSOB can continue to serve a role in the next phase of the TARP program as well as in the consideration of exit strategies. Because FinSOB membership is set by statute, Treasury should seek to conduct similar consultations with other agencies that are not represented on FinSOB, such as the FDIC, or these agencies could be invited occasionally to discuss specific issues. Treasury considered a number of qualitative and quantitative factors for key decisions associated with the TARP extension. Important factors considered for the extension of TARP centered on ongoing weaknesses in key areas of the economy. Treasury officials noted that housing market indicators, despite previously announced initiatives, and financial conditions for small businesses necessitated further commitments under MHA and small business lending programs. Treasury underscored that while analysis was possible on the need for or success of individual programs, the fragile state of the economy and remaining downside risks were an ongoing source of uncertainty. Considering this uncertainty, Treasury wanted to extend TARP through October 2010 in order to retain resources to respond to financial instability. On the other hand, Treasury noted that some programs had accomplished their goals and would be terminated. Treasury cited renewed ability of banks to access capital markets, improvements in securitization markets, and stabilization of certain legacy asset prices as motivating the closing of bank capital programs, TALF, and PPIP, respectively. Treasury could strengthen its analytical framework by identifying clear objectives for small business programs and explaining how relevant indicators motivated TARP program decisions. 
Treasury officials identified four documents that were central to its efforts to describe and communicate to Congress and the public the framework it used to make decisions related to the extension of TARP, the expansion of some efforts, and the termination of others. Those four documents were (1) the September 2009 report “The Next Phase of Government Financial Stabilization and Rehabilitation Policies”; (2) the December 9, 2009, letter to Congressional leadership certifying the extension of TARP; (3) Secretary Geithner’s December 10, 2009, testimony to the Congressional Oversight Panel; and (4) the “Management Discussion and Analysis” portion of the fiscal year 2009 Office of Financial Stability Agency Financial Report. Based on our analysis of these documents and interviews with Treasury officials, table 2 summarizes the key factors that contributed to Treasury’s program-level decisions associated with the extension of TARP. In addition, we note a number of quantitative indicators identified by Treasury that to some extent measure the key factors that influenced the decisions. We elaborate on the nature of these decisions and the indicators below. AGP, TIP, AIFP, and the AIG Investment Program amounted to exceptional assistance to key institutions on a case-by-case basis, and therefore, the expectation was that these targeted programs would be exited as soon as practical and would not be considered for additional commitments. Housing. Rather than allow the program to expire with $10 billion of the original $50 billion allocated to MHA remaining uncommitted, Treasury extended the program so that those funds could be used to address continued weaknesses in housing markets and roll out several additional programs that Treasury had not yet had the opportunity to design and implement. Treasury officials noted that various metrics they were monitoring indicated that the recovery had not successfully reached particular areas of the economy (see table 3). 
Specifically, housing market indicators, such as foreclosures and mortgage delinquencies, remained elevated around the time the decision to extend TARP was made, despite initiatives—like MHA—that were designed to preserve homeownership by directly modifying mortgages for qualified homeowners. The percentage of loans in foreclosure (foreclosure inventory) reached 4.58 percent at the end of the fourth quarter of 2009 and continued to increase to an unprecedented high of 4.63 percent in the first quarter of 2010 (see fig. 1). Over the same period the serious delinquency rate—defined as the percentage of mortgages 90 days or more past due plus those in foreclosure—fell only slightly from 9.67 to 9.54 percent. Although not shown, the serious delinquency rate for subprime loans exceeded 30 percent in the most recent two quarters, indicating the large proportion of subprime loans in trouble. Foreclosure starts, which reflect new foreclosure filings, peaked at 1.42 percent in the third quarter of 2009 before declining over the next two periods to roughly 1.2 percent. By any measure, however, foreclosure and delinquency statistics for housing remain well above their historical averages. Moreover, although not explicitly mentioned by Treasury, a comparison of trends in delinquent mortgages and new foreclosure starts indicates that more foreclosures are looming. While the foreclosure start rate grew 36 percent from the last quarter of 2007 to the last quarter of 2009, the rate for delinquencies of 90 days or more grew by 222 percent over the same period (see fig. 1). This suggests mortgages are not rolling from delinquency to foreclosure as expected and that lenders are not initiating foreclosures on many loans normally subject to such actions. To the extent that foreclosure mitigation programs are ineffective, or a large number of the trial modifications represent unavoidable foreclosures, the resulting foreclosures will continue to weigh on the housing market. 
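The growth-rate comparison above can be sketched as a quick calculation. Because the report gives only the resulting growth percentages (36 percent and 222 percent), the fourth-quarter 2007 starting rates below are illustrative assumptions chosen to reproduce those percentages, not figures from the report:

```python
# Minimal sketch of the foreclosure-pipeline comparison described above.
# The Q4 2007 starting rates are assumed, illustrative values; only the
# resulting growth percentages (36% and 222%) appear in the report.

def pct_growth(start: float, end: float) -> float:
    """Percentage growth from a starting rate to an ending rate."""
    return (end - start) / start * 100

# (Q4 2007, Q4 2009) rates as a percent of all loans -- hypothetical inputs
foreclosure_starts = (0.88, 1.20)   # assumed start; ~36% growth results
delinquent_90_plus = (1.58, 5.09)   # assumed start; ~222% growth results

print(round(pct_growth(*foreclosure_starts)))   # -> 36
print(round(pct_growth(*delinquent_90_plus)))   # -> 222
```

Under these assumed inputs, the far faster growth in 90-day delinquencies than in foreclosure starts illustrates the observation that many seriously delinquent loans had not yet entered the foreclosure process.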
Treasury also noted that extending TARP provides the flexibility to modify MHA to respond to the changing dynamics of the foreclosure crisis. Treasury noted early in the crisis that many foreclosures were the result of subprime, predatory, and fraudulent lending activity; however, as the financial crisis progressed, Treasury has modified and expanded its efforts because unemployment and negative equity have become the primary drivers of foreclosures, calling for a different approach to homeownership preservation. Treasury has modified MHA to deal with these issues by allowing more borrowers to qualify for modification—including borrowers with Federal Housing Administration (FHA) loans, borrowers who are currently in bankruptcy proceedings, and borrowers who owe more than the current value of their home. Treasury also plans to increase the incentives provided to servicers for writing down mortgage debt and has included incentives for writing down second liens. Treasury is also implementing programs in addition to existing MHA programs that will address these issues, such as the HFA Hardest-Hit fund and a refinance program with FHA, and expects to use the full $50 billion for all these combined efforts. Treasury officials acknowledged that the consequences of interventions may prevent the housing market from fully correcting and may also increase moral hazard by writing down mortgages for borrowers with negative equity. However, Treasury officials and others have identified reducing the number of unnecessary foreclosures as critical to the economic recovery. Because not all homeowners are expected to qualify for a HAMP modification or other mortgage relief programs under MHA, enhancements to the program are to include relocation assistance to some borrowers who use foreclosure alternatives such as a short sale or a deed-in-lieu of foreclosure. 
In addition to continued weakness in the housing markets and the need for flexibility, Treasury noted that when the decision to extend the program was made, HAMP had only recently been implemented and needed time to ramp up to its full potential and build out all program components. In our July 2009 report and March 2010 testimony on HAMP, we noted that the program faced implementation challenges and that Treasury’s projection that three to four million borrowers could be helped by offering loan modifications was based on several uncertain assumptions and might be overly optimistic. Treasury cited the slow pace of conversions of homeowners from trial modifications to permanent modifications as an important reason to extend its ability to have funds available for commitments related to foreclosure mitigation and housing market stabilization. Permanent modifications continued to lag far behind total trial modifications, reflecting this initial slow pace (see fig. 2). In October 2009, permanent modifications started totaled an estimated 2 percent of the total cumulative government-sponsored enterprise (GSE) and non-GSE HAMP trials started, before increasing to just 4 percent and 7 percent for November and December 2009, respectively. Treasury believed that the extension would allow the program the necessary time to reach its full potential by providing more time to complete the significant backlog of modifications, giving servicers the opportunity to build up their capacity, and allowing the public and investors time to better understand the requirements and opportunity presented by the HAMP process. The latest trial-to-permanent modification conversion rate has now reached an estimated 28 percent of total cumulative HAMP trials (see fig. 2). It should be noted that there is a 3-month wait time during the trial period. 
Therefore, contemporaneous comparison of trial versus permanent modifications is not the most meaningful, since trials entered into within the last 3 months are not yet eligible for conversion to permanent status. Our June 2010 report on Treasury’s implementation of HAMP is an update of our prior July 2009 report and March 2010 testimony findings. Specifically, it addressed (1) the extent to which HAMP servicers have treated borrowers consistently and (2) the actions that Treasury has taken to address certain challenges, including the conversion of trial modifications, negative equity, redefaults, and program stability. While one of Treasury’s stated goals for HAMP was to standardize the loan modification process across the servicing industry, we found inconsistencies in how servicers were treating borrowers under HAMP that could lead to inequitable treatment. Specifically, the servicers we contacted varied in the timing of HAMP outreach to delinquent borrowers, the criteria used to determine if borrowers were in imminent danger of default, and the tracking of borrower complaints about servicers’ implementation of HAMP. Additionally, we found that while Treasury had taken some steps to address the challenges we had previously reported on, it urgently needed to finalize and implement remaining program components and ensure the transparency and accountability of these efforts. In particular, we reported that Treasury had been slow to implement previously announced programs it identified as needed to address the housing problems hindering the current economic recovery, including its second-lien modification and foreclosure alternatives programs. We noted that Treasury recently announced additional HAMP components to help deal with the high number of foreclosures, such as programs to help borrowers with high levels of negative equity and unemployed borrowers, which needed to be prudently designed and implemented as expeditiously as possible. 
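The 3-month trial-period lag noted above suggests a simple adjustment when computing conversion rates: exclude trials too recent to have converted. The following is a hypothetical sketch of that adjustment; the monthly counts and dates are invented for illustration and are not HAMP data:

```python
# Hypothetical sketch of the lag adjustment: trial modifications
# started within the last 3 months cannot yet have converted, so they
# are excluded from the denominator. All counts and dates are invented.
from datetime import date

def months_between(earlier, later):
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def conversion_rate(trials_by_month, permanents, as_of, lag_months=3):
    """Permanent modifications as a share of trials old enough to convert."""
    eligible = sum(count for month, count in trials_by_month.items()
                   if months_between(month, as_of) >= lag_months)
    return permanents / eligible if eligible else 0.0

trials = {
    date(2009, 9, 1): 100_000,
    date(2009, 12, 1): 150_000,
    date(2010, 3, 1): 200_000,  # too recent to convert as of May 2010
}
rate = conversion_rate(trials, permanents=70_000, as_of=date(2010, 5, 1))
print(round(rate, 2))  # 0.28: 70,000 permanents over 250,000 eligible trials
```

Dividing by all cumulative trials instead (500,000 here) would understate the rate at 14 percent, which is why a contemporaneous comparison misleads.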
Going forward, as Treasury continues to design and implement new HAMP-funded programs, we reported that it will be important that Treasury develop sufficient capacity—including staffing resources—to plan and implement programs, establish meaningful performance measures, and make appropriate risk assessments. Treasury indicated that it plans to track performance measures of the number of HAMP modifications (trial and permanent) entered into, the redefault rate, and the change in average borrower payments to evaluate the program going forward. However, foreclosure and delinquency data used to motivate the decision to allocate the full budgeted resources to MHA and other housing programs, although also influenced by general market forces such as falling housing prices and unemployment, should provide an indication of the effectiveness of these efforts. Small business lending. Treasury decided to allocate new resources to small business lending based on the contraction in bank lending and other indicators of small business credit conditions. However, Treasury has yet to set explicit objectives for its small business lending programs. Treasury wants to support lending to creditworthy small businesses by providing capital to small banks. A drop in the volume of lending could be explained by some combination of reduced demand for loans, higher credit standards, and banks’ lack of capital to make new loans. Demand for business loans, including small business loans, has dropped considerably since 2008, and credit standards have risen, according to Federal Reserve data. At the time of the extension, Treasury set aside $30 billion for programs to support small business lending. Since that time, Treasury has decided to try to create a Small Business Lending Fund through legislation outside of TARP, due to concerns that many banks would not participate in a TARP program. 
In addition, Treasury expects to make up to $1 billion in new capital investments in community development financial institutions (CDFI) and purchase up to $1 billion in Small Business Administration loan securitizations, to improve access to credit for small businesses. Relative to larger corporations, small businesses generally have difficulty directly accessing capital markets as an alternative source of financing and are therefore largely reliant on bank lending. While Treasury has stated that bank lending has contracted, Treasury refers to data on outstanding bank loans (loan balances) of all sizes that reflect a number of economic conditions that may not be related to new lending and may not capture potentially divergent conditions for large and small firms. We found in previous work that changes in loan balances may not be a good proxy for new lending. In particular, while outstanding commercial and industrial loans and commercial real estate loans have fallen, losses on a loan portfolio and loan repayments may help explain this drop. For firms of all sizes, lack of comprehensive data on new lending makes assessing business credit conditions particularly difficult. For example, interest rates, on their own, may not be a good indicator of the availability of credit. Specifically, financial institutions may ration credit based on the quality of the borrower rather than continue lending while charging a wider range of interest rates to customers of varying credit quality. As a result, the volume of new lending (loan originations) would be a valuable indicator of credit availability; however, only limited data on loan originations exist. For example, origination data exist only for certain kinds of loans (e.g., mortgages) or only for a small subset of banks (e.g., the largest CPP participants). Moreover, there are no consistent historical data on lending to small businesses. 
Treasury officials and others have acknowledged the limitations of data in this area, which, officials noted, make it difficult to determine when enough has been done. While the availability of small business credit is difficult to quantify definitively, Treasury officials noted that a number of indicators of small business lending point to reduced access to credit. Officials identified the Federal Reserve’s Senior Loan Officer Opinion Survey (SLOOS) and the National Federation of Independent Business (NFIB) survey, among other sources. Taken together, these indicators, although imperfect, generally point to a tight credit environment for small firms. SLOOS surveys loan officers on, among other things, lending standards for commercial and industrial loans, and features responses by borrower size (small versus large and medium). The survey responses show significant tightening of lending standards for firms of all sizes, although conditions have tightened more in the last year for small firms than for larger firms. The NFIB Small Business Economic Trends survey contains a number of questions on access to credit. Respondents are NFIB members, with nearly half of all respondents from firms with five or fewer employees. A question on borrowing needs (“During the last three months, was your firm able to satisfy its borrowing needs?”) may be indicative of changes in access to credit for firms of this size. We compared responses to this question to interest rate spreads for loans of less than $1 million (a proxy for loans to small businesses) from the Federal Reserve’s Survey of Terms of Business Lending. These spreads are premiums over the federal funds rate and indicate the risk banks perceive in making small loans. We found that the percentage of respondents reporting that their borrowing needs had not been satisfied showed the same broad pattern as spreads for loans of less than $1 million (see fig. 3). 
In particular, both show a spike in recent years, with increases in risk premiums for small loans and the proportion of small businesses reporting that their borrowing needs had not been met. Because the economy was still fragile and downside risks remained, Treasury identified the need to retain resources to respond to threats to financial stability as an important consideration in deciding to extend TARP. According to Treasury officials, if the economic recovery were in jeopardy, the TARP extension gave Treasury the capability to react should financial markets need further assistance. Treasury noted several continued areas of weakness that supported the need to retain resources, without making them available for commitment under specific programs. Areas of weakness included the elevated pace of bank failures, high unemployment, and commercial real estate losses. Although banks in the United States had made progress in raising capital and recognizing losses on legacy assets and loans, substantial asset deterioration is expected across some loan classes, such as commercial real estate and consumer and corporate loans. Because banks will likely continue to take steps to reduce leverage, credit conditions are expected to remain tight while high unemployment continues to weigh on residential real estate markets and consumer spending. As indicated above, uncommitted funds up to the total amount authorized by EESA could be used to respond to financial instability or growing weakness that would threaten the recovery. As of June 7, 2010, this amount is roughly $163 billion and remains available for commitment, assuming repayments are not deployed in other efforts. Treasury noted that, among other reasons, it extended TARP to maintain the capacity to respond to unforeseen threats or unanticipated shocks. Federal Reserve officials similarly noted that unanticipated events, not foreshadowed by market data, have been the hallmark of the crisis. 
The failure, or near failure, of a systemically important financial institution would be a critical threat to financial stability. Treasury, FDIC and the Federal Reserve responded to the failure, or near failure, of large financial institutions during the crisis with programs to provide assistance, such as guarantees and capital, to keep institutions solvent, including AGP for Citigroup and AIG Financial Assistance. According to Federal Reserve officials, one of the reasons they supported the extension of TARP was the inadequacy of available statutory tools to deal with threats to financial stability, such as the failure of a large financial institution. One proposed tool is an authority for the orderly resolution of large, nonbank financial institutions. In previous work, we have noted that some interventions to support failing institutions can undermine market discipline and increase moral hazard. For example, in the presence of a government back-stop, firms anticipate government assistance in the future and thus have less incentive to properly manage risk. Regulatory reforms that enhance oversight and capital requirements at large financial institutions—in essence making it more costly to be a large financial institution—would help to counter some erosion of market discipline. Similarly, an effective resolution authority could impose losses on managers, shareholders, and some creditors, but must also properly balance the need to encourage market discipline with the need to maintain financial stability. Treasury officials noted the importance of having financial regulatory reform in place before TARP expires in October 2010. Bank capital programs. Treasury has ended broad programs, such as CPP and CAP, established to improve the solvency of financial institutions to support their ability to lend, based on banks’ renewed ability to access private capital markets and issue new equity. 
Treasury has stated that by building capital, CPP was expected to increase lending to U.S. businesses and consumers. Treasury has disbursed more than $200 billion for the CPP, and has received $142 billion in repayments as of May 28, 2010. CAP was designed to help ensure that certain large financial institutions had sufficient capital to withstand severe economic challenges. It was supported by SCAP, which assessed capital needs at the 19 largest bank holding companies in the United States. Banks that needed additional capital as a result of SCAP raised $80 billion from private sources, while GMAC received additional capital from Treasury under AIFP. No CAP investments were made as a result, and the program closed on November 9, 2009. Treasury has indicated that the renewed ability of banks to raise capital on private markets was a key measure of success for CPP and CAP and a key consideration in ending these programs. From 2000 to 2007, banks largely did not need to raise capital by issuing common equity, averaging only $1.3 billion per quarter. Banks and thrifts raised significant amounts of common equity in 2008, averaging $56 billion per quarter, before issuance dropped precipitously in the first quarter of 2009 to $200 million—a 99 percent drop from the previous quarter and a 63 percent drop from the year before. Banks and thrifts raised $63 billion in common equity in the second quarter of 2009, an increase of 28,000 percent from the previous quarter and 236 percent over the year before (see fig. 4). Banks’ renewed ability to raise capital on private markets reflects improvements in perceptions of the financial condition of banks. The 3-month TED spread—the premium of the London interbank offered rate (LIBOR) over the Treasury interest rate of comparable maturity—indicates the perceived risk of lending among banks. 
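The TED spread defined above is straightforward to compute. A minimal sketch follows; the rate quotes are hypothetical values chosen to land near the October 2008 peak cited in the text, not actual market quotes:

```python
# Minimal sketch of the TED spread calculation; the rate quotes below
# are hypothetical, not actual LIBOR or Treasury bill quotes.

def ted_spread_bps(libor_3m_pct, treasury_3m_pct):
    """TED spread: 3-month LIBOR minus the 3-month Treasury yield,
    in basis points (1 percentage point = 100 basis points)."""
    return (libor_3m_pct - treasury_3m_pct) * 100.0

# A hypothetical 4.82% LIBOR against a 0.32% T-bill yield gives a
# 450 basis point spread, the order of magnitude seen in October 2008.
print(round(ted_spread_bps(4.82, 0.32), 1))  # 450.0
```

A wide spread signals that banks demand a large premium over the risk-free rate to lend to one another, which is why the spread serves as a gauge of perceived interbank credit risk.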
The TED spread peaked at more than 450 basis points in October 2008 before falling to less than 15 basis points at the end of the third quarter of 2009 (see fig. 5). In previous work, we found that the decline in perceptions of risk in the interbank market could be attributed in part to several federal programs aimed at stabilizing markets that were announced on October 14, 2008, including CPP. Nevertheless, the associated improvement in the TED spread cannot be attributed solely to TARP because the announcement of CPP was a joint announcement that also introduced the Federal Reserve’s Commercial Paper Funding Facility program and FDIC’s TLGP. Financial stress re-emerged in the interbank market in May 2010, highlighting the fragile nature of the recovery in the financial system. The TED spread has increased moderately from a low of less than 10 basis points in March 2010 to more than 40 basis points as of mid-June 2010, as concerns about sovereign debt in the European Union have increased. U.S. banks’ exposure to credit risk in Europe and their sensitivity to the global economy have heightened risk premiums among banks lending to each other. While fluctuations in perceived risk in the banking system are natural and necessary if risk is to be priced and allocated efficiently, this re-emergence of risk offers some support for Treasury’s decision to retain resources to combat financial instability, especially in light of the limitations of the current financial regulatory system. The impact of CPP on lending is difficult to determine because data on loan originations are limited, and how much lending would have occurred in the absence of CPP is not known. We have noted in previous reports that some tension exists between the goals of improving banks’ capital positions and promoting lending—that is, the more capital banks use for lending, the less their overall capital positions will improve. 
Treasury collects data monthly on new lending from the largest participants in CPP, which included for a time as many as 22 institutions. As a result, more is known about recent loan originations by large banks than small banks. Ten institutions that repaid CPP in June 2009 stopped submitting data after November 2009. New lending by the largest CPP recipients was $244 billion in November 2009, up 2 percent from the prior month and 17 percent from the year before. However, lending in the third quarter of 2009 was down 12 percent from the second quarter (see fig. 6). Support to securitization markets through TALF. With underwriters finding increasing success in bringing issuances to the ABS market and decreasing their utilization of TALF, the Federal Reserve and Treasury decided not to extend TALF further. TALF expired on March 31, 2010, for loans backed by ABS and legacy CMBS, and is scheduled to terminate at the end of June 2010 for loans backed by newly issued CMBS. The program was designed to increase liquidity and reopen the asset-backed securitization markets in an effort to improve access to credit for consumers and small businesses after the decrease in issuances and the refusal of market participants to purchase potential offerings at rates that were acceptable to issuers. TALF-assisted issuances began in March 2009 after an initial announcement in late 2008. Officials from the Federal Reserve and Treasury highlighted that TALF was designed to attract investors when market conditions were stressful, but lose its appeal as conditions improved and spreads tightened to the point that rates on ABS bonds were lower than the cost of borrowing from the program. Federal Reserve and Treasury officials have also cited declining asset spreads in the ABS market as justification for not making new commitments under TALF (see fig. 7). 
While not at precrisis levels, spreads have tightened significantly from their peaks at the beginning of 2009. Considering the excesses during the recent credit expansion, the desirability of a return to precrisis levels in many areas of the securitization markets is debatable. However, for most TALF-eligible assets, spreads have tightened significantly. For instance, average auto ABS spreads peaked at more than 400 basis points over the benchmark in late 2009, but have since returned to less than 100 basis points over the benchmark in early 2010. Private student loan ABS, however, have maintained spreads above precrisis levels. According to Federal Reserve officials, this is partly due to the performance of the underlying student loans and because some of the securities were not structured well. Nevertheless, the contraction in spreads for most TALF-eligible ABS can be seen as normalization of the securitization markets as participants view new and existing issuances as less risky. Some of the decline in spreads and the perceptions of risk in recent securitizations may be attributable to the products themselves. Since the crisis, new securitizations have generally been structured with more credit protections through enhancements such as greater levels of subordination and overcollateralization. The Federal Reserve structured TALF to reduce the rate of utilization of the facility as the market returned to normalcy through relatively high pricing of TALF loans. As we noted in a previous GAO report, during 2009, returns generally decreased for select classes of TALF-eligible collateral between the first TALF operation in March 2009 and the latter part of the year, with limited exceptions. The report notes that as these returns generally became increasingly negative through the year, participants would have essentially locked in losses with certain issuances. 
To avoid this, many participants chose to forego TALF financing for these issuances and instead finance their own investments. ABS markets began to show signs of health as 2009 quarterly issuances were above their lows in 2008 and utilization of TALF began decreasing in mid-2009. ABS issuances experienced a significant decline in 2008, but stabilized in 2009 (see fig. 8). TALF issuance dollar volume peaked in the third quarter of 2009, but by the fourth quarter TALF volume decreased significantly and at a faster rate than the total decrease in ABS volume. Further, there has been one new CMBS issuance that utilized TALF financing, although the commercial real estate market continues to experience stresses and there has been little activity in the sector as a whole. Partly as a result of the continuing difficulties in this market, TALF loans backed by newly issued CMBS will be allowed through June 2010 even though the rest of the program closed at the end of March. Addressing “troubled” (legacy) securities through PPIP. Treasury reduced the amount available for commitment under PPIP, initially announced at up to $100 billion, based on improvements in the prices for certain legacy assets. Under the program, announced in March 2009, Treasury offered equity and debt financing to nine private fund managers; however, no further commitments to new funds are planned. The Legacy Securities Public-Private Investment Program (S-PPIP) is a program whereby Treasury and private sector fund managers and investors partnered to purchase eligible securities from banks, insurance companies, mutual funds, pension funds, and other sellers defined as eligible under EESA. Treasury indicated that this process was designed to allow financial institutions to repair their balance sheets by removing troubled assets and allow for renewed lending to households. Treasury participates by providing matching equity financing and debt financing up to 100 percent of the total equity of the fund. 
A related program, L-PPIP, was also announced at the same time by Treasury and FDIC but never operated as a TARP program. The program’s planned sale of legacy assets held by banks was suspended, however, in order to focus its use on the sale of receivership assets in bank failures. Treasury did not include PPIP in its plans for new commitments in 2010, but has tracked the performance of each individual fund since inception. Treasury stated that a recovery in asset prices in the RMBS and CMBS markets was one indicator that PPIP was effective and achieved its stated purpose. The return of market confidence can be seen in the general recovery or stabilization of asset prices. PPIP and the TARP programs to support bank capital were both intended to improve bank balance sheets. As we noted previously, banks have already been able to raise large amounts of private capital and perceptions of risk in the banking system have declined markedly since the onset of the crisis. PPIP and various other programs and initiatives may have to some extent addressed concerns about bank balance sheets. One indication of the reduction in perceptions of risk is the general recovery in prices of legacy securities, as seen in the pricing of Jumbo and Alt-A RMBS securities (see fig. 9). Highly-rated CMBS prices also confirm that parts of the ABS and MBS markets have stabilized since PPIP was announced. Specifically, highly-rated CMBS prices have rebounded from their lows in late 2008, and we note that average spreads have also tightened in the same time period (see fig. 10). This, however, does not reflect the continuing troubles in the broader commercial real estate market as delinquencies have continued to increase. Treasury could strengthen its analytical framework by identifying clear objectives for small business programs and explaining how relevant indicators motivated TARP program decisions. 
As noted above, Treasury identified four public documents that represented its rationale and decision-making process for the decision to extend TARP. Our understanding of Treasury’s decision-making process was also informed by reading FinSOB quarterly reports and through our interviews with Treasury and other officials. Treasury often directly or indirectly linked program decisions to a variety of quantitative indicators, including surveys, financial market prices and quantities, and measures of program utilization, among others. As discussed previously, all of these factors played an important role in the decision to extend TARP, expand some programs, and end others. As noted in our October 2009 report, indicators are an important step toward providing a credible foundation for TARP decision making. However, how the performance of an indicator affected a program decision, or if and when that indicator would signal that a program had or had not met its goals, was not always clear. Balancing the costs and benefits of TARP programs effectively will require making objectives explicit, assessing the impact of any commitments under TARP programs, and accounting for the fiscal and other costs of continuing to support markets. Again, a set of indicators, although imperfect, might inform the proper timing for winding down the remaining programs and liquidating investments. Treasury has yet to identify clear program objectives for small business lending, which raises questions about when Treasury will know that government assistance can be removed. Without a strong analytic framework that includes clear objectives and meaningful measures, Treasury will be challenged in determining whether the program is achieving its desired goals. Given the scale of TARP and importance of the government’s entry and exit from financial market interventions, decisions to allocate remaining resources should be subject to rigorous analysis. 
Because Treasury may decide to commit additional resources to problem areas before the expiration of TARP, or scale back commitments in others, it needs to be able to estimate the effect of program resources on meeting its objectives. Wherever possible, Treasury should use quantitative factors in its decision making, but we recognize that qualitative factors are also important. While HAMP continues to face implementation challenges, the small business initiatives are challenged by a lack of data needed to clarify the root of the problem, which may limit Treasury’s ability to effectively address it. For example, without data and analysis to determine the extent to which access to small business credit is being restricted by limited capital at institutions engaged in small business lending, Treasury will not have a sufficient basis to address the underlying issues that may be affecting small business lending. With a better understanding of the problem, Treasury can set clear, achievable goals to address it. The crisis and consequent interventions temporarily changed the U.S. financial system from one primarily reliant on markets and market discipline to one more reliant on government assistance and public capital. With the recovery underway, financial regulators in the United States have begun to shift focus from stabilizing the economy to exiting from crisis-driven interventions and transferring risk back into the hands of the private sector. Many TARP recipients have repaid loans and repurchased shares and warrants. A recent Federal Open Market Committee meeting focused on how the Federal Reserve should sell off assets acquired during the financial crisis. However, weaknesses in residential housing, commercial real estate, and labor markets, as well as risk from more global economic forces, limit the ability to withdraw rapidly and completely. 
For example, the Federal Reserve dollar liquidity swap lines were re-established with some central banks in response to the re-emergence of strains in short-term U.S. dollar funding markets as a result of European debt and currency issues. While the Secretary, in consultation with the Federal Reserve and FDIC, elected to extend TARP to address perceived weaknesses in the economy and respond to unanticipated shocks, Treasury still faces remaining decisions about allocating any additional funds to MHA and CBLI before its ability to take actions authorized by EESA expires on October 3, 2010. Moreover, ongoing decisions will need to be made related to the general exit strategy, including unwinding the equity investments and scaling back commitments in an environment where (1) other regulators are unwinding their programs, (2) the economy is still coping with the legacy of the crisis, (3) market distortion and moral hazard concerns are pressing, and (4) the long-term fiscal challenges facing the United States have become more urgent. While the level of consultation with the Federal Reserve was generally robust, broad coordination could be enhanced and formalized for future decisions. Similarly, decisions to allocate remaining resources and the timing of exits should be subject to rigorous analysis. By strengthening its framework for decision making, Treasury can better ensure that competing priorities are properly weighed and the next phase of the program is effectively executed. Although the economy is still fragile, a key priority will be to develop, coordinate, and communicate exit strategies to unwind the remaining programs and investments resulting from the extraordinary crisis-driven interventions. 
Because TARP will be unwinding concurrently with other important interventions by federal regulators, decisions about the sequencing of the exits from various federal programs will require bringing a larger body of regulators to the table to plan and sequence the continued unwinding of federal support. Similar to the need for a coordinated course of action to stabilize the financial system and re-establish investor confidence, the general exit from the government interventions will require careful coordination to avoid upsetting the recovery and help ensure the proper sequencing of the exits. Beyond the immediate costs of financial crises, these episodes can have longer-term consequences for fiscal balances and government debt, especially if the policy responses exacerbate the situation or lack coherence and effectiveness, or if the exit strategy undermines the recovery because it occurs too soon or too late. Moreover, as we discussed earlier in this report, the financial crisis and response have contributed to an already challenging fiscal legacy. As a result, the administration and Congress will need to apply the same level of intensity to the nation’s long-term fiscal challenge as they have to the recent economic and financial market issues. Coherent and effectively carried out exit strategies are the first step in beginning to address these challenges. We are making two recommendations to the Secretary of the Treasury: 1. To effectively conduct a coordinated exit from TARP and other government financial assistance, we recommend that the Secretary of the Treasury formalize and document coordination with the Chairman of the FDIC for decisions associated with the expiration of TARP (1) by including the Chairman at relevant FinSOB meetings, (2) through formal bilateral meetings, or (3) by utilizing other forums that accommodate more structured dialogue. 2. 
To improve the transparency and analytical basis for program decisions made before TARP’s expiration, we recommend that the Secretary of the Treasury publicly identify clear program objectives, the expected impact of programs, and the level of additional resources needed to meet those objectives. In particular, Treasury should set quantitative program objectives for its small business lending programs and identify any additional data needed to make program decisions. We provided a draft of this report to Treasury for its review and comment. We also provided the draft report to the Federal Reserve and FDIC for their review. Treasury provided written comments that we have reprinted in appendix III. Treasury, the Federal Reserve, and the FDIC also provided technical comments that have been incorporated as appropriate. In its comments, Treasury generally agreed with our recommendations and noted that it would continue to consult extensively with the Federal Reserve and FDIC. Treasury agreed that publicly identifying clear program objectives was important and pledged to continue its efforts to do so. In commenting, the Federal Reserve questioned the use of FinSOB as a coordination mechanism for the next phase of the TARP program. We have amended our recommendation to clarify that we are not advocating an expansion of FinSOB membership or any other change to its structure or purpose. We continue to believe FinSOB is a potential forum for more formal interaction between agencies by including nonmembers at relevant meetings, not by expanding membership. Moreover, leveraging FinSOB is just one option for formalizing and documenting coordination between Treasury and FDIC. Bilateral meetings or using other forums that accommodate structured dialogue would be consistent with our recommendation. 
We are sending copies of this report to the Congressional Oversight Panel, Financial Stability Oversight Board, Special Inspector General for TARP, interested congressional committees and members, Treasury, the federal banking regulators, and others. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Richard J. Hillman at (202) 512-8678 or hillmanr@gao.gov; Thomas J. McCool at (202) 512-2642 or mccoolt@gao.gov; or Orice Williams Brown at (202) 512-8678 or williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The objectives of this report are to determine (1) the process the Department of the Treasury (Treasury) used to decide to extend the Troubled Asset Relief Program (TARP) and the extent of coordination with relevant agencies and (2) the analytical framework and quantitative indicators Treasury used to decide to extend TARP. To determine the process Treasury used to decide to extend TARP and the extent of coordination with relevant agencies, we interviewed officials from Treasury and the Board of Governors of the Federal Reserve System (Federal Reserve), and received official responses to our questions from the Federal Deposit Insurance Corporation (FDIC). In addition, we reviewed Treasury documents and analyses, Financial Stability Oversight Board (FinSOB) reports, and previous GAO reports. 
In particular, we reviewed four public documents Treasury identified as central to its efforts to describe and communicate the framework it used to make decisions related to the extension of TARP to Congress and the public: (1) the September 2009 report “The Next Phase of Government Financial Stabilization and Rehabilitation Policies”; (2) the December 9, 2009, letter to Congressional leadership certifying the extension of TARP; (3) Secretary Geithner’s December 10 testimony to the Congressional Oversight Panel; and (4) the “Management Discussion and Analysis” portion of the fiscal year 2009 Office of Financial Stability Agency Financial Report. To determine the analytical framework and quantitative indicators Treasury used to decide to extend TARP, we similarly interviewed officials from Treasury and the Federal Reserve and received official responses to our questions from FDIC. We also reviewed Treasury documents and analyses, FinSOB reports, and previous GAO reports. Based on the four key documents that Treasury identified and interviews with Treasury officials, we determined the key factors that motivated Treasury’s program-specific decisions associated with the extension of TARP and quantitative indicators that to some extent captured those factors. We furthermore analyzed data from Thomson Reuters, Treasury, the Federal Reserve, the National Federation of Independent Business, SNL Financial, and a broker-dealer to assess the state of the economy and financial markets. These data may also be suggestive of the performance and effectiveness of TARP. We believe that these data, considered as a whole, are sufficiently reliable for the purpose of summarizing TARP activity and Treasury’s decision-making process, and presenting and analyzing trends in the economy and financial markets. 
We identified some limitations of the data on credit conditions for small businesses, including the fact that the National Federation of Independent Business survey over-represents certain industries, and therefore may not represent the credit experiences of all small firms. Moreover, there are no consistent historical data on lending to small businesses. In addition, the data from Treasury’s survey of lending by the largest Capital Purchase Program (CPP) recipients (as of November 30, 2009, the last month in which all of the largest CPP recipients participated) are based on internal reporting from participating institutions, and the definitions of loan categories may vary across banks. Because these data are unique, we are not able to benchmark the origination levels against historical lending or seasonal patterns at the institutions. We conducted our audit from March 2010 through June 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The financial crisis prompted an extraordinary response from financial regulators in the United States. As table 3 shows, the crisis-driven interventions—both within and outside of TARP—can be roughly categorized into programs that (1) provided capital directly to financial institutions, (2) enhanced financial institutions’ access to liquid assets through collateralized lending or other credit facilities, (3) purchased nonperforming or illiquid assets, (4) guaranteed liabilities, (5) intervened in specific financial markets, and (6) mitigated home foreclosures. 
Some programs involved exceptional assistance to particular institutions, such as American International Group (AIG), because of their systemic importance, or supported particular markets, while others involved assistance to individuals through refinance or loan modification programs. Table 3 does not include interventions or programs that existed prior to the financial crisis, such as the Federal Reserve’s loan program through the discount window or FDIC receivership of failed banks, or interventions that did not expose the intervening bodies to risks or involve federal outlays, such as the Securities and Exchange Commission’s temporary ban on short selling in financial stocks. In addition to the contacts named above, Lawrance Evans Jr. (lead Assistant Director), Benjamin Bolitzer, Timothy Carr, Emily Chalmers, William Chatlos, Rachel DeMarcus, Michael Hoffman, Steven Koons, Matthew Keeler, Robert Lee, Matt McDonald, Sarah McGrath, Harry Medina, Marc Molino, Joseph O’Neill, Jose Oyola, Rhiannon Patterson, Omyra Ramsingh, Matt Scire, Karen Tremba, and Winnie Tsen have made significant contributions to this report.
The Department of the Treasury's (Treasury) authority to purchase, commit to purchase, or commit to guarantee troubled assets was set to expire on December 31, 2009. This important authority has allowed Treasury to undertake a number of programs to help stabilize the financial system. In December 2009, the Secretary of the Treasury extended the authority to October 3, 2010. In our October 2009 report on the Troubled Asset Relief Program (TARP), GAO suggested as part of a framework for decision making that Treasury should coordinate with relevant federal agencies, communicate with Congress and the public, and link the decisions related to the next phase of the TARP program to quantitative analysis. This report discusses (1) the process Treasury used to decide to extend TARP and the extent of coordination with relevant agencies and (2) the analytical framework and quantitative indicators Treasury used to decide to extend TARP. To meet the report objectives, GAO reviewed key documents related to the decision to extend TARP, interviewed agency officials, and analyzed financial data. The extension of TARP involved winding down programs while extending others, transforming the program to one focused primarily on preserving homeownership, and improving financial conditions for small banks and businesses. While the extension of TARP was solely Treasury's decision, it was taken after significant deliberation and involved interagency coordination. Although sufficient for the decision to extend, the extent of coordination could be enhanced and formalized for any upcoming decisions that would benefit from interagency collaboration, especially with the Federal Deposit Insurance Corporation (FDIC). Treasury considered a number of qualitative and quantitative factors for key decisions associated with the TARP extension. Important factors considered for the extension of new commitments centered on ongoing weaknesses in key areas of the economy. 
Treasury underscored that while analysis of the needs or success of individual programs was possible, the fragile state of the economy and the remaining downside risks were difficult to gauge with certainty. Considering this uncertainty, Treasury wanted to extend TARP through October 2010 to retain resources to respond to financial instability. Going forward, Treasury could strengthen its current analytical framework by identifying clear objectives for small business programs and providing explicit linkages between TARP program decisions and the quantitative analysis or indicators used to motivate those decisions. GAO recommends that the Secretary of the Treasury (1) formalize coordination with FDIC for future TARP decisions and (2) improve the transparency and analytical basis for TARP program decisions. Treasury generally agreed with our recommendations.
This chapter first provides background information on the National Nutrition Monitoring and Related Research Program. Then, the objectives, scope, and methodology of our review of current and potential approaches to achieving a model program are described. The chapter concludes with an overview of the organization of the rest of the report. The NNMRRP is a complex system of data collection and research activities, including national surveys, state surveillance activities, and a variety of research programs. Over time, the NNMRRP has developed activities focused on five content areas: (1) food and nutrient consumption; (2) nutritional and health status; (3) dietary knowledge, attitudes, and behavior; (4) food composition; and (5) food supply. As shown in table 1.1, the information produced by these activities is used for a variety of purposes, from supporting basic research on human nutritional needs to informing policy decisions about health, agriculture, and food programs. Table 1.2 lists specific activities managed by the Departments of Agriculture and Health and Human Services, which have major responsibilities for the five areas. Other agencies, including Commerce, Defense, and the Environmental Protection Agency, also participate in the NNMRRP. USDA has major responsibilities for collecting information in all of the content areas except nutritional and health status. USDA gathers data on food and nutrient consumption through two national surveys—the Nationwide Food Consumption Survey (NFCS) and the Continuing Survey of Food Intakes by Individuals (CSFII). In the past, NFCS gathered nationally representative information on the food consumption behavior of households and individuals. It provides detailed data on household costs for food. One of the major uses for these data is the development of the Thrifty Food Plan, which is the basis for calculating food stamp benefits. 
Implemented decennially, NFCS suffered from severe response rate problems (less than 40 percent) in 1987-88. As a result, the individual food consumption portion of the NFCS is expected to be dropped in the future. Since the mid-1980s, CSFII has supplemented NFCS by providing regular information on individual dietary intake. The data collected in its most recent implementation (1994-96) will be used to describe both general and low-income populations. The Diet and Health Knowledge Survey (DHKS), which collects data on dietary knowledge, attitudes, and behavior, is a follow-up to CSFII. Together, CSFII and DHKS are intended to inform policies relating to food production and marketing, food safety, food assistance, and nutrition education. The USDA activities focused on food composition and food supply are not surveys. For food composition, USDA gathers data from food industries and other sources on the nutrient content of foods. These data support the dietary surveys by translating the foods consumed into their nutrient components. Food supply is estimated by deducting data on exports, year-end inventories, and nonfood use from data on production, imports, and beginning inventories. HHS is responsible for the National Health and Nutrition Examination Survey (NHANES) and the state-based surveillance systems. These data collection activities provide information on the content areas (1) food and nutrient consumption, (2) nutritional and health status, and (3) dietary knowledge, attitudes, and behavior. Like NFCS and CSFII, NHANES collects data from a nationally representative sample. However, NHANES’ unique contribution is its use of physical examinations and clinical and laboratory tests as well as traditional survey methods to gather information. NHANES’ data support research on the relationship between diet and health and inform health policy decisions, such as the promotion of cholesterol screening. 
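The food supply estimate described above is simple balance-sheet arithmetic: total supply (production, imports, and beginning inventories) less the deductions (exports, year-end inventories, and nonfood use). A minimal sketch, with entirely invented figures for a single hypothetical commodity:

```python
def food_supply(production, imports, beginning_inventories,
                exports, ending_inventories, nonfood_use):
    """Estimate food available for domestic consumption by deducting
    exports, year-end inventories, and nonfood use from total supply
    (production + imports + beginning inventories)."""
    total_supply = production + imports + beginning_inventories
    deductions = exports + ending_inventories + nonfood_use
    return total_supply - deductions

# Invented figures, e.g., millions of pounds of one commodity:
available = food_supply(production=500, imports=120, beginning_inventories=80,
                        exports=90, ending_inventories=70, nonfood_use=40)
print(available)  # (500 + 120 + 80) - (90 + 70 + 40) = 500
```

The function name and figures are illustrative only; actual USDA estimates involve commodity-specific conversion factors and data sources not shown here.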
After two earlier implementations in the 1970s, NHANES has just completed its third administration (1988-94). NHANES was supplemented by the Hispanic Health and Nutrition Examination Survey (HHANES) in the early 1980s, by follow-up surveys of respondents to NHANES I, and by follow-up matches of the records from NHANES II to the National Death Index and other vital statistics records. The state-based surveillance systems were set up to provide quick information to states to use in planning and managing nutrition and health programs. They include the Pediatric Nutrition Surveillance System (PedNSS), Pregnancy Nutrition Surveillance System (PNSS), and Behavioral Risk Factor Surveillance System (BRFSS). Participating states collect the data for these systems with technical and other kinds of assistance from HHS. Both PedNSS and PNSS rely on data from clinic records from publicly funded health, nutrition, and food assistance programs, primarily the Special Supplemental Food Program for Women, Infants, and Children (WIC). PedNSS monitors nutritional status among low-income, high-risk children, while PNSS focuses on low-income, high-risk pregnant women, measuring nutrition-related problems and behavioral risk factors associated with low birthweight. In contrast to PedNSS and PNSS, data for BRFSS are gathered through telephone interviews, with respondents (adults 18 years and over) sampled through random digit dialing. In addition to a core set of questions on various health risk factors, BRFSS includes optional modules for the assessment of dietary fat and fruit and vegetable consumption. Other major HHS monitoring activities include the Total Diet Study, which analyzes nutrient and contaminant levels in the food supply; the Food Label and Package Survey, which monitors nutrition labeling practices; and the Health and Diet Survey, which assesses dietary knowledge and practices as they relate to health problems. 
Although the United States has one of the most comprehensive monitoring programs in the world, several problems with nutrition monitoring activities have been identified over the past two decades. Of key concern has been the lack of coordination and compatibility of different data collection activities. This encompasses differences across surveys in methods for assessing dietary intake and nutritional status, sampling designs, population descriptors and other measures, and the timing and reporting of results. To improve the coordination of federal nutrition monitoring activities and the quality of the data collected, the Congress passed the National Nutrition Monitoring and Related Research Act of 1990 (P.L. 101-445). The act established an Interagency Board, jointly chaired by USDA and HHS, to coordinate activities across the various agencies involved in nutrition monitoring. The Interagency Board was charged with developing a strategic plan that would establish a comprehensive nutrition monitoring and related research program. This plan—known as the 10-year comprehensive plan—was published in the Federal Register on June 11, 1993. It outlines a set of planning activities, including a general time frame and lead agencies for each activity. The activities are organized around six objectives, which are to
• provide for a comprehensive NNMRRP through continuous and coordinated data collection;
• improve the comparability and quality of data across the NNMRRP;
• improve the research base for nutrition monitoring;
• develop and strengthen state and local capacity for continuous and coordinated nutrition monitoring data collection that complements national nutrition surveys;
• improve methodologies to enhance comparability of NNMRRP data across federal, state, and local levels; and
• improve the quality of state and local nutrition monitoring data. 
In addition, an Advisory Council of experts from outside the federal government was created to guide the Interagency Board on scientific and technical matters. (Chapter 3 contains more information on the Interagency Board and its activities.) This is the third and final report in a series responding to a request from the former House Committee on Science, Space, and Technology. The first report reviewed past evaluations of federal nutrition monitoring and examined the progress of the NNMRRP since the passage of the 1990 act. It concluded that (1) a coherent program for nutrition monitoring was not yet in place and (2) although there has been progress in coordinating the program, the 10-year plan is incomplete because it does not include a framework for evaluating current and potential activities or detailed plans for achieving the objectives. Based on a survey of users of nutrition monitoring data, the second report described the purposes for which nutrition data are used and summarized respondents’ suggestions for improving NNMRRP activities. These suggestions, which addressed such issues as the timing of the surveys, their coverage of subpopulations, and the ease with which the data could be used, were consistent with the concerns raised by past evaluations of federal nutrition monitoring and indicated a continued need to address the long-standing problems of federal nutrition monitoring. Completing our response to the Committee’s request, this report builds on our earlier work to meet two objectives: (1) define a model nutrition monitoring program and (2) compare the current system with potential options for implementing the components of a model program in the NNMRRP. Before defining the features of a model program, we first limited our review to three of the five content areas covered by the NNMRRP: (1) food consumption and dietary intake; (2) health and nutritional status; and (3) knowledge, attitudes, and behavior. 
These three content areas were selected for both substantive and methodological reasons. Substantively, the selected elements provide the data that support the planning and evaluation of interventions that directly affect health, such as nutrition education and food assistance programs. A substantive argument could also be made for the inclusion of food composition data, which are used to translate the information on what foods are eaten into estimates of nutrient intake. However, because an earlier GAO project focused on the NNMRRP’s food composition activities, we did not include it in our review. Methodologically, the three selected areas are linked because they rely on data obtained from individuals through surveys and physical examinations. In contrast, food composition information is based on chemical analyses of foods, and food supply is determined from macroeconomic data. Our focus on the components of the NNMRRP that rely on surveys facilitates the comparison of current and potential approaches to achieving a model program. To identify features of a model nutrition monitoring program, we used four sources: reviews of previous evaluations of federal nutrition monitoring activities, review of the objectives and related activities outlined in the 10-year comprehensive plan developed by the Interagency Board, consultation with expert advisers, and our survey of data users. These sources are detailed below, and supporting material is provided in appendix II. (See chapter 2 for a description of the features.) Past Evaluations of Federal Nutrition Monitoring. Past evaluations by such groups as the National Academy of Public Administration, the Joint Nutrition Monitoring Evaluation Committee, and the National Research Council identified several concerns about the federal nutrition monitoring program. 
Because these evaluations also informed the Interagency Board’s development of its 10-year plan, we used them as the starting point for our identification of features of a model nutrition monitoring program. Our report, Nutrition Monitoring (GAO/PEMD-94-23), discusses these past evaluations and NNMRRP progress in addressing their recommendations. (Table II.1 in appendix II lists the criticisms identified in these evaluations and categorizes them by the features of a model program that they suggest.) The 10-Year Comprehensive Plan. As described above, the NNMRR Act required the Interagency Board to develop a plan for the program. The plan outlined six objectives, three with a federal focus and three emphasizing state and local monitoring, and listed 68 planned activities. These were reviewed for responsiveness to the model features suggested by the past evaluations. (See table II.2 in appendix II for examples of the 68 activities listed in the plan.) Consultation With Expert Advisers. To assist us at critical decision points in this project, we organized three panels. (The members of each panel are listed in appendix I.) The Core Policy Panel consisted of nationally known experts in nutrition and nutrition monitoring policy. These panelists were consulted throughout the project. In addition, they helped us develop a framework of purposes for nutrition monitoring data that guided our survey of data users. The Methodology Panel included renowned experts in such fields as sampling, survey design, dietary assessment, and nutritional epidemiology. This panel met to help us identify promising approaches to critical elements of a nutrition monitoring system. In addition, the panelists assisted us on issues related to their areas of specialization. 
The Data Users Panel consisted of users of the nutrition monitoring data, chosen to reflect the broad range of purposes that the data must serve, including the support of state and local nutrition programs, academic research, food industry research, and the development and evaluation of federal food assistance programs. As with the Methodology Panel, this panel was convened once to help us identify promising approaches to nutrition monitoring. Individual panelists were consulted later about specific issues related to their expertise. Through the process of reviewing materials and participating in panel meetings, the expert advisers generated several suggestions for possible changes to the NNMRRP, examples of which are given in table II.3 in appendix II. In addition, they reviewed a draft of this report. Suggestions From the Survey of Data Users. Our survey of users of nutrition monitoring data focused on primary users of 14 of the NNMRRP surveys and surveillance systems. Primary users were defined as those who directly access the data rather than use information that has already been processed and interpreted by others in reports and other documents. Since there is no single list of primary users of NNMRRP data, we obtained lists of known and potential data users such as people who had requested the data from NNMRRP agencies, attendees at nutrition-related workshops, state and local government officials working in nutrition, and members of associations for nutrition professionals. A major portion of the survey was dedicated to determining how the respondents used the data. We also asked whether changes are needed to better meet the respondent’s information and data quality needs. 
If the respondent indicated a need for change, we asked for suggestions in the following categories: (1) data elements collected, (2) data collection methods, (3) units of analysis, (4) time of data collection, (5) population group coverage, (6) geographic area coverage, and (7) ease of use. These comments were analyzed to identify major themes for each of three groups of data collection activities—USDA surveys, HHS surveys, and HHS state-based surveillance systems. In appendix II, table II.4 identifies the themes associated with the features of a model program. From the features that were identified, we selected four as the focus of the second objective—the comparison of current and potential approaches to achieving the model program. We focused on features that reflect long-standing concerns about federal nutrition monitoring, that encompass other desired characteristics, and that generate debate about how they should be addressed. For each of the selected features, we identified current activities of the NNMRRP through interviews with staff in NNMRRP agencies, attendance at meetings of the Interagency Board and the Advisory Council, and reviews of program documents. Potential approaches were identified through literature review, analysis of our survey results, and consultation with the expert advisers. We did not identify the universe of potential approaches. Instead, our search was focused on those approaches deemed promising and feasible by our expert advisers. To assess each of the potential approaches, we first identified programs from the same set of sources that helped us define the features of a model program—that is, the literature, our survey of data users, and expert advisers. Where possible, we limited our consideration to programs that had some linkage to nutrition monitoring. For example, for separate studies of subpopulation groups, we looked at the experience with the Hispanic Health and Nutrition Examination Survey in the early 1980s. 
To describe the strengths and weaknesses of the programs illustrating the potential options, we reviewed program documents and related literature and interviewed managers and staff. This review was conducted between October 1993 and December 1994 in accordance with generally accepted government auditing standards. This report describes the results of a systematic examination of current and potential approaches to selected features of a model nutrition monitoring program. The strength of the review is its reliance on multiple sources of information. In addition to surveying users of nutrition monitoring data, consulting with experts, and reviewing both technical literature and program documents, we also interviewed officials and program staff in nutrition monitoring programs and in programs illustrating the alternatives. The major limitation of our work is its prospective nature. Because we were examining potential changes to the NNMRRP, hard evidence of the costs or effectiveness of the options was not available. Instead, the strengths and weaknesses of the options relative to the current nutrition monitoring system are supported primarily by logic and stated in tentative terms. Given this limitation, the report makes no recommendations for specific changes to the NNMRRP. In response to the first objective of the project, chapter 2 describes the model program and provides information on the selection of four model features as the focus of the report. (The sources used to develop the model are described above.) The second objective—comparing current and potential approaches to each model feature—is addressed in chapters 3-6, organized by the four model features. 
Specifically, chapter 3 examines coordination options; chapter 4 compares alternate approaches to providing continuous data; chapter 5 discusses different methods of supporting inferences about subpopulation groups and small geographic areas; and chapter 6 reviews approaches to assisting state and local monitoring activities. Appendix I provides additional detail on the expert advisers to the project, and appendix II describes the sources for the model features. Agency comments on a draft of the report are in appendixes III and IV. This chapter responds to the first objective of our review—the definition of a model nutrition monitoring program. First, the major features identified from the sources described in chapter 1 are outlined. From this set of features, we selected four as the focus for our response to the second objective of the review—the comparison of current and other approaches to achieving the model characteristics. The chapter describes our selection process and details the importance of the four features that are the subject of the rest of the report. Depending on the purposes that the data serve, the specific elements of a model nutrition monitoring program change. For example, researchers and program managers interested in food safety need detailed information on dietary intake, including specific brand names of the foods consumed. In contrast, a nutrition educator may place a higher priority on information about dietary knowledge, attitudes, and behaviors. However, at a more general level, some common ideal characteristics can be identified. Focusing on this general level, we used the sources described in chapter 1 to identify a number of features of a model program. Table 2.1 lists these features and the sources that support them. All of the features identified are clearly important elements of a comprehensive nutrition monitoring program. 
However, we selected four characteristics as the focus of our review: (1) a coordinated set of activities that (2) provides data on a continuous basis, (3) supports inferences about important population groups, and (4) assists state and local monitoring activities. These features encompass other desired characteristics, respond to long-standing concerns about federal nutrition monitoring, and generate debate about how they can be achieved. The first criterion for selection was to focus on the most general concerns. With these features, other desirable characteristics can be considered even though they are not emphasized. Specifically, a mechanism for evaluating the NNMRRP’s content and methods, including a review of the information needs of the data users, is described in chapter 3 as an element of a coordinated program. The theme of evaluating options in relation to the needs for data also underlies the discussion of the alternatives for the other features. Three other features—the comparability of data over time, the collection of longitudinal data, and the timeliness of data release—are related to the continuous collection of data and, as such, are considered briefly in chapter 4. The four features were also selected because they respond to long-standing criticisms of federal nutrition monitoring activities. Concerns about coordination, the continuity and timeliness of the data, the availability of information on subpopulations, and the role of states and localities were raised as early as 1977 by witnesses before a House subcommittee. In contrast, the concerns with response rates and the level of the data (individual or household) can be traced to the problems with the last NFCS (described in chapter 1), which USDA has taken steps to avoid in the future. The selected features are also not easily addressed; that is, there is debate about the best approach to achieving each feature. 
For example, some expert advisers stated that assistance to states and localities should focus on data analysis and interpretation, while others argued for a larger state and local role in data collection. Similarly, to provide information on subpopulation groups, national surveys could be supplemented by such means as the surveillance systems and oversampling, or the national surveys could be abandoned and their resources dedicated to special studies of specific groups. In contrast to the debates about how best to achieve these and the other selected features, there has been consensus about the dietary intake methodology to be used by the NNMRRP surveys and about ongoing research to improve these methods through automation and other means.

An ideal federal nutrition monitoring program would have a coordinated set of activities that provides data on a continuous basis, covers important population groups, and supports state and local monitoring activities. Coordination is the key both to the efficiency of the system and to its responsiveness to the needs of data users. A continuous flow of data would ensure that the information on the nation’s nutritional status was up-to-date and would also enable the tracking of dietary behavior and nutritional status over time. Information on population groups that are vulnerable or growing rapidly is needed to plan, manage, and evaluate programs intended to prevent or ameliorate nutritional problems. Because state and local governments are often the location of such programs, they need assistance in either interpreting available data or collecting their own information. The importance of these features is further explained in the following sections. The coordination of nutrition monitoring activities has implications for both the utility of the information produced and the costs of the program as a whole.
The utility of the information is constrained when data from different data collection activities cannot be easily combined. For example, research on the relationship between diet and health could be strengthened if CSFII’s data on dietary intake could be combined with NHANES’ data on health and nutritional status. However, because of differences in the sampling designs and nutrition measures, combining the data from the two surveys is difficult and controversial. Similarly, poor integration of the data collected by the state-based surveillance systems with the national surveys presents a barrier to meaningful comparisons of state and national populations. To the extent that the lack of coordination results in unnecessary duplication, a fragmented system can also increase the costs of nutrition monitoring. For example, the current NNMRRP includes two surveys focused on dietary knowledge, attitudes, and behaviors—one operated by USDA and the other by HHS. The Interagency Board plans to review these surveys for duplication. While some overlap can be useful as a quality check on data from different sources, unnecessary redundancy in the system uses resources that could be used for currently unmet data needs—such as the need for information on specific subpopulations at risk for nutrition-related problems. The need for a coordinated system was supported by three of the four sources used to identify features of a model program. In addition to a general concern about the lack of coordination, past evaluations of federal nutrition monitoring have criticized the incompatibility of the data gathered by different enterprises. Specifically, these evaluations have called for compatible methods of assessing dietary intake, a core set of standardized measures for the major surveys, compatible sampling techniques in the national surveys, and integrated reporting. 
Another criticism related to coordination focused on the absence of a systematic process for determining the needs for nutrition monitoring data across the different data collection activities. Although needs are assessed for individual activities, no comprehensive assessment of needs in relation to the total system of activities has taken place. Coordination is also a major theme of the 10-year plan. Four of the six objectives discussed in the plan focus on coordinating data collection and improving the comparability of the data. Of the specific activities listed for each objective, some of those focused on coordination are
• coordinating the planning for coverage, tracking, and reporting of findings from surveys and surveillance systems;
• identifying ways to increase comparability within a dietary method to improve the quality and usefulness of data; and
• establishing a mechanism for improved coordination among federal agencies that collect and use survey information about knowledge, attitudes, and behavior to assess gaps and duplications in existing surveys.
The third source of support for the need for coordination comes from meetings of the advisory panels. The panelists noted that the different agencies involved in nutrition monitoring have different missions and priorities and, hence, coordination is difficult. Their major suggestions for improving coordination were to give coordination responsibility to a single lead agency, to coordinate from an interagency body with permanent staff and enforcement authority, to locate nutrition monitoring in statistical agencies within the user departments or within a central statistical agency, to centralize the congressional appropriations process for nutrition monitoring activities, and to ensure informed review of data collection plans by qualified staff at the Office of Management and Budget.
Our survey queried respondents about changes to specific data collection activities, rather than the NNMRRP as a whole, so coordination was not a major theme of the comments provided by the data users. The Interagency Board defines continuous data as data collection that is “repeated regularly and frequently.” Two consequences follow when data are not regularly available. First, because the kinds of foods that are available and the eating patterns of the American people change rapidly, the data become outdated quickly. Compounded by delays between data collection and data release, long intervals between administrations of the surveys diminish the relevance of the data to the current situation and, hence, their utility in program planning and management. For example, if any of the policy changes currently being considered are implemented—such as the consolidation of food assistance programs—up-to-date information at regular intervals before and after the change will be needed to monitor any positive or negative effects of such policy changes on the population. A second consequence of pauses in data collection is that potential efficiencies of a continuous survey operation are lost. Each implementation of a national survey requires extensive planning, including reviews of the needs of data users, development and testing of data collection procedures, and all the steps involved in approving a contract. An ongoing data collection operation could streamline some of these processes. In addition, when surveys are not in the field continuously or even at dependable intervals, they may attempt to meet as many of the needs of data users as possible when they are administered. For example, the low response rates of the 1987-88 NFCS have been partially attributed to the burden on survey respondents resulting from its attempt to obtain both household and individual data with one interview.
In contrast, an ongoing survey could consist of a core set of questions and rotating modules of questions that address the needs of specific users. The continuous collection of data was a theme in all four of our sources for the features of a model program. Past evaluations of nutrition monitoring have called for the continuous collection and timely release of nutrition-related data. Although the 10-year plan does not list specific activities focused on the continuous collection of data, two of the plan’s objectives indicate the Interagency Board’s concern with the timeliness of the data collected at the federal and state and local levels. The expert advisers also emphasized the need for regularly available data and suggested the following mechanisms for the ongoing collection of the information:
• continuous national nutrition surveys,
• addition of nutrition-related modules to existing surveys,
• reliance on program data that are already collected, and
• collection of longitudinal data (that is, data collected from the same sample over time).
Finally, responses to our survey of the users of nutrition data not only stated a desire for continuous data, but also indicated that data that are collected at regular and frequent intervals serve some important purposes. For example, data that are currently available on a regular basis have been used to measure progress toward the Healthy People 2000 objectives and to evaluate policies such as the fortification of infant formula. The need for continued and improved information on subpopulation groups and small geographic areas is supported by several arguments. First, information is needed on specific populations known to be at risk for nutrition-related problems, such as Native Americans or homeless persons, in order to identify their needs and develop and target assistance programs. Second, some subpopulations, including Hispanics and the elderly, are growing rapidly.
Their dietary patterns or nutritional needs may differ from those of the population as a whole; thus, information about these groups is needed to monitor their needs and to understand their effect on estimates of the prevalence of various nutritional problems in the overall population. Finally, the samples for the three major NNMRRP surveys are designed to yield national estimates. However, much of the planning for health and nutrition programs is conducted at the state and local levels. Hence, states and localities also need information on nutrition-related indicators for their populations. For the reasons given above, past evaluations of federal nutrition monitoring have criticized the program for not covering specific population groups and geographic areas. Although none of the overall objectives of the 10-year comprehensive plan focus on the need for information on subpopulation groups, several of the activities do. For example, one planned action is to develop and implement a plan for improved coverage of groups at nutritional risk. Our advisory panels also discussed the importance of information on subpopulations, noting differences in the kinds of foods consumed in different regions of the country and by different ethnic and racial groups as well as differences in nutritional needs at different ages. Their suggestions for improving the availability of data on subpopulation groups and small geographic areas included different sampling strategies for different populations, contracts with states and localities to gather information on geographically based populations, and indirect estimation to support inferences about subpopulation groups and small geographic areas. The availability of data on important subpopulations and states and localities was one of the changes to the current data collection activities requested by respondents to our survey of data users.
The users also emphasized the importance of information on subpopulations by describing the uses supported by currently available data on subpopulation groups such as determining dietary needs of the elderly, assessing differences between blacks and whites in the effect of obesity on diabetes, informing policies on the fortification of infant and toddler foods, and targeting a blood pressure screening program to the Mexican-American population. State and local governments are interested not only in the applicability of federal nutrition monitoring data to their jurisdictions, but also in having federal assistance in collecting and interpreting their own data. The major justification for the emphasis on state and local monitoring is the wide range of uses that states and localities have for nutrition data. The examples in table 2.2, drawn from responses to our survey of data users, illustrate the utility of existing NNMRRP data for state and local governments. While NNMRRP data collection systems meet some of the state and local needs for nutrition monitoring data, state and local officials have called for additional technical assistance in analyzing and interpreting existing sources of data and for federal support in collecting their own data. In addition to the support provided by the survey of data users, our other sources indicated the importance of building state and local capacity for nutrition monitoring. For example, past evaluations recommended assisting state and local nutrition monitoring activities. These recommendations are mirrored in the Interagency Board’s 10-year comprehensive plan, which clearly signals the importance of states and localities in the NNMRRP by devoting three of its six objectives to strengthening state and local monitoring activities. The expert advisers also noted the role of states and localities in nutrition monitoring, but disagreed about the responsibility states should have for data collection. 
Some argued for state-based data collection that feeds into a federal system, and others argued for less state responsibility for data collection, but for increased consideration of state needs in federal data collection activities. Their suggestions for assisting states and localities include
• providing financial assistance to states to determine their own data needs,
• creating a federal-state partnership in which states can provide funds for some extra sampling or extra questions on federal surveys,
• developing standardized modules of interest for state data collection,
• assisting state collection of data on subpopulations, and
• providing technical assistance in data interpretation.
The rest of the report describes the strengths and limitations of current and potential approaches to achieving the selected model features. These approaches are listed in table 2.3. The alternate strategies were selected from the suggestions generated by the expert advisers using the criteria of responsiveness to criticisms of the current approach and feasibility. For example, the current approach to coordination—the Interagency Board—is criticized for its lack of authority over the member agencies. In contrast, an independent central authority could have influence over the NNMRRP agencies. The options were considered feasible if they were already used in other programs with similar issues (such as the lead agency approach for other cross-agency programs), in past activities of the NNMRRP (such as the special study approach for information on subpopulations), or in related current activities by NNMRRP agencies (such as indirect estimation). As described in chapter 2, a model program would have a mechanism for coordinating the various nutrition monitoring activities to maximize the utility of the data and minimize the costs of their collection. This chapter first reviews the status of current NNMRRP activities to improve coordination.
Then, two other possible coordination mechanisms—coordination by a central authority and coordination by a single lead agency—are examined. (Table 3.1 provides an overview of the strengths and limitations of the various approaches to coordination.) As described in chapter 1, the NNMRRP meets multiple needs for nutrition-related data. Yet, historically, these needs have not been met by an integrated program with the capacity for evaluating data needs and making adjustments as those needs change. Instead, a fragmented system of activities developed over the decades as new needs for nutrition data were identified. For example, in the early 1930s, USDA developed its first national survey of household food consumption because data on the food supply provided no information about the distribution of food at the household and individual levels. Similarly, the nutrition component was added to the National Health Examination Survey in the early 1970s in response to a need for more information about hunger, and state-based surveillance systems were established in recognition of the primary role of states in providing services to populations at risk of nutritional problems. To address concerns about the lack of coordination across the agencies involved in nutrition monitoring, the National Nutrition Monitoring and Related Research Act of 1990 required the Secretaries of Agriculture and Health and Human Services to implement a coordinated program of nutrition monitoring. The act specified several tools: an Interagency Board, the development of a comprehensive plan for the program, a council of outside advisers, and an integrated budget. The Interagency Board created by the act has the difficult task of coordinating numerous data collection and analysis activities across several agencies that have traditionally had separate and distinct missions and operations.
The Board has two chairpersons, one selected by the Secretary of Agriculture and one by the Secretary of Health and Human Services. For USDA, the chair is the Under Secretary for Research, Education, and Economics. The HHS chair is the Assistant Secretary for Health. Membership on the Board includes representatives of various agencies in USDA and HHS, as well as the Bureau of the Census, Agency for International Development, Bureau of Labor Statistics, Department of Defense, Department of Veterans Affairs, and Environmental Protection Agency, among others. The Executive Secretary for the Board rotates between USDA and HHS every 2 years. To facilitate coordination, the Interagency Board established working groups focused on survey comparability, food composition, and federal-state linkages and information dissemination. The Secretaries and the Interagency Board were charged with developing a 10-year strategic plan, which was published in the Federal Register on June 11, 1993. The plan outlines a set of planning activities, including a general time frame and the lead agencies for each activity. The activities are organized around six objectives, listed in chapter 1 (see p. 14). The Interagency Board clearly recognizes the need for improved coordination since four of these objectives focus on either the coordination of the data collection activities or the comparability of the data. To advise the Board on the development and implementation of the NNMRRP, the act established the National Nutrition Monitoring Advisory Council. The members of the Council represent academic institutions and other interested parties drawn from outside the federal government. The act also required the Interagency Board to submit annually a coordinated budget for nutrition monitoring. Both the concern that preceded passage of the act and the structure it created appear to have improved communication and cooperation among the agencies. 
The Board and its working groups provide mechanisms for communication and joint decision-making. Specific actions that demonstrate the increased coordination include the development of
• common population descriptors for use in conducting and reporting the 1994-96 CSFII and the next NHANES,
• a marketing and distribution plan for NNMRRP reports,
• an automated dietary intake interview that would facilitate timely data release and linkage across CSFII and NHANES, and
• a common set of questions on food security (a concept that addresses the certainty about having enough to eat) to be used in the Current Population Survey.
In addition, a jointly funded research project explored the possible linkage of CSFII and NHANES sampling plans. Alternate sampling designs were evaluated using the criteria of (1) ability to satisfy the separate objectives of the two surveys, (2) benefits in overall costs or analytic power, and (3) feasibility, especially in terms of burden on the survey respondents. The draft report from the contractor on the project emphasized the compromises one or both surveys would have to make to link their sample designs. For example, while the combination of the NHANES and CSFII into a single survey could yield a rich database of information on diet and health, the likely increase in respondent burden could reduce response rates and response quality. Another alternative—using linked samples for NHANES and CSFII—could decrease CSFII’s precision because NHANES’ sampling design is determined by the survey’s reliance on mobile examination centers to conduct physical exams of the respondents. The Interagency Board concluded that the two surveys should remain independent, although work on improving the comparability of the data should continue. As required by the act, the Interagency Board submits an annual budget for the NNMRRP to the Congress.
The budget report is intended to cover costs allocated for data collection, related research, information dissemination and exchange, and technical assistance; however, these different types of costs are not distinguished in the report. Instead, the funds dedicated to nutrition monitoring and related activities are reported only by agency, not by type of activity. The budget report for fiscal years 1994-96 indicated that a total of $157.7 million was dedicated to nutrition monitoring or related research in 1994. Of that, $30 million was reported by the Centers for Disease Control and Prevention (which has responsibility for the HHS surveys and state-based surveillance systems) and $9.3 million was reported by the Human Nutrition Information Service (which had responsibility for NFCS and CSFII). The remainder was accounted for by agencies whose primary involvement in nutrition monitoring is related research. For example, HHS’ National Institutes of Health reported $25.9 million dedicated to NNMRRP activities and USDA’s Agricultural Research Service accounted for $50.7 million. The budget report is useful in communicating to the Congress a general sense of the cost of the NNMRRP across agencies. However, funds for nutrition monitoring cannot always be disaggregated from other purposes of the data collection and research programs. As a result, the budget report contains only approximate amounts dedicated to nutrition monitoring and related research. Moreover, with the recent incorporation of the office responsible for NFCS and CSFII into ARS, determining which funds are dedicated to the monitoring activities and which are used for related research will be more difficult. The literature on the development of objectives to increase accountability for program results indicates that (1) objectives should be written in terms that can be used to judge progress toward achieving them and (2) implementation plans and specific measures of progress should be developed for the goals and objectives.
In the 10-year plan, the objectives are stated in general, global terms so that it is not clear when an objective can be considered achieved. For example, the first objective is to “provide for a comprehensive NNMRRP through continuous and coordinated data collection.” Neither the objective itself, nor the text following it clearly defines what the terms “comprehensive” or “coordinated” mean. Similarly, the activities are too vague to be considered implementation plans for the objectives; for example, no activity directly relates to the development of continuous data collection. As another illustration, the activity, “identify ways to increase comparability within a dietary method to improve the quality and usefulness of data,” specifies neither the degree of quality required nor the uses that will be facilitated. Without more concrete, measurable objectives, there is little accountability for the program because progress toward the objectives cannot be assessed. (Other examples of activities listed in the 10-year plan can be found in table II.2 in appendix II.) In addition, the plan did not describe how activities would be ranked by importance or addressed within current fiscal constraints. However, since the plan was published, the Interagency Board drafted an approach for ranking the 68 activities listed in the plan into three categories of priorities: (1) essential (mandatory, legislatively required), (2) necessary (critical but not mandatory), and (3) beneficial. An interagency implementation group, involving around 60 agency representatives, applied the approach. Twenty-five of the activities were ranked in the high-priority category, 33 were ranked as next most important, and only 10 were ranked as beneficial but not critical. While this is an important first step in setting priorities, the Interagency Board has not yet linked the top-ranked tasks to the costs and benefits of completing them. 
Such a framework could be used to understand the trade-offs in selecting one or another approach to each task. Assessing the effectiveness of the different approaches to the objectives requires information on how the data are used; therefore, another obstacle to the Interagency Board’s implementation of a coordinated system is the absence of a comprehensive assessment of how nutrition monitoring data are used across the federal government and by data users in other settings. For example, although USDA and HHS held a joint workshop to assess the needs of users of dietary intake data in August 1994, the workshop included only representatives of federal agencies. Finally, the ability of the Interagency Board to coordinate the NNMRRP is limited by the lack of resources dedicated to coordination. The Interagency Board has no staff, although two people—one from USDA and one from HHS—have been given primary responsibility for organizing NNMRRP activities. The NNMRR Act gives the Secretaries the option of appointing an administrator for the program, but so far they have chosen not to exercise that option. While recognizing the progress made by the Interagency Board, we also considered other coordination mechanisms. Two options for improving coordination—through a central authority or by a lead agency—are reviewed in detail in this section. In addition, other options discussed by our expert advisers are briefly presented. The suggestion of coordination through a central authority came in response to the lack of enforcement power held by the Interagency Board over the member agencies. The kind of central, coordinating agency envisioned by the expert advisers is most clearly exemplified by the executive offices in the White House.
Therefore, to examine the advantages and disadvantages of having a central authority provide coordination, we reviewed the literature (including prior GAO reports and congressional hearings) and interviewed agency officials in three White House offices that have coordination responsibilities: Office of Management and Budget (OMB), Office of Science and Technology Policy (OSTP), and Office of National Drug Control Policy (ONDCP). As indicated by the brief descriptions of each office provided in table 3.2, the coordination tools used by the White House offices are similar to those used by NNMRRP’s Interagency Board, including interagency committees and working groups, development of plans, and review of budgets. One potential advantage of elevating the coordination of nutrition monitoring to a high-level central authority is the increased participation of high-level officials in coordination activities. This participation could, in turn, increase the ability of the coordinating body to establish priorities. Currently, some of the agencies participating in the Interagency Board are represented by administrators or directors, while other agencies are represented by staff members who have no authority to change agency activities or establish priorities. In contrast, the political visibility of a program under White House management could encourage agencies to send representatives in positions of authority, capable of establishing priorities and committing resources to support them. The political visibility that comes with the participation of a high-level central authority may also contribute to the effectiveness of the various coordination tools. For example, the Interagency Board compiles budgets obtained from each of the agencies involved in nutrition monitoring without reviewing them for consistency with the activities identified in the 10-year comprehensive plan. 
In contrast, the White House offices use the budget review process to bring activities of the agencies into line with overarching policy goals. For example, ONDCP can threaten to decertify an agency’s drug budget if it is not consistent with the National Drug Control Strategy. Decertification has no practical ramifications, but it sends a politically important message about the priority given to drug control activities by the White House. An additional potential advantage of having a central authority provide coordination is that it can be a central location for assistance to all data users. For example, ONDCP has a Bureau of State and Local Affairs that works with state and local government agencies involved in drug control activities. The Bureau serves as a clearinghouse for information about state and local activities and uses conferences to increase communication with and among state and local officials. In addition, the Bureau can communicate the concerns of state and local governments and community groups to the federal agencies involved in drug control programs. For nutrition monitoring, such an office could provide a central contact point for users in a variety of settings, including federal, state, and local governments, food industry, and health care organizations. Although the increased political visibility of a high-level central coordination office may facilitate the development of priorities, it may also increase the potential for political pressure on the data collection and research. A conflict between ONDCP and HHS over the data collection and reporting of drug data illustrates this issue. HHS was concerned about the degree of ONDCP’s involvement in how the data were collected and reported, while ONDCP expected HHS to meet its data needs. Political influence on the scientific agenda is also a concern for OSTP, where priorities may change with changing administrations even if scientific issues remain the same. 
In addition, although White House offices have more influence over the budgets reported by the different agencies, the budgets developed for other programs that cut across agency jurisdictions share the limitation of the coordinated NNMRRP budget. Specifically, agency activities may serve multiple purposes, thus making it difficult to determine how much of the overall costs are dedicated to the interagency program. For example, Coast Guard patrol boats serve drug interdiction purposes, but are also used in search and rescue missions that are not related to drug control. Thus, the Coast Guard can only estimate the portion of its resources dedicated to supporting national drug control efforts. Ambiguity about the portion of a multipurpose program that serves the interagency purpose makes it difficult to monitor the costs of the program. Although central authorities like the White House offices are financed separately from the agencies they oversee, resource limitations persist. For example, HHS staff attributed part of their conflict with ONDCP to a lack of technical and substantive expertise among ONDCP staff. Similarly, a past director of OSTP identified limited staff resources as a reason why long-term planning received less attention than short-term problems. Another possible approach to coordinating the NNMRRP is locating the responsibility within a single agency. To investigate this approach, we discussed the option with our expert panels and used program documents and evaluations conducted by GAO and the Congressional Budget Office to examine the experience of the High Performance Computing and Communications Program (HPCCP). Like the NNMRRP, the HPCCP involves multiple agencies with different strengths and missions. Unlike the NNMRRP, the oversight of the HPCCP is located in OSTP, which delegated the responsibility for coordinating the activities across the agencies to a single agency, the National Library of Medicine. 
While not a major player in high-performance computing, the National Library of Medicine was seen as an independent, unbiased participant with interest in and knowledge about the technology. The Library of Medicine's role has been to pull together materials for program reports, convene meetings, and provide a clearinghouse. HPCCP's National Coordination Office shares two of the strengths of the Interagency Board: It appears to have facilitated communication among the agencies and coordination of individual activities. In addition, it has provided the Congress with a budget that looks at the costs of high-performance computing activities across agencies. Moreover, the National Coordination Office has the added advantage of providing the Congress and the public with a central contact point for information on high-performance computing. However, the HPCCP also shares some of the Interagency Board's weaknesses. Evaluations of the program concluded that it lacked "an explicit technical agenda, identifying and prioritizing specific technology challenges and establishing a framework of expected costs and results," which would ". . . clarify the program's goals and objectives, focus efforts on critical areas, and serve as a baseline for measuring program progress and results." In addition, HPCCP did not have uniform guidelines for which research activities should be included in the budgets submitted by the different agencies, mirroring the difficulty the other coordination mechanisms have in tracking funds used for multipurpose programs. The National Coordination Office also shares the NNMRRP's lack of budget and staff resources for coordination. An additional concern about the lead agency model was raised by our expert advisers: If responsibilities for nutrition monitoring were located in one agency, nutrition monitoring might become a monopoly, serving the needs of only one agency. Its current dispersion across agencies allows the different components to serve different purposes. 
However, safeguards—such as that used for the HPCCP when it was located in an agency that did not have a large investment in high-performance computing, relative to some of the other agencies—could address this concern. In addition to coordination by an interagency body with permanent staff and enforcement authority or a single lead agency, other suggestions for improving coordination of the NNMRRP were to locate nutrition monitoring in statistical agencies within the user departments, centralize the congressional appropriations process for nutrition monitoring activities, and ensure informed review of data collection plans by qualified staff at the Office of Management and Budget. The first idea—locating nutrition monitoring in statistical agencies within the departments that use the data—was intended to focus the program on the quality of the data collected. However, a concern about this suggestion was that it could result in decreased responsiveness to the needs of the data users. We did not pursue it because the major surveys are already located in statistical or research branches of HHS and USDA. (NHANES is operated by the National Center for Health Statistics, and NFCS and CSFII were recently relocated to ARS.) Similarly, OMB already has responsibility for reviewing the data collection activities of the agencies, although OMB staff reported that they rely on the agencies to describe coordination efforts. Finally, we did not review the advantages and disadvantages of centralizing the congressional appropriations process for nutrition monitoring activities because its relevance to the utility and efficiency of the data collection activities was not clear. Lack of coordination across program activities has implications for both the effectiveness and the efficiency of the NNMRRP as a whole. The Interagency Board has made progress toward coordinating activities across the different agencies. 
However, the other approaches to coordination suggest mechanisms that could further strengthen the work of the Interagency Board. First, because its responsibilities are shared by USDA and HHS, the Interagency Board does not provide a central contact point for users of nutrition monitoring data. A central contact could be established if the Secretaries of USDA and HHS used their option to appoint an administrator of the program or if responsibility for responding to requests for information about the program was assigned to a single agency, as it was for high-performance computing. Second, the Interagency Board does not review the agency budgets it compiles for consistency with the overarching priorities of the program. Before such a review could occur, specific objectives and priority activities would need to be identified. Then, the Interagency Board could work with OMB to secure funding of NNMRRP priorities. Evaluating programs intended to reduce diet-related chronic disease, tracking progress toward health objectives, and monitoring changes in our diets are the kinds of activities that require continuous data. The advantages and disadvantages of the current and alternate approaches to collecting continuous data are summarized in table 4.1 and detailed below. Although the provision of continuous data is one of the objectives identified in the Interagency Board’s 10-year plan, not one of the three major NNMRRP surveys—NFCS, CSFII, or NHANES—is implemented continuously. In fact, planned future administrations of two of the surveys have been postponed, potentially compromising the ability to monitor trends in diet-related health risks over time and evaluate the effect of any changes in food assistance policy. NFCS has been administered at approximately 10-year intervals since the 1930s. Its next implementation was planned to begin in 1996, but is now tentatively scheduled for 1998, depending on funding. 
CSFII, originally intended to provide continuous data on dietary intake, has had three separate administrations: 1985-86, 1989-91, and 1994-96. After a 1-year pause in 1997, it is expected to resume for another 3-year period in 1998. Since the National Health Examination Survey gained a nutrition component in 1971, NHANES has been fielded three times: 1971-75, 1976-80, and 1988-94. Like NFCS, its future implementation is uncertain because of budget constraints. Its planned implementation in 1997 is now expected to be postponed. Because of the lack of certainty about the implementation of the national nutrition surveys, the state-based surveillance systems are currently the primary source of continuous data in the NNMRRP. As described in chapter 1, PedNSS and PNSS rely on data from clinic records from publicly funded health, nutrition, and food assistance programs and BRFSS collects information through telephone interviews, with respondents (adults 18 years and over) identified through random digit dialing. While a valuable source of quick information for state and local program managers, the surveillance systems do not meet the needs of researchers or program decisionmakers who require either national data or in-depth food intake data. Compared to the national surveys, one of the strengths of the state-based surveillance systems is that they not only provide data continuously, but they are also able to process and report the data relatively quickly. For PedNSS and PNSS, information is collected as part of the process of receiving services from WIC and other publicly funded health, nutrition, and food assistance programs. Because they depend on program records, PedNSS and PNSS do not burden respondents the way surveys dependent on interviews do. The information is transmitted from the records of local health and nutrition programs to the state, which then forwards the records to HHS for analysis. 
Similarly, the data collected by the states for BRFSS are sent to HHS for processing. According to HHS officials, all three systems report data back to the states within a year and generally in less than 9 months. HHS has also helped states conduct their own analyses by distributing a standardized software package. Another strength of the state-based surveillance systems is that the data they collect are directly linked to program decisions. Although PedNSS and PNSS include only a few indicators of nutritional deficiencies and behaviors, they are selected to support state data needs for program planning and management. For example, the data are used to target resources for the WIC program. Similarly, BRFSS data have been used to inform decisions about nutrition education programs, such as campaigns to encourage the consumption of five servings of fruits and vegetables a day. In contrast to the national surveys, the surveillance systems do not permit examination of diverse diet-health associations across the entire population. Instead of collecting extensive biochemical, anthropometric, and interview data, they focus on a narrow range of variables relevant to specific programs or nutritional risks. For example, PedNSS collects clinical data on weight and height, monitors infant feeding practices, and assesses anemia. PNSS collects information on anemia and behaviors associated with low-birthweight babies. BRFSS asks respondents to report on their consumption of fat, fruits, and vegetables. This focus limits the breadth of uses that can be supported by the data; however, as noted above, it also limits the burden placed on respondents. While the systems are currently limited in the amount of dietary data they collect, HHS is exploring other methods of gathering these data. For example, with HHS support, the University of Texas examined the use of bar code data to look at dietary patterns. 
The researchers concluded that the technology is not yet ready for use but that, through a partnership with food manufacturers, it could be a promising method for the future. In addition, with USDA, HHS is evaluating the feasibility of collecting additional dietary data in the clinics that provide the PedNSS and PNSS records. Another concern about surveillance systems is the quality and completeness of the data across the different states. For example, for PedNSS, error can be introduced by variations in practice in weighing infants, such as with or without the baby’s winter clothes, or by clerical errors in entering the data in states that do not have automated data systems. PNSS, which attempts to collect a wider range of information than PedNSS, suffers from missing data on several variables, such as the pregnancy risk factors of smoking and alcohol consumption. However, HHS provides technical assistance to help states standardize their data collection procedures. In fact, by flagging biologically implausible values for the physical measures, HHS analyses of the surveillance data help identify clinics that may have poor procedures. The surveillance systems are also limited because, within the participating states, only certain groups of the population are covered. As described earlier, PedNSS and PNSS primarily provide information on mothers and children participating in the WIC program. As a result, data may not be available on other populations that are potentially at risk, such as homeless people or older children not eligible for WIC. In contrast, BRFSS has a wider target population, collecting data from randomly selected adults 18 years and over. However, neither adults in households without telephones nor children are covered by BRFSS. Since there is evidence that some health risk factors, such as smoking, are associated with living in a household without a telephone, this could affect estimates of the extent of diet-related risk factors as well. 
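The kind of implausible-value screen described above, in which out-of-range physical measures point to clinics with poor procedures, can be sketched in a few lines. The plausibility cutoffs and field names below are illustrative placeholders, not HHS's actual PedNSS criteria.

```python
# Illustrative range check for physical measures in clinic records.
# The cutoffs are hypothetical examples, not actual HHS criteria.
PLAUSIBLE_RANGES = {
    "height_cm": (45.0, 130.0),  # assumed range for young children
    "weight_kg": (2.0, 35.0),
}

def implausible_rate_by_clinic(records):
    """Return each clinic's share of records with out-of-range values."""
    counts = {}  # clinic_id -> {"flagged": n, "total": n}
    for rec in records:
        clinic = counts.setdefault(rec["clinic_id"], {"flagged": 0, "total": 0})
        clinic["total"] += 1
        if any(not lo <= rec[field] <= hi
               for field, (lo, hi) in PLAUSIBLE_RANGES.items()):
            clinic["flagged"] += 1
    return {cid: c["flagged"] / c["total"] for cid, c in counts.items()}
```

A clinic with an unusually high flagged share would then be reviewed for measurement or data-entry problems, such as weighing infants in winter clothes.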
Finally, not all states participate in the PedNSS and PNSS surveillance systems and, therefore, they are not good sources of national-level data on their populations of interest. In 1993, 38 states participated in PedNSS and only 20 states participated in PNSS. (In contrast, BRFSS has good state coverage; in 1993, only Wyoming did not participate.) In addition to the state-based surveillance systems, we reviewed two other approaches to providing continuous data. The first option is a national nutrition-related survey that is operated continuously. To describe the strengths and weaknesses of this approach, we consulted with our expert advisers and interviewed managers of the current national nutrition monitoring surveys about the continuous operation of the surveys. The second option is the inclusion of nutrition-related questions on continuous non-NNMRRP surveys. The strengths and weaknesses of this alternative were explored by examining the NNMRRP’s recent experience developing food insecurity questions for the Current Population Survey (CPS). In addition, our expert advisers and the data users who responded to the survey made other suggestions, which are also briefly presented. An ongoing national survey is a possible approach to providing continuously updated information that addresses some of the limitations of the state-based surveillance systems. One of the current national surveys, the CSFII, was developed in response to calls for the continuous collection of individual data on food intake. It has not been implemented regularly in the past; however, current plans are for the ongoing implementation of CSFII as a 3-year survey followed by a 1-year pause for planning and development before the next 3-year period. An ongoing survey has the potential to yield continuously updated, timely data for monitoring the nutritional status of the nation’s population. 
The increased timeliness of the data could decrease the current pressure from collaborating federal agencies to include components on each implementation of the periodic surveys. For example, in contrast to the recent implementation of NHANES, which attempted to meet as many data needs as possible within a single survey, the ongoing implementation of NHANES could contain a core set of data items that would be collected continuously, supplemented at intervals by rotating modules. Such a streamlining would also reduce the burden on respondents. Moreover, the surveys could become more flexible by distinguishing between variables that change rapidly, variables that need regular but not continuous monitoring, and variables of emerging policy importance. The core set of items in continuous implementation could gather information on rapidly changing variables. Topical modules on issues that do not change as rapidly could be included on a regular schedule. Finally, as new issues arise, additional questions could be added. In addition to increased timeliness and flexibility, a continuous survey operation could be more efficient than periodic surveys because current costly start-up activities, such as planning, designing sampling strategies, and training interviewers, would be diminished. Thus, the data could be collected for less cost per respondent. However, as described below, without concomitant streamlining of the survey, overall costs could increase. While an ongoing survey could save money on start-up costs, it could increase costs overall because activities that are now funded and staffed sequentially would require a continuous flow of resources to be conducted concurrently. For example, as NHANES is currently implemented, staff change their activities as the survey moves through the phases of planning, implementation, analysis, and dissemination. If NHANES was implemented continuously, all of these activities would be going on at the same time. 
According to both USDA and HHS officials, the primary constraints on the continuous operation of their surveys are the need for dependable funding and sufficient staff resources. As described above, the absence of dependable funding has affected the frequency with which the national surveys are now implemented. The development of a survey that continuously collects data on a core set of items and intermittently collects data on other issues raises two difficult issues. First, the definition of the core items is complex. An expert panel convened by the Federation of American Societies for Experimental Biology was charged with identifying a set of core indicators to assess the nutritional status of difficult-to-sample populations. The report summarizing the work of the panel noted that (1) the suitability of an indicator changes with the purposes for the data and (2) information on the determinants and the consequences of each indicator is also needed. The panel ended by identifying three sets of indicators—minimal, intermediate, and comprehensive—without recommending specific measures for the indicators. The second issue with a continuous survey is the potential inflexibility of the core items once they are selected and implemented. While opportunities to test new methods are enhanced because the survey is in the field continually, making changes to the data collection procedures can be difficult because of the pressure to ensure that the measures are consistent over time. Without such assurance, changes in an indicator such as obesity may be the result of changes in how obesity is measured rather than changes in the prevalence of the condition itself. Our expert advisers suggested a survey with built-in periods of transition to allow the survey to incorporate new methods and new data elements as they emerge. A third approach to obtaining frequently updated information is the addition of nutrition-related questions to existing continuous surveys. 
The potential of existing surveys to provide data regularly on some variables is demonstrated by plans that included NNMRRP questions on food security in the April 1995 Current Population Survey conducted by the Census Bureau. Food security is a concept intended to go beyond the idea of hunger to measure the availability of food for a family or individual. The food security questions were developed by an interagency working group cochaired by HHS’ National Center for Health Statistics and USDA’s Food and Consumer Service. While the working group developed questions, USDA reserved space on the CPS for 1995. The question development process included determining how the data would be used and soliciting input from both federal and other data users. The food security module contains both core questions and supplemental ones. The Census Bureau will include the complete module. NNMRRP surveys could include the smaller core set of questions. If the initial implementation of the questions yields useful data, USDA plans to continue to support the inclusion of the food security questions annually in the CPS. The same strategy of piggy-backing nutrition-related questions on continuous surveys could be used with other surveys, such as the Survey of Income and Program Participation or the National Health Interview Survey. The latter has the added advantage of collecting health data that could be linked to nutritional indicators. The major advantage of this approach is the efficiency with which data on specific issues can be gathered. USDA will pay for the cost that the food security questions add to the CPS without having the responsibility or the cost of fielding and managing the survey itself. A potential disadvantage of this approach is that, just as core questions on nutritional status could be inflexible, existing continuous surveys can be hard to change because of their momentum. Moreover, existing surveys have their own set of constraints and limitations. 
For example, while CPS could accommodate the addition of questions about food security, it probably could not accommodate a module obtaining data on an individual’s dietary intake over the last 24 hours. A 24-hour recall instrument requires considerable training to administer and adds substantially to the burden on the respondent. For these reasons, opportunities to piggy-back nutrition-related questions on other surveys may be limited. In addition to the approaches reviewed, other actions were suggested by our expert advisers and data users who responded to our survey. Specifically, longitudinal surveys were suggested as an efficient way to collect data over time because a new sample does not have to be selected every time a survey is fielded. Since original respondents are followed up in subsequent administrations of the survey, longitudinal surveys can also be useful for tracking individual-level changes in food consumption behavior. However, longitudinal surveys have their own costs, including the need to collect additional data so that respondents can be found for later surveys and the likelihood of attrition as respondents either drop out or cannot be found. Another suggestion was the collection of survey data using automated survey technology. Because direct entry of the data into an automated system can speed the processing and aggregation of the survey, it can also accelerate the release of the data for analysis. However, it does not affect the regularity with which the data are collected in the first place. HHS already uses automated data collection for NHANES. USDA is planning to automate the next administration of the CSFII. The funding constraints that have caused the postponement of two of the national surveys jeopardize the availability of periodic data on the population as a whole. 
Although the state-based surveillance systems provide continuous information, they are inadequate to meet the need for data on the population at large or on in-depth nutrition and health status. Moreover, if approved, proposals to collapse funds for the WIC program with other food assistance programs into a block grant for the states could affect the major source of data for two of the state-based surveillance systems, PedNSS and PNSS. Although the NNMRRP has pursued such creative solutions as including food security questions on the CPS, the availability of up-to-date information could worsen. To decide how best to meet the needs for continuous data in the future, the NNMRRP would first need to analyze the purposes that require frequently collected data and the current mechanisms for supporting those purposes. Within this framework, the strengths and limitations of the different approaches to increasing the frequency with which important indicators are measured could be weighed according to which purposes are supported and which are diminished. Subpopulations can be defined by geographic location as well as by age and sex (such as infants or elderly women), physiological characteristics (such as pregnancy), ethnicity or race (such as Hispanic or Native American), income, and the intersection of any of these groups (such as low-income children). As described in chapter 2, information on subpopulations is needed to appropriately target and evaluate programs that address nutrition-related problems. As summarized in table 5.1, this chapter describes current approaches and some potential options for responding to the calls for better information on subpopulation groups and small geographic areas in the NNMRRP. Other issues of federal assistance to states and localities for nutrition monitoring are discussed in chapter 6. Subpopulation groups are covered in two ways by current NNMRRP activities. 
First, the three national surveys use oversampling of certain groups to ensure the selection of enough respondents to support subpopulation estimates. NHANES focuses on racial and ethnic subpopulations, while CSFII and NFCS have included subpopulations defined by income, reflecting USDA’s focus on food assistance to low-income populations. The second way in which data on subpopulations are gathered is through the state-based surveillance systems—PedNSS, PNSS, and BRFSS. Oversampling includes members of subpopulation groups in a sample at a rate greater than their proportion in the population. The purpose of oversampling is to ensure that data will be collected on enough group members to support inferences about the group as a whole. Oversampling is already used in the major NNMRRP surveys. NHANES III (1988-94) oversampled non-Hispanic blacks and Mexican-Americans, as well as persons 60 years or older and children 1-5 years old. NHANES staff indicated that two groups (Hispanics and persons 75 years and older) are likely to be important groups in the next implementation of the survey because both are growing and have significant health-related issues to study. In addition to estimates of the general population, the current administration of CSFII (1994-96) is expected to produce estimates for low-income populations through oversampling. Also, staff of both USDA and HHS surveys stated that the national surveys could oversample a state’s population, but that the state would have to finance the added costs. Oversampling has two major strengths. First, because it can be used in conjunction with a national survey, it has efficiencies of scale. Specifically, the planning and implementation costs are diminished because they are part of a larger survey. Second, data on both the group and the rest of the population are collected at the same time and with the same survey procedures, facilitating the comparison of the two population groups. 
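The arithmetic behind oversampling can be illustrated with design weights: because the oversampled group is drawn at a higher rate, each of its respondents stands for fewer people in the population. All figures below are invented for the example, not drawn from the NHANES or CSFII design documents.

```python
# Hypothetical illustration of design weights under oversampling.

def design_weight(population_size, sample_size):
    """Base weight: the number of population members each respondent
    represents (stratum population size / stratum sample size)."""
    return population_size / sample_size

# A subgroup that is 10% of a 1,000,000-person population is drawn at
# 20% of a 10,000-person sample to support subgroup estimates.
pop_group, pop_rest = 100_000, 900_000
n_group, n_rest = 2_000, 8_000

w_group = design_weight(pop_group, n_group)  # each respondent stands for 50 people
w_rest = design_weight(pop_rest, n_rest)     # each stands for 112.5 people
```

The weighted totals (w_group * n_group + w_rest * n_rest) still sum to the full population, so national estimates are preserved while the subgroup sample is large enough to analyze on its own.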
Oversampling implies that the subpopulation group of interest is included in the sampling frames used to identify participants in the national surveys. However, not all special populations are well covered by a national sampling frame. For example, homeless individuals, persons who live in institutions, and American Indians and Alaska Natives living on reservations would not be included in the national household sampling frame. Even for those individuals who are included in the sampling frame, oversampling may not be appropriate because of the costs incurred in screening for members of the group. Screening is the process of asking questions at sampled households to identify whether they represent (or include representatives of) the subpopulation of interest. Screening adds to the costs of the survey because enough households have to be sampled and screened to identify the smaller number of households or individuals that meet the definition of the subpopulation. To reduce screening costs, oversampling is most effective for subpopulations that are geographically clustered or fairly well represented in the general population, such as persons with low income who are often clustered by neighborhood. In contrast, oversampling is not appropriate for groups that are few in number or geographically dispersed, such as pregnant women. A possible response to some of the limitations of oversampling is the use of multiple sampling frames. For some subpopulations, alternative frames or lists may be available that can be used in conjunction with the national sampling frame. Samples can be selected from both the subpopulation frame and the general population frame, and weighted estimates can then compensate for the fact that some group members could be selected from two different sources. 
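One simple way the weighted estimates can compensate for dual-frame overlap, assuming the two frame samples are drawn independently, is to weight each respondent by the inverse of the probability of being selected from at least one frame. The sampling fractions below are hypothetical, and this is only one of several weighting schemes used in practice.

```python
# Sketch of dual-frame weighting: a person on both the general household
# frame and a program list can be selected from either source, so the
# weight must reflect the combined chance of selection. Fractions are
# hypothetical, and the frame samples are assumed independent.

def combined_weight(sampling_fractions):
    """Inverse of the probability of selection from at least one of
    several independently drawn frame samples."""
    p_not_selected = 1.0
    for f in sampling_fractions:
        p_not_selected *= (1.0 - f)
    return 1.0 / (1.0 - p_not_selected)

# A person on both the household frame and a program list:
w_both = combined_weight([0.01, 0.10])
# A person reachable only through the household frame:
w_household_only = combined_weight([0.01])

# Being reachable through two frames lowers the weight.
assert w_both < w_household_only
```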
For example, to oversample for the frail elderly, the elderly individuals identified in the sample drawn from the national sampling frame could be supplemented by samples drawn from lists of elderly who participate in congregate meals programs. This approach is useful because, relative to screening in the general population, it is an inexpensive way to identify members of the subpopulation. Although identified as a limitation in the previous chapter, the focus of the state-based surveillance systems on particular subpopulations can also be seen as a strength. As described in chapters 1 and 4, states use BRFSS to collect data on the health behaviors of their adult population. PedNSS and PNSS are sources of information on the nutritional status of low-income mothers and children. In addition, HHS is exploring opportunities to expand the program to other populations, such as schoolchildren. However, the other disadvantages of the systems, such as the limited amount of nutritional data they collect and the incomplete participation of the states, diminish their utility as a source of information on subpopulations. The two major alternate approaches to providing subpopulation data are special studies and indirect estimation. Special studies are those that use a separate sampling frame from a national survey and are not necessarily conducted at the same time as a national survey. To describe the strengths and weaknesses of special studies as an approach to collecting information on subpopulations, we examined HHS’ past experience with the Hispanic Health and Nutrition Examination Survey (HHANES) and discussed the option with our expert advisers. Indirect estimation uses data that are not direct observations of the group of interest to develop inferences about the subpopulation. 
Our review of this approach relied on reviews of technical literature, interviews with USDA and Census Bureau staff responsible for indirect estimation programs, and consultation with methodological experts on our panel of advisers and in HHS. These and the current approaches address the suggestions made by the expert advisers and data users who responded to our survey. An alternate approach to covering subpopulations is conducting a special study. A model of this approach is the Hispanic Health and Nutrition Examination Survey conducted in 1982 by HHS. HHS developed HHANES in response to recommendations made by the National Academy of Public Administration, which identified the Hispanic population as growing, likely to have low income, and potentially at risk for health and nutrition problems. HHANES was conducted as a separate study rather than integrated into the national survey because NHANES II was already completed and funding was not available to conduct HHANES as part of a national survey. One of the advantages of conducting HHANES as a separate study was the opportunity to change the content of the survey instrument to address issues of special relevance to the Hispanic population. For example, unlike NHANES II, HHANES gathered information on health services use and gallstone disease. In addition, HHS took steps to address the cross-cultural issues of applying HHANES to different Hispanic populations. The survey instrument was translated into the idiomatic Spanish of each of the three Hispanic groups surveyed (Cuban-Americans, Mexican-Americans, and Puerto Ricans) and an appropriate plan for reaching out to the respondents was developed. For example, unlike the regular NHANES, which relies primarily on press releases and the formal leaders (such as mayors) of the places it has sampled to communicate the importance of participation in the survey, HHANES used informal leaders (such as church leaders) and Spanish-language media. 
The opportunity to tailor a special survey to the population of interest is an advantage of the approach, but it can be complicated for a population whose members speak multiple distinct languages and have varying degrees of assimilation into the U.S. population. Although not demonstrated by HHANES, another advantage of a special study is the ability to study populations that are not appropriately studied through oversampling. As described above, this includes groups that are not included in the national sampling frame, such as homeless persons, and groups that are geographically dispersed or not well represented in the general population, such as pregnant and lactating women.

Unlike oversampling, in which the subpopulation is surveyed at the same time and usually with the same procedures as the population as a whole, the data collected by special studies may not be comparable to data on the population as a whole. For example, differences found between HHANES data on Hispanic groups and NHANES data on the nation could be the result of national changes in health and nutrition status during the gap between the two surveys rather than actual differences between the groups. Of course, conducting special studies in tandem with a national survey is possible. In fact, HHS is considering conducting special subpopulation surveys at the same time as the next NHANES.

Special studies that are conducted in addition to the national surveys clearly add to the overall cost of data collection. Specific costs for surveys of ethnic populations include bilingual interviewers, the translation of the survey instrument, and outreach to the group. Other costs are the development of a separate sampling frame and the screening of potential respondents to identify those who belong to the group of interest.
Finally, if a special study is conducted at the same time as the national survey, the burden on the survey support facilities, such as the laboratories that analyze blood and urine specimens, is increased, which may slow down the processing and release of the data. If a special study is conducted because the group of interest is not well covered by the national sampling frame, it needs a sampling frame that allows for generalization to the subpopulation group, which may be difficult to construct. For example, to use survey results to draw conclusions about the population of people who are without homes, one needs to sample from a complete list of the members of that group. For the homeless population, such a list would be very costly to construct. As a result, other means—such as sampling shelters—would have to be used. The survey results, though, would probably not be generalizable to the overall population of homeless people because these other means are likely to be incomplete.

Indirect estimation (also known as small area or synthetic estimation) refers to procedures that use values of the variable of interest from an area or time other than the area and time of interest. For example, to develop an indirect estimate of the prevalence of iron-deficiency anemia in a county, the national estimate of iron-deficiency anemia can be adjusted based on the county’s demographic profile. Both USDA and HHS have experience with indirect estimation. Since the early part of the century, USDA has had a program to develop indirect estimates of crop yields. Although some information is available from state surveys of nonprobability samples of farmers, the USDA program adjusts these less dependable estimates so that they aggregate to the more reliable regional and national estimates that are based on a survey of a national probability sample of farmers.
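The county-level adjustment described above can be expressed as a short calculation. The sketch below is a minimal illustration of synthetic estimation: each demographic group's national prevalence rate is weighted by that group's share of the county population. All of the rates, group labels, and county profiles are hypothetical placeholders, not actual survey figures.

```python
# Synthetic (indirect) estimation sketch. All rates and population shares
# below are illustrative placeholders, not actual survey figures.

# Hypothetical national prevalence of a nutritional condition, by group.
national_rate = {"children": 0.09, "women_15_44": 0.11, "other_adults": 0.03}

# Hypothetical demographic profiles (population shares) of two counties.
county_profile = {
    "County A": {"children": 0.30, "women_15_44": 0.25, "other_adults": 0.45},
    "County B": {"children": 0.18, "women_15_44": 0.20, "other_adults": 0.62},
}

def synthetic_estimate(profile, rates):
    """Weight each group's national rate by its share of the local population."""
    return sum(share * rates[group] for group, share in profile.items())

for county, profile in county_profile.items():
    print(f"{county}: estimated prevalence {synthetic_estimate(profile, national_rate):.3f}")
```

Because the only local input is the demographic profile, two counties with identical profiles would receive identical estimates; this is the sense in which the estimate is "indirect" rather than an observation of the county itself.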
HHS has no regular program to produce indirect state estimates, but since 1968, it has supported the occasional development and evaluation of indirect estimates from National Health Interview Survey data. In addition, HHS produced state estimates from the National Natality and National Fetal Mortality Surveys conducted in 1980 using demographic data from the states to adjust the national estimates. Indirect estimation models range from the relatively straightforward adjustments of national or regional estimates to match local demographic profiles to more complex models. The major strength of indirect estimation is that, compared to increasing survey sample sizes to obtain data for direct estimates at state and local levels, it is far less costly. In addition, it is a means of extending the usefulness of costly national survey data to inform decisions made at state or local levels of government. Moreover, it is an established approach. Indirect estimates of such variables as state and local populations, employment and unemployment, and crop yields are already used by the federal government in formulas for determining eligibility and benefit amounts for federal programs. Some state governments have also used indirect estimation to conduct analyses for economic and other types of programs. Indirect estimation also responds to limitations of the data on which direct estimates might be based. Program records, such as those used by PedNSS and PNSS, have the advantage of timeliness since they are usually collected continuously, but their relevance may be limited because the data are collected for specific administrative purposes, not just for nutrition monitoring. Sample survey data, however, have the advantage of relevance, but the data can be costly to obtain at a level of detail that will support estimates for states, localities, or other subpopulation groups. 
In contrast, indirect estimation is an approach to producing timely, relevant, and detailed information without a major increase in cost.

The major limitation of indirect estimation is the difficulty of determining the quality of the estimate. The best way to assess quality is to compare the estimate to the true value for the population. For example, comparing an indirect estimate of a county’s population to census data on the county’s population would enable an assessment of the bias of the estimate. However, since indirect estimation is used when data are not available for direct estimation, such a comparison is usually not feasible. Moreover, the quality of the estimates yielded by a model changes for different populations and for different times. In other words, even if a model yields estimates that prove to be unbiased in comparison to direct observations for the same year, estimates from the same model for subsequent years may be biased if the relationships between the variables change over time. This limitation has an implication for the use of indirect estimates. Specifically, indirect estimates may be difficult to defend in the political arena because they are based on models rather than direct observations. However, in the absence of direct data from other, more expensive approaches, indirect estimation is preferable to no information at all.

Although indirect estimation has been used successfully with other federal surveys, there are constraints on the development of indirect estimation programs for the major nutrition monitoring surveys. According to USDA and HHS staff, major obstacles include the lack of staff resources to support the program and the complexity of the surveys. Despite these concerns, both agencies have long experience with indirect estimation in other arenas, which could be applied to nutrition monitoring.
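The quality check described earlier in this section (comparing an indirect estimate with a known true value, such as a census count) amounts to simple arithmetic. The sketch below uses entirely invented county figures to show how the bias (mean relative error) and the typical size of the miss would be computed when true values happen to be available.

```python
# Hypothetical check of indirect estimates against known true values,
# in the spirit of the census comparison discussed in the text.
# Every number below is invented for illustration.
direct = {"County A": 52_000, "County B": 18_500, "County C": 95_200}    # true values
indirect = {"County A": 54_100, "County B": 17_800, "County C": 99_000}  # model output

# Relative error of each county's indirect estimate.
errors = {c: (indirect[c] - direct[c]) / direct[c] for c in direct}

# Bias: mean relative error (systematic over- or under-estimation).
bias = sum(errors.values()) / len(errors)

# Mean absolute relative error: typical size of the miss, sign ignored.
mean_abs_error = sum(abs(e) for e in errors.values()) / len(errors)

for county, e in errors.items():
    print(f"{county}: {e:+.1%}")
print(f"bias: {bias:+.1%}   mean absolute error: {mean_abs_error:.1%}")
```

In practice, as the text notes, such a comparison is rarely possible, which is precisely why the quality of indirect estimates is hard to establish: the arithmetic is trivial, but the true values are usually missing.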
Information on subpopulation groups and small geographic areas is used to identify nutrition-related problems that are associated with specific populations and to target programs to those most at risk. Different strengths and weaknesses are associated with the four approaches to meeting this need that we reviewed. Surveillance systems and indirect estimation programs are likely to be less expensive than oversampling or special surveys. However, oversampling and special surveys can yield more detailed information than the surveillance systems that rely on program records and more dependable estimates than indirect estimation models. Of the four approaches, only indirect estimation has not been a part of NNMRRP activities at one time or another. Judging from programs in other areas, this approach appears to be a potentially efficient means of expanding the information available on subpopulations and small geographic areas. However, before the Interagency Board can determine what priority to give to this promising approach, a complete picture of what needs would be met by each option is required.

State and local governments are interested not only in the applicability of federal nutrition monitoring data to their jurisdictions, but also in having federal assistance in collecting and interpreting their own data. This chapter follows the previous discussion about NNMRRP support for reliable inferences about state and local populations by describing current and potential options for assisting states and localities in their own monitoring activities. The NNMRRP currently assists states through the state-based surveillance systems, which provide technical and other assistance to states. The alternative that we examined, community-based nutrition monitoring, is a response to states and localities that are interested in building their capacity to collect their own data.
The strengths and weaknesses of these two approaches to assisting states and localities are summarized in table 6.1 and detailed below. Other suggestions by our expert panelists—such as providing financial and technical assistance, developing standardized modules of interest for state data collection activities, and assisting state collection of data on subpopulations—can be implemented as part of either of the two approaches.

The strengths and limitations of the surveillance systems as sources of continuous data and of information on state and other population groups have been discussed in chapters 4 and 5. However, they have additional strengths and limitations related to their ability to meet state and local needs for assistance. Surveillance systems currently balance the federal interest in information that is collected in a standard format across states and the states’ needs for flexibility. For example, with BRFSS, HHS supplies states with standardized modules of questions, training in collecting the data, and support in analysis and reporting. In addition, changes to the survey content are made in consultation with state participants in the surveillance system, and states have the opportunity to add their own questions to the survey. The surveillance systems have also played a role in building state capacity for data collection and analysis. Specifically, some states have used their experience with BRFSS to implement their own telephone surveys using the random digit dialing procedure. In addition, standardized software developed by HHS for PNSS has enabled states to generate their own reports. Yet another strength of the surveillance systems is the foundation of federal-state partnership they provide for future improvements of federal technical assistance to state and local nutrition monitoring activities.
For example, as mentioned in previous chapters, HHS is exploring ways to use the surveillance systems to gather additional nutritional data, such as more in-depth information on dietary intake, and to cover new populations, such as schoolchildren. Additional technical assistance in data analysis and interpretation could also be provided through the surveillance system structure. While the surveillance systems could be expanded and improved to further respond to the interest of states and localities in receiving more technical assistance, they are limited in their flexibility because of the federal interest in standardization across states. In addition, users of surveillance system data who responded to our survey identified some concerns about the surveillance systems. Specific issues included the availability of the data to localities, timeliness of HHS’ processing of the data, and the formats of the reports that HHS provided. Recommendations for improving the surveillance systems included increasing local access to data, reducing the time it takes HHS to process the data, simplifying reports for local users, and providing additional technical and financial assistance in data collection and interpretation.

The federal-state linkage forged by the surveillance systems could be further extended to support community-based nutrition monitoring. To explore the strengths and limitations of this approach, we reviewed the literature on two models of community-based programs. HHS has funded Planned Approach to Community Health (PATCH) projects, which used the BRFSS survey instrument as the basis for a community needs assessment. The survey data, in combination with interviews with knowledgeable informants in the community, were used to plan health promotion programs. HHS provided technical assistance in the needs assessment portion of the projects and small awards of a few thousand dollars for project activities.
Researchers at Cornell University developed a similar model specifically for nutrition monitoring that was pilot-tested in three New York counties with funding from the state health department and technical assistance from the university. In their approach, a coalition of interested community members first articulates potential information needs. Then, the group selects specific needs on the basis of the likelihood that the data will be used by the community. To meet the selected information needs, feasible sources of routinely available data are identified. According to the model, data collection should depend on procedures that are already in place, from such sources as program participation records, patient charts, and school health screenings. In its reliance on program records, this approach is similar to PedNSS and PNSS. However, there is no expectation that the same issues will be targeted or the same sources of data used in each community. To facilitate the final component of the system—the communication of monitoring results—a network of users must be cultivated and the appropriate vehicle for communicating the results must be used.

Drawing on the experience of these projects and HHS’ PATCH program, the following strengths and limitations of community-based nutrition monitoring were identified. Evaluations of both types of community-based programs found evidence that local capacity for collecting and using data was developed. By involving community members in assessing needs, PATCH built skills in identifying health risks and cultivated community and organizational supports for health promotion programs. In the New York counties that pilot-tested the Cornell model of community-based nutrition monitoring, data were collected and compiled to describe access to food and nutrition services and the nutritional health of county residents. To disseminate the information and guide decisions based on the data, interagency coalitions were formed.
These coalitions brought together local nutrition-related professionals, so that nutrition interventions were better coordinated and information was shared. The limited federal investment in the community-based programs is another strength. HHS supported PATCH with small grants and technical assistance, while the community-based nutrition monitoring projects were sponsored by the state and received technical assistance from the state’s land grant institution. At the end of the pilot stage, one county obtained local funding to continue its monitoring activities, including the issue-based coalitions; in the other counties, local nutrition councils are coordinating continued monitoring efforts. The program has also spread to other counties, some of which have initiated monitoring activities without outside funding.

The experience with community-based programs indicates that communities require a set of resources—specifically, technical skills and dedicated personnel—to fully benefit from the projects. One evaluation of PATCH found that the projects were most effective in communities that already had human services and health professionals who were involved in community health promotion efforts. In addition, the projects that had directors seemed to have the greatest implementation successes. Both kinds of projects required considerable time and effort to collect and interpret data. Moreover, while HHS provided technical assistance in using the BRFSS survey in the local needs assessments, additional assistance was needed to help communities set priorities based on the data. The reliance on BRFSS was a limitation for PATCH because BRFSS did not necessarily include the issues of primary concern for the community. For example, one PATCH project was specifically interested in water quality, which is not addressed in BRFSS. In contrast, the Cornell model emphasized local sources of data.
Based on the initial experience with PATCH, HHS no longer expects communities to use BRFSS for its needs assessment. Data from the HHS surveillance systems are already extensively used in state and local program planning. In addition, the systems are flexible and are exploring ways to increase the information they collect on dietary intake. However, they seem less responsive to the needs of localities than to the needs of states. Recent efforts to build local capacity for data collection and interpretation indicate that community-based programs are a promising approach to responding to local needs for nutrition information. Before the Interagency Board can decide what priority to place on community-based nutrition monitoring, however, it must first identify the objectives of the NNMRRP that would be furthered and the importance of these objectives relative to others that are competing for program resources.
Pursuant to a congressional request, GAO examined a model nutrition monitoring system and potential approaches to incorporating it into the National Nutrition Monitoring and Related Research Program (NNMRRP). GAO found that: (1) a model nutrition program would have a coordinated set of activities, provide data continuously, generate reliable estimates for subpopulations and small geographic areas, and support state and local monitoring activities; (2) although NNMRRP has elements of a model program, other strategies may lead to improved nutrition monitoring capabilities; (3) alternate approaches to coordination may not provide any clear advantages over the current structure; (4) the current state-based surveillance systems cannot meet certain information needs; (5) alternative approaches to achieving a model program are to attach modules of nutrition-related questions to other ongoing surveys or to field a core set of questions continuously, supplemented periodically by questions of emerging interest; (6) current approaches to providing information on subpopulations and small geographic areas are to oversample selected groups as part of the national surveys and to collect data on specific high-risk groups through the surveillance systems, which could be complemented by special studies and indirect estimation; and (7) to support state and local monitoring activities, the Department of Health and Human Services provides technical and financial assistance for state-based surveillance systems, but community-based data collection might provide more relevant data to localities.
To determine what is known about the supply of and domestic demand for lithium-7, we analyzed data provided by industry representatives, reviewed agency and industry documents, and interviewed agency officials and industry representatives. Specifically, to understand the supply and domestic demand of lithium-7, we reviewed data from the three brokers that purchase lithium hydroxide from China and Russia and sell it to utilities and other companies in the United States. To assess the reliability of the data, we interviewed lithium-7 brokers about the data and found the data to be sufficiently reliable for purposes of this report. We also obtained information on China’s supply and demand for lithium-7 from an expert on nuclear reactors at the Massachusetts Institute of Technology who was identified by DOE and Y-12 officials. Additionally, this expert has been working with DOE in its meetings with scientists from the Chinese Academy of Sciences regarding China’s research on new reactor designs. We also reviewed documents provided by DOE, Y-12, and two utilities that operate pressurized water reactors—Tennessee Valley Authority (TVA) and Exelon. We also interviewed representatives of companies that buy, sell, and/or handle lithium hydroxide, including Ceradyne, Inc., Isoflex, Nukem Isotopes, and Sigma Aldrich, and officials from DOE, NNSA, and Y-12.

To examine the responsibilities of DOE, NRC, and other entities in assessing risks to the lithium-7 supply, and what, if anything, has been done to mitigate a potential supply disruption of lithium-7, we reviewed documents from DOE, Y-12, and NRC. We also interviewed officials from DOE’s Isotope Program and the Office of Nuclear Energy; NNSA’s Office of Nuclear Materials Integration, Office of Nuclear Nonproliferation and International Security, and Y-12; and NRC.
We also interviewed representatives from Exelon, TVA, EPRI, North American Electric Reliability Corporation, Nuclear Energy Institute, Pressurized Water Reactors Owners Group, Ceradyne, Inc., and Isoflex. In addition, we compared actions DOE is taking to manage and communicate lithium-7 supply risks with federal standards for internal control. To identify additional options, if any, for mitigating a potential lithium-7 shortage, we reviewed technical articles and documents from industry and academia, DOE, Y-12, and NRC. We also interviewed officials from DOE’s Isotope Program, Office of Nuclear Energy, and Idaho National Laboratory; Y-12; and representatives from Exelon, TVA, and EPRI. We conducted this performance audit from June 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Lithium-7 was produced in the United States as a by-product of enriching lithium-6 for the United States’ nuclear weapons program. Lithium-7 and lithium-6 are derived from natural lithium, which contains about 92.5 percent lithium-7 and about 7.5 percent lithium-6. Lithium-6 was enriched in the United States by separating it from lithium-7 using a column exchange process, called COLEX, that required very large quantities of mercury, which can harm human health and the environment. Y-12 built a COLEX facility, began operations in 1955, and used the facility through 1963 to enrich lithium-6 and lithium-7. Y-12 experienced several problems with the COLEX process, including equipment failures, worker exposure to mercury, and mercury contamination of the environment.
Y-12 shut the COLEX facility down in 1963 and has not operated it since then. While the United States still has a stockpile of lithium-6, DOE sold the lithium-7 by-product to commercial companies, though some was enriched and still remains stored at Y-12.

Lithium-7 serves two functions in a pressurized water reactor: it is used to produce the lithium hydroxide that is added to the cooling water to reduce its acidity, and it is added to demineralizers that filter contaminants out of the cooling water. The cooling water becomes acidic due to the addition of boric acid, which contains boron-10, an isotope of boron that is used to manage the nuclear reaction in the core; the use of both boron-10 and lithium hydroxide is based on reactor core design requirements and water pH requirements for corrosion control. Lithium hydroxide is made with lithium-7 rather than lithium-6 or natural lithium, which contains lithium-6, because lithium-6 would react with nuclear material in the reactor core to produce tritium, a radioactive isotope of hydrogen. According to industry representatives, lithium hydroxide is added directly to the cooling water, via a chemical feed tank, when a pressurized water reactor is started up after being shut down, such as after refueling. Lithium-7 is also used in special water purifiers—called demineralizers—that remove radioactive material and impurities from the cooling water. Figure 1 shows the flow of water through a typical pressurized water reactor, though some variations among reactors may exist. As the cooling water circulates in the primary cooling loop, as shown in figure 1, some of the water flows through pipes to the demineralizers and the chemical feed tank where the lithium hydroxide is added.
There is no domestic production of lithium-7, and little is known about the lithium-7 production capabilities of China and Russia and whether they will be able to provide future supplies. China and Russia produce lithium-7 as a by-product of enriching lithium-6 for their nuclear weapons programs, according to a DOE official, much like the United States previously did. Because of the secrecy of their weapons programs, China’s and Russia’s lithium-7 production capabilities are not fully known, according to lithium-7 brokers. According to industry representatives, lithium-7 brokers, and NNSA documents, China and Russia have produced enough lithium-7 to meet the current U.S. demand, which is not expected to increase significantly in the near future, based on DOE’s information that shows five new pressurized water reactors scheduled to begin operating by 2018. Additionally, during the course of our review, utilities announced that four pressurized water reactors would be decommissioned, eliminating their demand for lithium-7.

China’s continued supply of lithium-7 may be reduced by its own growing demand, created by the construction of new reactors and the development of new reactor designs. China’s demand is expected to increase because, according to information from DOE, the International Atomic Energy Agency, and an expert on nuclear reactors who has met with Chinese scientists on this topic, China is constructing over 25 pressurized water reactors that are scheduled to begin operating by 2015. Additionally, China is planning to build a new type of nuclear power reactor—a molten salt reactor—that will require dramatically larger amounts of lithium-7 to operate. China is pursuing the development of two different types of molten salt reactors, according to the expert, each of which will result in a reactor that requires thousands of kilograms of lithium-7 to operate, rather than the approximately 300 kilograms (about 660 pounds) annually needed for all 65 U.S.
pressurized water reactors combined, according to lithium-7 brokers. China’s first molten salt reactor is expected to be finished by 2017, and the second reactor by 2020, according to the reactor expert. Furthermore, molten salt reactors require a purer form of lithium-7—99.995 percent or higher—than what is currently produced by China and Russia, according to the reactor expert and a lithium-7 broker. To obtain the more highly enriched lithium-7, according to the reactor expert who is familiar with China’s research, China built a small facility that will feed in lower-enriched lithium-7 and enrich it to the higher level of purity that is needed. An Isotope Program official suggested to us that China’s new facility could increase the available supply of lithium-7 for pressurized water reactors. However, according to the reactor expert, this new facility may reduce the supply of lithium-7 available for export since it uses lithium-7 as feedstock. This expert said that China has obtained lithium-7 from its own supplies and has purchased additional lithium-7 from Russia to enrich in its own facility, possibly making China a net importer of lithium-7. It is unknown, however, whether China has enough lithium-7 for its increased nuclear fleet and molten salt reactors, or if it will need to import additional quantities, which could reduce the available supply of lithium-7. For example, one lithium-7 broker told us in June 2013 that China had no lithium-7 that it could sell to this broker. Russia’s supply of lithium-7, on the other hand, may be largely available for export because Russia is believed to have very little domestic demand for lithium-7. Russia’s fleet of pressurized water reactors does not use lithium hydroxide because the reactors were specifically designed to use potassium hydroxide to lower the cooling water’s acidity. However, because Russia’s production capacity of lithium-7 is not known, U.S.
utilities cannot be assured that Russia will continue to meet their demand for lithium-7 as China’s demand increases. For example, one lithium-7 broker told us in June 2013 that he is having difficulty getting lithium-7 from Russia, though he is unsure if it is because Russia is unable to meet demand or for some other reason.

The risk of relying on so few producers of lithium-7 leaves the 65 pressurized water reactors in the United States vulnerable to supply disruptions. In 2010, for example, we reported on the challenges faced by the Department of Defense when it experienced supply disruptions in rare earth elements—17 elements with unique magnetic properties that are produced almost exclusively in China. Specifically, we reported that a Department of Defense program was delayed due to a shortage of rare earth elements. China, which controls most of the market for rare earth materials production, caused a shortage when it decreased its exports of rare earth materials. At the time of our report, the Department of Defense and other federal agencies were taking steps to mitigate a shortage to prevent future supply disruptions. In the case of lithium-7, according to representatives of two utilities, if not mitigated, a lithium-7 shortage could possibly lead to the shutdown of one or more pressurized water reactors. Pressurized water reactors are temporarily shut down to refuel about every 18 months, after which time lithium-7, in the form of lithium hydroxide, is added to the cooling water, according to industry representatives. TVA representatives explained that nuclear reactors are scheduled for refueling during times when there is low demand for electricity, such as the spring or fall, when there is less need for heating or air-conditioning of homes and businesses. During peak times of electricity use, such as the summer months, commercial nuclear reactors are critical for maintaining the stability of the electrical grid, according to industry representatives.
Without lithium hydroxide or some alternative, industry representatives told us, they would not be able to restart the pressurized water reactors after refueling. According to NRC officials, operating a pressurized water reactor without lithium-7 could be done, but it would significantly increase the corrosion of pipes and other infrastructure.

No federal entity has taken stewardship responsibility for assessing risks to the lithium-7 supply for the commercial nuclear power industry. However, DOE has taken some steps in this area. Specifically, DOE studied lithium-7 supply and demand and concluded that no further action is needed, but our review found shortcomings in DOE’s study.

No federal entity has taken stewardship responsibility for assessing and managing risks to the supply of lithium-7 for commercial use. Federal stakeholders—DOE, NRC, and NNSA—told us they view lithium-7 as a commercial commodity for which industry is responsible. Officials in DOE’s Isotope Program told us that because lithium-7 is a material bought and sold through commercial channels and used by industry, industry is responsible for monitoring the supply risks and managing those risks as it would do for any other commercial commodity. The Isotope Program produces isotopes that are in short supply, not those that are produced commercially in adequate quantities. Notwithstanding, Isotope Program officials told us that the program’s mission includes isotopes that have the potential for being in short supply and that they see the Isotope Program’s role as being the lead office within DOE on issues related to lithium-7. Additionally, an Isotope Program official told us that the program must be careful not to address lithium-7 risks too aggressively because that may signal to industry stakeholders that DOE is taking responsibility for mitigating these risks—risks that DOE views as the responsibility of industry to manage.
NRC officials also told us that they believe industry is better suited to address any problems with the lithium-7 supply because the utilities are more likely to be aware of and have more information related to supply constraints than NRC or other federal government agencies. Similarly, officials in DOE’s Office of Nuclear Energy said that, in their view, industry is responsible for addressing lithium-7 risks, and their office’s role is to serve as liaison between DOE and industry. One DOE official said that industry probably would be aware of a shortage before any government agency would be. An official in NNSA’s Office of Nuclear Materials Integration noted that NNSA is responsible for ensuring there is a sufficient supply of lithium-7 for federal demand but not for industry’s demand. Furthermore, this official said that utilities are in the electricity business and should, therefore, assume the responsibility of assessing and managing risks. This official also stated that, in his view, given the importance of lithium-7 to the nuclear power industry, the commercial market would respond by increasing production to bring supply and demand into balance. However, our review found no other countries with the capability to enrich lithium-7 and, as described above, it is unclear if Russia and China will be able to meet increased demand. We reported in May 2011 on the importance of stewardship responsibility for critical isotopes. Specifically, our review found that a delayed response to the shortage of helium-3 in 2008 occurred because, among other things, there was no agency with stewardship responsibility to monitor the risks to helium-3 supply and demand. The shortage was addressed when an interagency committee took on a stewardship role by researching alternatives and allocating the limited supply, among other things. 
In that report, we recommended the Secretary of Energy clarify which entity has a stewardship role for 17 isotopes that are sold by the Isotope Program. In its comments on that report, NNSA stated that it could implement our recommendation, but to date, DOE and NNSA have not determined which entity or entities should serve as steward for lithium-7, and no federal entity has assumed such responsibility. The nuclear power industry may not be concerned about lithium-7 supply disruptions because it may not be aware of all the risks. Industry representatives we spoke with said that they have no concerns over the lithium-7 supply because they have not experienced any supply problems. For example, representatives from one utility said they have never had a problem obtaining lithium-7 so they did not see a need to consider actions to mitigate future supply disruptions. Similarly, representatives from EPRI said that they are not doing any work related to lithium-7 because there is no demonstrated need. However, EPRI representatives said they were surprised to recently learn from DOE that China is researching the development of molten salt reactors. These representatives said that such a development is important for EPRI’s considerations of the lithium-7 issue. EPRI representatives told us they need to learn about all the factors relating to the current and future supply and demand of lithium-7 so those factors can be incorporated into EPRI’s decision-making process and long-term planning. We discussed this point with DOE officials, and they were surprised to hear that industry was previously uninformed about China’s development of molten salt reactors. An official from DOE’s Office of Nuclear Energy told us the risks to the lithium-7 supply had been discussed with industry representatives in October 2012, including China’s increased domestic demand for new reactors and for research on molten salt reactors, all of which could impact the lithium-7 supply. 
In addition to the longer-term supply challenges created by increased Chinese domestic demand for lithium-7, there are also the recent examples of brokers facing supply disruptions. As previously discussed, two of the lithium-7 brokers told us they are having difficulty obtaining lithium-7 from China and Russia. The recent nature of this information, the uncertainty over whether these are isolated difficulties or indicative of a trend, and the fact that the impact has not yet been felt by utilities could also contribute to industry’s current assessment that the risks of a possible lithium-7 supply disruption are low. Some industry representatives stated that, if there is a shortage, the federal government should be involved to ensure the reliability of the electrical grid. For example, EPRI representatives said that, in the event of a shortage, EPRI’s role would be to research options for replacing lithium-7, but also said that government involvement is needed to ensure the reliability of the electrical grid. Moreover, industry may not have access to all the sources of information that are available to DOE. DOE studied the supply and demand of lithium-7 and concluded that no further action is needed to mitigate a potential lithium-7 shortage, but our review found shortcomings in its assessment of domestic demand and the mitigation measures it identifies for industry to consider implementing. In conducting this study, Isotope Program officials collaborated with officials in DOE’s Offices of Nuclear Energy and Intelligence and Counterintelligence and NNSA’s Office of Nuclear Materials Integration and had discussions with EPRI and other industry representatives. DOE’s study, which was completed in May 2013, identifies some risks to the lithium-7 supply, describes several actions that industry could take to help mitigate a shortage, and lists the steps that DOE’s Isotope Program is taking, or plans to take. 
According to DOE’s study, there are several risks to the lithium-7 supply that could result in a shortage in a matter of years. Specifically, DOE’s study points out that increasing demand for lithium-7 from the construction of additional pressurized water reactors and the development of molten salt reactors is a risk to the lithium-7 supply because demand could exceed the supply in a matter of years if production does not increase. The study also points out the risks of relying on two foreign suppliers for lithium-7 and notes that a supply shortage is a low-probability but high-consequence risk. DOE’s study also describes several actions that industry could take to help mitigate a lithium-7 shortage. In discussions with DOE, industry representatives identified the following four actions that the nuclear power industry could take should a shortage of lithium-7 occur: recycling lithium-7 from the demineralizers; increasing the burnable poisons in the reactor fuel; reducing the amount of lithium-7 needed by using boric acid that is enriched with boron-10, which would reduce the amount of boric acid added to the cooling water and thus the water’s acidity; and developing alternative sources of lithium-7, including building a domestic lithium-7 production capability. DOE’s study of lithium-7 also lists two steps the Isotope Program is taking and concludes that no further action is needed. 
First, the study states that the Isotope Program will work with NNSA to prevent its inventory of contaminated lithium-7 at Y-12 from being disposed of or distributed without approval from DOE and will request that NNSA retain 200 kilograms (441 pounds) of this inventory to be purified and then sold to the nuclear power industry in the event of a supply disruption. Second, according to Isotope Program officials, as part of its mission to support isotope production research and development, the program is also funding research on enriching lithium-7 without employing the mercury-intensive COLEX method that was previously used. The study concludes that the listed steps serve as an acceptable short-term strategy for mitigating the risks of a lithium-7 shortage and that no additional action is needed. Nevertheless, our review found several shortcomings in DOE’s study regarding its assessment of domestic demand for lithium-7 and the feasibility of the actions it says industry can take to mitigate the risks of a supply disruption. First, our review found that DOE’s Isotope Program, as well as Y-12, underestimated domestic demand for lithium-7. While studying lithium-7 supply and demand, DOE’s Isotope Program and Y-12 both estimated annual domestic demand for lithium-7 to be about 200 kilograms per year, whereas the lithium-7 brokers estimated domestic demand to be over 300 kilograms (662 pounds) per year, on average, from 2008 through 2012. Isotope Program and Y-12 officials told us that their estimate of 200 kilograms per year includes lithium-7 used in cooling water but does not include lithium-7 used in demineralizers, which the lithium-7 brokers did account for. Second, DOE’s study concludes that there is enough lithium-7 in inventory held on-site at reactors to keep the reactors operating during the approximately 7 months required to purify Y-12’s lithium-7. 
However, DOE officials involved in the study said they did not collect any data from utilities to determine what quantities they held in inventory, and industry representatives told us that they are not aware of any entity that keeps records of the amount of lithium-7 inventory held at utilities across the industry. Some industry representatives also said that there is no standard practice for when to purchase lithium-7 or how much inventory to have on hand and that they believe inventory practices vary from utility to utility. Regarding the measures the study indicates industry can take to mitigate a potential lithium-7 supply shortage, our review found that DOE’s study provides more optimistic assessments than industry’s of the challenges involved in implementing these actions. For example, DOE’s study characterizes the process for recycling lithium-7 from demineralizers as straightforward and of low technical risk, and it states that recycling can be implemented within a year. However, according to representatives of a utility with whom we spoke, there is no existing method to retrieve and recycle the lithium-7 from the demineralizers. According to EPRI representatives who provided information for DOE’s study, the process is challenging because extracting lithium-7 from the demineralizers may require a special process to separate it from the other materials in the demineralizers, some of which pose radiation risks. In addition, there are application challenges to recovering the lithium-7, such as modifying the plants to implement the process. EPRI representatives estimated it would take more than a year to develop the technology, and potentially many years to address the application challenges, before this process could be implemented. 
Another mitigation option that DOE’s study identifies is increasing burnable poisons—isotopes added to the nuclear fuel to help control the nuclear reaction—which would decrease the amount of boron required in the cooling water, in turn reducing the amount of lithium-7 needed to decrease acidity. The study states that doing so should not take a long time to implement, based on the premise that the modified fuel could be introduced when plants refuel, which is about every 18 months. EPRI representatives, however, said this would be a longer process because any given fuel assembly is typically in the reactor for three operating cycles of 18 months each, which means a fuel assembly would be in the reactor for a total of about 4½ years before being replaced. Also, according to NRC officials, a change in the fuel would require extensive modeling, testing, and regulatory reviews, which could take considerably longer than 4½ years. Given the shortcomings in DOE’s study, combined with the recent supply problems reported by brokers, it is unclear whether the study’s conclusion that no additional actions need to be taken is correct. Based on information from government officials and industry representatives, we identified three options for mitigating a potential lithium-7 shortage in the near and long term, which could be implemented by government, industry, or even a committee of federal and industry stakeholders. The three near- and long-term options are: building a domestic reserve of lithium-7, building domestic capability to produce lithium-7, and reducing pressurized water reactors’ reliance on lithium-7. The first option—building a domestic reserve of lithium-7—is a relatively low-cost option and would provide a fixed quantity of lithium-7 that, in the event of a shortage, could be used until a long-term solution is implemented. 
Establishing a domestic reserve would involve building up a stockpile of lithium-7 by importing an additional quantity above what is needed each year, purifying all or a portion of the existing supply of lithium-7 at Y-12 to make it suitable for use in pressurized water reactors, or a combination of these two. Stockpiling could be accomplished by individual utilities or, for example, by a steward that could maintain the supply for all utilities. Increasing imports to establish a domestic reserve could be initiated immediately, and the cost would be based on the market price of lithium-7, which is currently less than $10,000 per kilogram (about 2.2 pounds). However, stockpiling lithium-7 would have to be carefully managed to avoid a negative impact on the market—stockpiling lithium-7 too aggressively could cause the price to increase or otherwise disrupt the available supply. A second way to help build up a reserve is the purification of all or a portion of the 1,300 kilograms of lithium-7 at Y-12. DOE has plans to set aside 200 kilograms of the 1,300 kilograms of lithium-7 at Y-12, which could be purified and sold to utilities. DOE estimates it would take about 7 months to purify 200 kilograms at a cost of about $3,000 per kilogram, for a total cost of about $600,000; purifying the remainder of the 1,300 kilograms would likely incur additional costs. The second option—building a domestic lithium-7 production capability—is a longer-term solution that would reduce or eliminate the need for importing supplies, but it would take several years to develop the technology and construct a production facility. While lithium separation was done in the United States until 1963 using the COLEX process, DOE and Y-12 officials told us that the COLEX separation method will not be used for a new production facility because of the large quantities of mercury it requires. 
Officials from DOE and Y-12, as well as industry representatives, identified several other potential separation techniques that do not use mercury, such as solvent extraction, a process in which the components to be separated are preferentially dissolved by a solvent, and electromagnetic separation, a process that uses electric and magnetic fields to separate isotopes by their relative weights. While these techniques have been developed and used to separate other materials—for example, electromagnetic separation was used to separate isotopes of uranium—further development of the techniques specifically for use with lithium-7 would be needed, according to DOE documentation. In particular, DOE’s Isotope Program is funding a proposal from scientists at Oak Ridge National Laboratory and Y-12 to conduct research on lithium separation techniques using solvent extraction processes, which have been used in the pharmaceutical industry. If successful, according to Y-12, its proposed research would provide the basis for an industrial process to produce lithium-7. According to Y-12 officials, the entire research and development process, and the construction of a pilot facility capable of producing 200 kilograms of lithium-7 per year, would take about 5 years and cost $10 million to $12 million. The third option—reducing pressurized water reactors’ reliance on lithium-7—is also a longer-term option that would generally require changes in how reactors are operated and may produce only modest reductions in the use of lithium-7. Four possible changes that could be made to reactors include the following: Lithium-7 can be recycled from used demineralizers. According to industry representatives, the chemistry required for the recycling process would be challenging, would require plant modifications, and may pose risks to workers due to the presence of radioactive materials. 
This option would reduce the amount of lithium-7 needed for demineralizers but not reduce the amount of lithium-7 needed for the cooling water. Potassium hydroxide can be used in lieu of lithium hydroxide in the cooling water. According to nuclear power industry representatives, making such a change would require about 10 years of research to test the resulting changes in the rate of corrosion of pipes and other infrastructure in the reactor. Using enriched boric acid in the cooling water in place of natural boric acid would require less boric acid to be used, which would reduce the acidity of the water and result in less lithium-7 being needed. According to industry representatives, however, enriched boric acid is expensive, and this change may require plant modifications and would only modestly reduce the amount of lithium-7 needed. The nuclear fuel used in pressurized water reactors could be modified to reduce the need for boric acid and thus also reduce the amount of lithium-7 needed. According to industry representatives, however, this would be expensive and require long-term planning because utilities typically plan their fuel purchases for refueling 1½ to 4½ years in advance. According to one utility, changing the fuel could also have widespread impacts on operations and costs that are difficult to quantify. Industry representatives characterized all four possible changes to pressurized water reactors for reducing the demand for lithium-7 as requiring significant modifications to reactor operations at all 65 pressurized water reactors. Furthermore, these possible changes would need to be studied in more detail to determine the associated cost, time, and safety requirements before implementation and, if necessary, approved by NRC, all of which may take several years. 
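The fuel-cycle timing that drives these estimates can be checked with simple arithmetic. The sketch below is illustrative only, using the cycle lengths cited by industry representatives; it is not a DOE or EPRI model.

```python
# Why a fuel-design change is a multi-year undertaking: a fuel assembly
# loaded at one refueling outage typically stays in the core for three
# operating cycles before it is replaced.

MONTHS_PER_CYCLE = 18      # reactors refuel about every 18 months
CYCLES_PER_ASSEMBLY = 3    # a fuel assembly serves about three cycles

residence_months = MONTHS_PER_CYCLE * CYCLES_PER_ASSEMBLY
residence_years = residence_months / 12

print(f"Core fully converted to a modified fuel design after about "
      f"{residence_years} years")
```

The resulting 4½-year figure is before the extensive NRC modeling, testing, and regulatory review that officials said could take considerably longer.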
DOE studied the lithium-7 supply and demand situation, including identifying some supply risks, and is undertaking some actions to help mitigate a potential shortage, such as setting aside 200 kilograms of lithium-7 as a reserve. However, relying on two foreign producers to supply a chemical that is critical to the safe operation of most of the commercial nuclear power reactors in the United States places their ability to continue to provide electricity at some risk. Furthermore, the recent problems some brokers reported in obtaining lithium-7 from Russia and China, combined with China’s increasing demand for lithium-7, suggest that the potential for a supply problem may be increasing. DOE has not taken on stewardship responsibility, in part because lithium-7 is not in short supply; if it were, it could fall under the Isotope Program’s mission. However, waiting for a critical isotope with increasing supply risks to fall into short supply before taking action does not appear consistent with the mission of the Isotope Program. Because no entity has assumed stewardship responsibility for lithium-7, supply risks may not have been effectively communicated to industry, which could otherwise weigh the risks and respond appropriately. Furthermore, there is no assurance that the risks have been fully analyzed and mitigated, as outlined in federal standards for internal control. Similarly, a shortage of helium-3 occurred in 2008 because, among other things, there was no agency with stewardship responsibility to monitor the risks to helium-3 supply and demand. The shortage was addressed when an interagency committee took on a stewardship role by researching alternatives and allocating the limited supply, among other things. 
Some DOE officials have described lithium-7 as a commercial commodity used by industry and therefore assert that industry is responsible for addressing any supply problems, despite lithium-7’s importance to the electrical grid; NNSA and NRC concur that industry is responsible. Yet industry, unlike DOE, is not in a position to be aware of all the risks. DOE has studied lithium-7 supply and demand to guide its decisions related to lithium-7. However, its study contains shortcomings, including underestimating domestic demand, and may underestimate the technological challenges industry would face in trying to adjust to a supply disruption. These shortcomings bring into question DOE’s conclusion that no additional actions are needed to mitigate a potential lithium-7 shortage. In the end, without a full awareness of supply risks and an accurate assessment of domestic demand, utilities may not be prepared for a shortage of lithium-7. This leaves the reactors that depend on lithium-7 vulnerable to supply disruptions that, if not addressed, could lead to their shutdown. To ensure a stable future supply of lithium-7, we recommend that the Secretary of Energy direct the Isotope Program, consistent with the program’s mission to manage isotopes in short supply, to take on the stewardship role by fully assessing supply risks; communicating risks, as needed, to stakeholders; ensuring risks are appropriately managed; and fully and accurately determining domestic demand. We provided a draft of this report to DOE and NRC for review and comment. In written comments, DOE’s Office of Science’s Acting Director, responding on behalf of DOE, wrote that DOE concurred with our recommendation. DOE’s written comments on our draft report are included in appendix I. In an e-mail received August 15, 2013, NRC’s Audit Liaison in the Office of the Executive Director for Operations stated that NRC generally agreed with the report’s content and recommendation. 
DOE and NRC provided technical comments that we incorporated as appropriate. In its comment letter, DOE concurred with our recommendation and stated that, in its view, ongoing efforts by DOE’s Isotope Program satisfy the recommendation. Specifically, DOE’s letter states that to further address lithium-7 utilization, demand, and inventory management, the Isotope Program has initiated the development of a more in-depth survey coordinated directly with the power industry through the Electric Power Research Institute—a new undertaking that we learned about after providing a draft of our report to DOE for comment. We believe that this undertaking is especially important since we found that few people in industry were aware of the lithium-7 supply risks. In its written comments, DOE also states that the report includes several inaccurate descriptions of the federal role with respect to the response to lithium-7 availability and demand. Specifically, DOE does not agree with our characterization that there is a lack of federal stewardship for assessing and managing risks to the lithium-7 supply. DOE states that it has been active in assessing and managing supply risks, including engaging with stakeholders, forming an internal working group, and identifying actions to be taken to mitigate a shortage. We disagree and believe that DOE’s comment letter overstates both the department’s level of awareness of lithium-7 supply risks and its involvement in mitigating these risks. At no time during our review did any DOE official characterize DOE as a steward of lithium-7 or state that the agency will manage supply risks. Notably, during our review, the Director of the Facilities and Project Management Division, who manages the Isotope Program, told us that the Isotope Program is not the steward of lithium-7, nor should it be. 
Regarding engagement with stakeholders, we found that Isotope Program officials were aware of only two of the three key brokers of lithium-7 until we informed them of the third broker during a meeting in June 2013—over a year after the program became aware of a potential lithium-7 supply problem. Moreover, at this same meeting, program officials were not yet aware of recent lithium-7 supply problems experienced by two of the three lithium-7 brokers. Regarding mitigation actions, while DOE states in its comment letter that industry stakeholders identified actions for consideration should a shortage of lithium-7 occur, industry stakeholders told us that they were not aware that their input was being used for a DOE study and would not characterize the actions as DOE did in its study. We also disagree with DOE’s comment letter suggesting that the shortcomings identified in our report regarding the department’s demand estimates for lithium-7 were simply due to differences between our estimates and the DOE internal working group’s estimates as a result of the demand quantities identified being for specific and different applications. To identify the actions needed to mitigate a lithium-7 shortage, all the uses of lithium-7 must be considered. By not accounting for the lithium-7 used in demineralizers, DOE left out an important use of lithium-7 that may represent about one-third of the total demand for pressurized water reactors. As DOE engages collaboratively with industry to ensure a stable supply of lithium-7, accurately accounting for lithium-7 demand will be essential. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, Secretary of Energy, Executive Director for Operations of NRC, and other interested parties. 
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact David C. Trimble at (202) 512-3841 or trimbled@gao.gov or Dr. Timothy M. Persons at (202) 512-6412 or personst@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the individuals named above, Ned H. Woodward, Assistant Director; R. Scott Fletcher; Wyatt R. Hundrup; and Franklyn Yao made key contributions to this report. Kevin Bray, Cindy Gilbert, Karen Howard, Mehrzad Nadji, and Alison O’Neill also made important contributions.
About 13 percent of our nation’s electricity is produced by pressurized water reactors that rely on lithium-7, an isotope of lithium produced and exported solely by China and Russia, for their safe operation. Lithium-7 is added to the water that cools the reactor core to prevent the cooling water from becoming acidic. Without the lithium-7, the cooling water’s acidity would increase the rate of corrosion of pipes and other infrastructure—possibly causing them to fail. Utilities that operate the pressurized water reactors have experienced little difficulty obtaining lithium-7, but they may not be aware of all the risks of relying on two producers. GAO was asked to review the supply and domestic demand for lithium-7 and how risks are being managed. This report examines (1) what is known about the supply and demand of lithium-7, (2) what federal agencies are responsible for managing supply risks, and (3) alternative options to mitigate a potential shortage. GAO reviewed documents and interviewed officials from DOE, NNSA, and NRC, in addition to industry representatives. This report is an unclassified version of a classified report also issued in September 2013. Little is known about lithium-7 production in China and Russia and whether their supplies can meet future domestic demand. According to industry representatives, China and Russia produce enough lithium-7 to meet demand from U.S. pressurized water reactors, a type of commercial nuclear power reactor that requires lithium-7 for safe operation. However, China's continued supply may be reduced by its own growing demand, according to an expert who is familiar with China's plans. Specifically, China is building several pressurized water reactors and developing a new type of reactor that will require thousands of kilograms of lithium-7 to operate, rather than the 300 kilograms needed annually for all 65 U.S. pressurized water reactors. Relying on two producers of lithium-7 leaves U.S. 
pressurized water reactors vulnerable to lithium-7 supply disruptions. No federal entity has taken stewardship responsibility for assessing and managing risks to the lithium-7 supply, but DOE is taking some steps. Risk assessment involves identifying and analyzing relevant risks, communicating risks to stakeholders, and then taking steps to manage the risks, according to federal standards for internal control. Officials at DOE, the National Nuclear Security Administration (NNSA), and the Nuclear Regulatory Commission (NRC) told GAO they view lithium-7 as a commercial commodity for which industry is responsible. Industry representatives told GAO that they had no concerns about the lithium-7 supply, as they have experienced no problems in obtaining it. But GAO learned that industry representatives may not be familiar with all the supply risks. Nevertheless, DOE plans to set aside 200 kilograms of lithium-7 and is funding research on lithium-7 production methods. DOE also studied lithium-7 supply and demand and concluded that no further action is needed. However, GAO found several shortcomings in its study, including that DOE underestimated the amount of lithium-7 used domestically. Industry estimates show that about 300 kilograms of lithium-7 are used annually in the United States, whereas DOE estimated that 200 kilograms are used annually. This and other shortcomings make it unclear if DOE's conclusion is correct that no additional action is needed. 
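The gap between the two demand estimates can be reproduced with simple arithmetic. The 200- and 300-kilogram figures are those cited in this report; attributing the difference to demineralizer use follows the officials' and brokers' explanations of what each estimate covered, so the breakdown below is an inference, not a measured quantity.

```python
# DOE's estimate covered lithium-7 added to cooling water only; the
# brokers' estimate also counted lithium-7 used in demineralizers.

doe_estimate_kg = 200      # kg/year, cooling-water use only
broker_estimate_kg = 300   # kg/year, 2008-2012 average, all uses

implied_demineralizer_kg = broker_estimate_kg - doe_estimate_kg
demineralizer_share = implied_demineralizer_kg / broker_estimate_kg

print(f"Implied demineralizer use: {implied_demineralizer_kg} kg/year, "
      f"about {demineralizer_share:.0%} of total domestic demand")
```

The roughly one-third share is consistent with GAO's observation that demineralizer use may represent about one-third of total demand for pressurized water reactors.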
Based on information from agency officials and industry representatives, GAO identified three options to mitigate a potential lithium-7 shortage: (1) building a domestic reserve is a low-cost option that could help in the short term; (2) building a domestic production capability is a longer-term solution that could eliminate lithium-7 imports but would take about 5 years and cost $10 million to $12 million, according to NNSA; and (3) reducing pressurized water reactors' reliance on lithium-7 is another longer-term solution but may require years of research and changes in how reactors are operated. GAO recommends that the Secretary of Energy ensure a stable future supply of lithium-7 by directing the Isotope Program to take on a stewardship role for lithium-7 by taking steps, including fully assessing risks and accurately determining domestic demand. DOE concurred with the recommendation.
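The relative costs of these options can be compared with back-of-the-envelope arithmetic. The unit prices and the 5-year facility estimate come from the report; the one-year reserve size (one year of U.S. demand) is an assumption chosen here purely for illustration.

```python
# Option 1a: buy a one-year reserve at the market price (upper bound,
# since the current price is below $10,000 per kilogram).
market_price_per_kg = 10_000          # USD per kg, upper bound
annual_us_demand_kg = 300             # approximate annual U.S. demand
reserve_cost = annual_us_demand_kg * market_price_per_kg

# Option 1b: purify the 200 kg DOE plans to set aside at Y-12.
set_aside_kg = 200
purification_cost_per_kg = 3_000      # USD per kg, per DOE's estimate
purification_cost = set_aside_kg * purification_cost_per_kg

# Option 2: build a pilot production facility (about 5 years to complete).
facility_cost_range = (10_000_000, 12_000_000)

print(f"One-year market reserve:   up to ${reserve_cost:,}")
print(f"Purifying Y-12 set-aside:  about ${purification_cost:,}")
print(f"Pilot production facility: ${facility_cost_range[0]:,} to "
      f"${facility_cost_range[1]:,}")
```

Even at the upper-bound market price, a one-year reserve would cost a small fraction of a pilot production facility, which is why the report characterizes the reserve as the relatively low-cost, near-term option.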
DOD manages the acquisition of weapon systems through the Defense Acquisition System, which is an event-based process. Acquisition programs proceed through a series of milestone reviews and other decision points that may authorize entry into the next program phase (see figure 1). Based upon DOD’s acquisition-related guidance, for major defense acquisition programs that begin during the materiel solution analysis phase, intelligence inputs into the acquisition process are expected to be provided prior to the Milestone A review, the point at which approval is sought to proceed to the next phase in the process. Further, most of the intelligence inputs are to be verified or updated at points prior to the Milestone B decision and again prior to the Milestone C decision, the point at which approval is sought in order to progress to the production and deployment phase. We describe the intelligence inputs and key DOD guidance for providing intelligence support to acquisition programs in appendix III. The USD(AT&L) is responsible for acquisition policy and oversight and, as the Defense Acquisition Executive, has responsibility for supervising the Defense Acquisition System. The milestone decision authority is the designated individual with overall responsibility for a program and has the authority to approve its progression to the next phase of the acquisition process. The milestone decision authority is accountable for cost, schedule, and performance reporting. The service acquisition communities are led by Service Acquisition Executives, who are assistant secretaries within their respective military departments. For example, the Assistant Secretary of the Army (Acquisition, Logistics, and Technology) serves as the Army acquisition executive, while the Assistant Secretary of the Navy (Research, Development, & Acquisition) serves as the Navy acquisition executive. 
The program manager is the designated individual responsible for an individual acquisition program, with the authority to accomplish that program’s development, production, and sustainment objectives to meet the user’s operational needs, and is accountable for cost, schedule, and performance reporting to the milestone decision authority. At the Chairman of the Joint Chiefs of Staff level, the J-8 Directorate supports the Joint Staff in evaluating and developing force structure requirements, and its director serves as Secretary of the Joint Requirements Oversight Council and as Chairman of the Joint Capabilities Board. In these capacities, the director orchestrates Joint Staff support of the capabilities development process through the Joint Capabilities Integration and Development System. One of the J-8 Directorate’s objectives is to provide early capability development guidance to the services. The services have different organizational structures that define their respective requirements communities. For example, according to the Air Force, its Major Commands have personnel responsible for 12 capability portfolios, such as Air Superiority and Global Precision Attack, that manage Air Force capability requirements. According to DOD, the speed of technical innovation and the complexity of advanced weapon systems, such as the F-35, are creating an increasing demand for specialized intelligence mission data to inform sensors and automated processes supporting the warfighter. There are several types of intelligence mission data, each used by weapon systems in different ways, including signatures, electronic warfare integrated reprogramming data, characteristics and performance, order of battle, and geospatial intelligence (see figure 2). 
Signatures are distinct, repeating characteristics, such as radio frequencies or acoustic characteristics, that are associated with a particular type of equipment, materiel, activity, individual, or event. For example, a weapon system may associate a specific signature with an enemy system and identify it as such. Electronic warfare integrated reprogramming data also describe radio frequencies and are typically used to attack or control other electronic systems, for example, by jamming enemy radar capabilities. Characteristics and performance data describe the abilities of a particular foreign military system, while order of battle describes the strength and structure of armed forces. These data can assist a weapon system and its operator in prioritizing and determining appropriate actions against the enemy. The title, specific duties, and organizational structure for the personnel providing intelligence support to acquisition programs vary by service (see table 1). Personnel who provide intelligence support to acquisition programs may also coordinate, and in some cases create, key intelligence products that accompany the acquisition process through documented processes such as Threat Steering Groups, which assemble intelligence and acquisition representatives with knowledge of systems specific to the acquisition program. In the Army, during the acquisition lifecycle, the threat integration staff officers assigned to Army Intelligence coordinate intelligence support to acquisition programs through the Threat Steering Group, in which Training and Doctrine Command threat managers and Army Materiel Command foreign intelligence officers participate. Threat assessments before Milestone B are generally managed by Training and Doctrine Command threat managers; threat assessments from Milestone B onward are typically managed by Army Materiel Command foreign intelligence officers. 
In the Navy, intelligence support to acquisition programs is provided by intelligence personnel in the Office of Naval Intelligence, which is responsible for the production and validation of intelligence inputs to Navy acquisition programs. The acquisition programs are supported by scientific and technical intelligence liaison officers who are hired and funded by the Navy entities responsible for management of assigned acquisition programs, called system commands, and are responsible for coordinating between the system command and the intelligence community. For example, the scientific and technical intelligence liaison officer is responsible for requesting the production and validation of intelligence inputs such as threat assessments, which provide the threat intelligence program managers need to inform acquisition cost, schedule, and performance decisions. For programs that are managed by the Marine Corps acquisition agencies (Marine Corps System Command and Program Executive Officer Land Systems), intelligence support is provided by military and civilian intelligence analysts at Marine Corps Intelligence Activity, the service intelligence center. Marine Corps System Command’s intelligence support is coordinated by a scientific and technical intelligence liaison officer. Marine Corps-funded programs at other Navy system commands follow the intelligence support processes for the hosting organization. For example, both the Marine Corps variant of the F-35 and Marine Corps helicopter acquisitions would follow the intelligence support processes used by Naval Air Systems Command. The Air Force materiel commands (Air Force Space Command and Air Force Materiel Command) use acquisition intelligence specialists to support acquisition programs identified as intelligence sensitive. 
These specialists, along with intelligence analysts at the National Air and Space Intelligence Center, provide intelligence products and input based on their individual levels of experience and training. DOD has processes and procedures for the certification of both intelligence and acquisition personnel, but it has not established certifications for personnel providing intelligence support to acquisition programs. Though DOD has not developed certifications specific to personnel who provide intelligence support to acquisition programs, the Air Force and the Army have each developed certifications for these personnel. In the absence of department-wide certifications, the services have developed varying levels of training for personnel providing intelligence support to acquisition programs, and this training may not be specific to providing intelligence support to acquisition programs. Neither USD(I) certifications for the defense intelligence workforce nor USD(AT&L) certifications for the defense acquisition workforce include a certification specific to those personnel providing intelligence support to acquisition programs. The Defense Acquisition Workforce Improvement Act generally requires DOD to establish policies and procedures for the management of DOD’s acquisition workforce, including education, training, and career development. USD(AT&L) subsequently organized certain acquisition-related positions into 14 career fields and established a certification process by which DOD components determine that employees have met standard requirements for education, training, and experience for each field. However, personnel who provide intelligence support to acquisition programs are not included in the 14 career fields with established certifications. 
According to service officials, acquisition certifications for personnel who provide intelligence support to acquisition programs have not been developed because there is no career field for intelligence support to acquisition. As a result, personnel providing intelligence support to acquisition programs are not required to obtain certification for any of the acquisition-related career fields. Officials from the Air Force stated that the lack of certification has resulted in critical skill gaps for personnel providing intelligence support to acquisition programs. Similarly, USD(I) is responsible for establishing a department-level certification program for the defense intelligence workforce. Under this program, certifications have been developed for 15 intelligence disciplines, such as geospatial intelligence and collection management, and several other certifications are being developed for disciplines such as all-source analysis, an intelligence activity that involves the integration, evaluation, and interpretation of information from all available data sources and types. Intelligence officials stated that personnel providing intelligence support to acquisition programs may become eligible for fundamental intelligence certifications, such as all-source analysis certification. However, these officials stated that the certification is designed to certify fundamental competencies for all intelligence analysts and is not specific to providing intelligence support to acquisition programs. Although DOD has not developed such department-wide certifications, the Air Force and the Army have each developed certifications for these personnel. The Air Force has established a certification for personnel who provide intelligence support to acquisition programs via both service-wide guidance and guidance from organizations involved in acquisition, such as Air Force Materiel Command. 
The Air Force requires that, for initial certification, personnel assigned to positions providing intelligence support to acquisition programs must complete certain training, including Air Force and Defense Acquisition University classes, and have 1 year of experience in a designated “acquisition intelligence” position. Additionally, individual acquisition organizations, such as the Air Force Life Cycle Management Center, require additional training and also require that intelligence managers certify that personnel providing intelligence support to acquisition programs have completed the required training, met the experience requirements, and can complete a list of unique tasks specific to the performance of their individual job. The Army developed an optional certification for civilian personnel in intelligence positions, including intelligence support to acquisition programs. A 2001 training plan describes a process for individuals to document competency in different specialty areas based on the job duties and seniority of the position. The plan shows that the intelligence support to acquisition specialty requires competency in areas such as threat intelligence and technical knowledge of acquisition organizations. Personnel may achieve these competencies through any combination of previous experience, classroom, and on-the-job training. Subsequently, they may request an optional certification from their command organization if a supervisor certifies the individual’s qualifications. The services have developed varying levels of training for the personnel who provide intelligence support to acquisition programs in the absence of certifications required by DOD for these personnel. 
Air Force officials stated that training for personnel providing intelligence support to acquisition programs is accomplished through their certification process, which requires these personnel to complete a series of classroom and on-the-job training units, including Defense Acquisition University classes in acquisition management fundamentals, among others, and a 4-day Air Force training course called the Acquisition Intelligence Formal Training Unit. Individual acquisition organizations such as the Air Force Life Cycle Management Center require additional training, including courses on intelligence acquisition life-cycle management and the Joint Capabilities Integration and Development System; a review of key acquisition documents; and on-the-job training based on unit-specific missions. Air Force officials also stated that experience as an active-duty intelligence officer or as an acquisition program manager also helped prepare personnel for the position. Army officials stated that threat managers and foreign intelligence officers, two of the three groups that provide intelligence support to acquisition programs, are required to take several courses through the Defense Acquisition University and Defense Security Service. In addition, the threat intelligence branch of Army Intelligence has an annual training course that includes training in subjects specific to providing intelligence support to acquisition programs, such as critical threats and technology protection. However, according to Army officials, Army personnel providing intelligence support to acquisitions are not required to take this course. An Army intelligence official stated that the course is optional because of a lack of travel and training funds. The Navy and Marine Corps have identified and required different levels of training relevant for their personnel. 
Navy officials stated that, as of June 2016, there was no formal training across the department for personnel providing intelligence support to acquisition programs, although some Navy organizations have developed training policies specific to their organizations. For example, according to Navy officials, Naval Air Systems Command, the Navy acquisition organization generally responsible for naval aircraft, weapons, and systems, has a training program for its scientific and technical intelligence liaison officers. This training includes intelligence community and Defense Acquisition University courses, computer-based training, and a certification exam, and requires liaison officers to attend the Air Force’s Acquisition Intelligence Formal Training Unit. According to Navy officials, personnel providing intelligence support to acquisition programs at other naval system commands, such as those responsible for sea and space systems, receive primarily ad hoc and on-the-job training. Marine Corps officials stated that, as of June 2016, personnel who provide intelligence support to acquisition programs were required to take an online Defense Acquisition University course on acquisition management fundamentals that is not specific to providing intelligence support to acquisition programs, in addition to on-the-job training in order to perform their job duties. These officials also stated that in 2016 personnel were required to attend a version of the Air Force Acquisition Intelligence Formal Training Unit, and that although the Marine Corps is exploring the use of the Army Intelligence training course, attendance is not required. USD(AT&L), working with the Defense Acquisition University, established training related to the integration of intelligence and acquisition. In May and June 2015, Defense Acquisition University increased the integration of intelligence and acquisition in its curriculum. 
For example, the university added a case study regarding critical intelligence parameters to the program manager’s course, and it also added discussion topics about the need for intelligence in acquisition programs to several courses intended for acquisition executives and senior officials. Both entry-level and advanced courses were modified to include content on the relationship between intelligence and acquisition organizations. For example, officials stated that they invited a speaker from the Joint Staff to an advanced course to speak about the relationship between intelligence and acquisition. While this training is intended to address the identified need for greater intelligence training for acquisition personnel, the training may not be accessible to personnel providing intelligence support to acquisition programs. Service intelligence officials stated that because positions for providing intelligence support to acquisition programs are not designated as acquisition-related for the purposes of Defense Acquisition Workforce Improvement Act certification, some courses are available to these personnel only on a space-available basis. Other courses, such as those for program managers, require Defense Acquisition Workforce Improvement Act certification in designated career fields as a prerequisite. As described above, DOD has not required certification for personnel providing intelligence support to acquisition programs. As a result, these personnel may be unable to access these courses. Without requiring certifications for personnel who provide intelligence support to acquisition programs, DOD has no assurance that these personnel are qualified and prepared to carry out their duties. The department has established certifications for both acquisition and intelligence positions in order to ensure that those respective workforces are qualified to carry out their duties. 
Key principles for the management of federal employees state that agencies should develop training strategies and tools that, among other things, can be aligned to improve the critical skills needed in their workforce. We previously found that when intelligence training is not fully implemented or required, programs and organizations may be unable to fully succeed in their goals. The DOD Inspector General has also found that the lack of common training standards has resulted in difficulties for personnel in performing common tasks and in a critical skills gap across military intelligence services and agencies. While all four services have established or identified training for personnel providing intelligence support to acquisition programs, without department-wide required certifications that include training standards, there may be inconsistent levels of expertise and skill among these personnel. Without a certification process that includes required training for personnel who provide intelligence support to acquisition programs, DOD may not be able to ensure that all personnel who provide intelligence support to acquisition programs are familiar with and able to provide intelligence inputs to their assigned acquisition programs. As of July 2016, DOD had multiple efforts underway to improve processes and procedures for integrating intelligence into major defense acquisition programs. For example, USD(AT&L) had identified several intelligence-related tasks in its Better Buying Power 3.0 initiative. Further, USD(AT&L), USD(I), and the Joint Staff had created an executive steering group and a task force—the Acquisition Intelligence Requirements Task Force—to improve the integration of intelligence into major defense acquisition programs. This task force has identified the need for intelligence mission data to be prioritized, but DOD has not required such prioritization. 
In order to increase the productivity, efficiency, and effectiveness of DOD’s acquisition, technology, and logistics efforts, USD(AT&L) issued the Better Buying Power 3.0 initiative in January 2015. This initiative contains nine tasks related to integrating intelligence into acquisitions, which are described in table 2. Among the nine tasks, USD(AT&L) describes the use of critical intelligence parameters as a key aspect of the linkage among the acquisition, intelligence, and requirements communities. Critical intelligence parameter thresholds, if breached, indicate an adversary’s potential ability to substantially reduce the performance or even defeat the capability of the weapon system undergoing acquisition. The intelligence community monitors foreign threat capabilities and informs the acquisition community of a breach, which triggers a review process to resolve or mitigate the breach. Other intelligence-related tasks under Better Buying Power 3.0 include direction to the Assistant Secretary of Defense for Acquisitions to work with the Office of the USD(I) to review DOD Directive 5250.01, Management of Intelligence Mission Data (IMD) in DOD Acquisition. As of July 2016, this review was being facilitated by the Acquisition Intelligence Requirements Task Force, described below, which was coordinating a revised draft among stakeholder entities. DOD created an executive steering group and task force to better integrate intelligence into acquisition programs. On December 4, 2015, the offices of USD(AT&L) and USD(I), along with the Joint Staff, created the Acquisition Intelligence Requirements Executive Steering Group and the Acquisition Intelligence Requirements Task Force through a joint memorandum to better integrate, coordinate, and prioritize intelligence processes and procedures for providing intelligence support to acquisition programs. 
The steering group is co-chaired by senior level members of the offices of USD(AT&L) and USD(I), and the Joint Staff, and it is composed of representatives from the Office of the Director of National Intelligence, Director of Operational Test and Evaluation, Office of Cost Assessment and Program Evaluation, service acquisition executives, military service intelligence staffs, and DIA, among other stakeholders. The memorandum states that the steering group replaces the Intelligence Mission Data Oversight Board and the Intelligence Mission Data Senior Steering Group that were previously established in DOD Directive 5250.01, issued in January 2013. A DIA official explained that the Intelligence Mission Data Senior Steering Group never met and that the Acquisition Intelligence Requirements Task Force was created to address intelligence support to acquisition, including intelligence mission data issues. Both the executive steering group and the task force began to meet prior to their formal creation in December 2015. The task force initially met in October 2015 and has generally held weekly meetings since January 2016, while the executive steering group initially met in August 2015 and has met quarterly since December 2015. Since February 2016, a senior executive service-level director has led the task force, which is composed of O-6 level representatives from the organizations that form the steering group. According to task force officials, early efforts of the task force included engaging major defense acquisition programs that the task force identified as intelligence mission data-dependent in order to identify and summarize intelligence supportability issues and raise them to decision makers prior to milestone reviews for the acquisition programs. 
Officials from USD(AT&L), USD(I), and Joint Staff stated that while there have previously been other weapon systems with intelligence mission data shortfalls, such as the F-22 and EA-18G, the F-35’s greater reliance on intelligence mission data and concerns regarding the service intelligence centers’ ability to produce the needed data brought the problem to the forefront. For example, DOD reported in 2013 that the initial release of intelligence mission data requirements for the F-35 in 2008 presented a unique challenge with regard to the amount and breadth of intelligence requirements for the intelligence community, and for the service intelligence centers specifically. These officials stated that the main impetus for creating the Acquisition Intelligence Requirements Executive Steering Group and the Acquisition Intelligence Requirements Task Force was the shortfall in providing the intelligence mission data needed for the F-35 to perform its mission once it became an operational weapon system. Task force officials stated that prioritizing intelligence mission data will ensure that the data provided are sufficient to meet the requirements of advanced weapon systems, such as the F-35. DOD has processes and procedures related to intelligence mission data, such as those in DOD Instruction 5000.02 and DOD Directive 5250.01, but they do not require prioritization of the data. For example, DOD guidance requires DOD’s intelligence mission data-dependent acquisition programs to develop a Lifecycle Mission Data Plan to identify anticipated intelligence mission data needs over the life of a weapon system, from program start through disposal. However, the plan categorizes the intelligence mission data needs by means of a spectrum arranged by data availability, showing data ranging from most available to least available. 
That presentation of information does not convey a prioritization of what the weapon system most needs to perform its mission. DOD Directive 5250.01 directs DIA to establish the Intelligence Mission Data Center, which is to serve as the focal point for intelligence mission data development, production, and sharing, but it does not assign the agency the role of prioritizing intelligence mission data needs. Task force officials stated that a DIA working group for intelligence mission data was created to oversee and coordinate intelligence mission data production across the defense intelligence enterprise, and that the Intelligence Mission Data Center will support the working group by facilitating the discovery and sharing of existing intelligence mission data. However, although it may be helpful in preventing the duplication of efforts in the collection of intelligence mission data, the Intelligence Mission Data Center’s work does not constitute a means for prioritizing mission data by need for individual acquisition programs. Officials from USD(AT&L), USD(I), Joint Staff, and the task force described a lack of prioritization at multiple levels, including within individual acquisition programs as well as at the service and department levels. Officials from USD(AT&L), USD(I), and Joint Staff, as well as service-level officials on the task force, stated that there were, at the time, no required processes or procedures for prioritizing intelligence mission data needs at any of these levels. For example, at the acquisition program level, an F-22 will have different intelligence mission data needs and priorities from those of a Navy submarine. At the service level, each service will have intelligence mission data needs based on the types of weapon systems it is developing and has already deployed. 
At the department level, the Air Force and the Navy may have similar intelligence mission data needs for their respective fighter aircraft, but the Army’s intelligence mission data needs will likely differ greatly from those of its sister services based on the respective threats each faces. As of July 2016, no requirements existed within DOD guidance to prioritize intelligence mission data, though there were efforts underway in 2016 by the task force and within the Air Force to develop processes and procedures for intelligence mission data prioritization. According to task force and Air Force officials, the Acquisition Intelligence Requirements Task Force worked in parallel with an Air Force effort to identify potential processes to prioritize intelligence mission data at the acquisition program and service levels, respectively. These officials presented proposals in June and July 2016 for potential processes and procedures to prioritize intelligence mission data at the acquisition program level. They also described how prioritization at the service and department levels may be accomplished, as follows:

Acquisition program prioritization: The task force proposal would assign each intelligence mission data requirement one of four levels, based on the impact to the acquisition program’s capabilities if the data were not acquired or were unavailable:
1. Level I, unacceptable degradation: intelligence mission data requirements that, if not satisfied, would result in unacceptable mission task degradation with no work-around possible;
2. Level II, significant degradation: intelligence mission data requirements that, if not satisfied, would result in significant mission task degradation that is unacceptable to the operator but for which a work-around is available, is acceptable to the operator, and must be applied;
3. Level III, partial degradation: intelligence mission data requirements that, if not satisfied, would result in partial or minimal degradation that is acceptable to the operator and for which a work-around is optional; and
4. Level IV, little to no impact: intelligence mission data requirements that, if not satisfied, would result in little to no degradation to the mission.

Service prioritization: Air Force officials described an effort undertaken in May 2016 to apply the four-level approach described above to categorize 150 intelligence mission data needs at the service level. Task force officials stated that the Air Force effort undertaken in fiscal year 2016 would use a cost-capability approach to better inform the fiscal year 2017 service-wide prioritization effort.

Department prioritization: Task force officials indicated that implementing prioritization at the individual program and service levels would be required before developing an enterprise-wide capability to prioritize intelligence mission data. Officials from USD(AT&L) stated that the enterprise-wide prioritization of intelligence mission data could also be informed by efforts related to developing Integrated DOD Intelligence Priorities. Furthermore, officials from DIA stated that there is a lack of coordination regarding how the service intelligence centers conduct their business, and that the centers were not prioritizing, verifying, or balancing the work related to producing intelligence mission data. Task force officials stated that DIA was developing an intelligence mission data production prioritization process that would respond to enterprise-wide intelligence mission data priorities. Though DOD has made efforts to identify and develop processes and procedures to prioritize intelligence mission data, previous efforts have not succeeded in implementing intelligence mission data prioritization. 
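The task force's four-level proposal amounts to a simple triage rule. The sketch below is purely illustrative; the field values, function name, and sample requirements are invented for this example and do not represent any DOD system:

```python
from enum import IntEnum

class ImdPriority(IntEnum):
    """Illustrative encoding of the task force's proposed four levels."""
    LEVEL_I = 1    # unacceptable degradation, no work-around possible
    LEVEL_II = 2   # significant degradation, work-around must be applied
    LEVEL_III = 3  # partial or minimal degradation, work-around optional
    LEVEL_IV = 4   # little to no degradation

def triage(degradation, workaround_available):
    """Map a requirement's shortfall impact to a priority level.

    `degradation` is one of "unacceptable", "significant", "partial",
    "none"; both parameter names are hypothetical, for illustration only.
    """
    if degradation == "unacceptable" and not workaround_available:
        return ImdPriority.LEVEL_I
    if degradation in ("unacceptable", "significant"):
        return ImdPriority.LEVEL_II   # operator must apply the work-around
    if degradation == "partial":
        return ImdPriority.LEVEL_III  # work-around optional
    return ImdPriority.LEVEL_IV

# A program office could then order its intelligence mission data requests
# so the highest-impact gaps are worked first (invented sample data).
requests = [
    ("order of battle update", "partial", True),
    ("adversary radar signature", "unacceptable", False),
    ("emitter frequency set", "significant", True),
]
queue = sorted(requests, key=lambda r: triage(r[1], r[2]))
```

Sorting by the enum value puts Level I requirements at the head of the production queue, which is the behavior the proposal implies but which, as the report notes, no DOD guidance yet requires.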
Federal internal control standards state that management should establish an organizational structure, assign responsibility, and delegate authority to achieve objectives. Per the joint memorandum that established the Acquisition Intelligence Requirements Executive Steering Group, the steering group and associated task force were created to integrate, coordinate, and prioritize intelligence support functions and processes. However, previous efforts have not succeeded in implementing a system to prioritize these data at any level. For example, the Intelligence Mission Data Senior Steering Group never met, and DIA’s standing working group may help prevent duplication of intelligence mission data production but would not prioritize the data. The Acquisition Intelligence Requirements Task Force was created to address intelligence support to acquisition, including intelligence mission data issues, and it has developed some proposed processes and procedures for prioritization of intelligence mission data. Without specific DOD guidance requiring intelligence mission data prioritization, new processes and procedures such as those developed by the task force and the services may not be fully implemented. With no required process to prioritize intelligence mission data, the intelligence community may continue to process requests for intelligence mission data as they are received, and thus weapon systems may not have the intelligence mission data they need to successfully perform their missions once operational. As of July 2016, DOD was developing new tools to better integrate intelligence into acquisition programs. DIA was developing the Validated Online Lifecycle Threat, an online tool to provide threat information to acquisition programs in a more timely and effective manner than the manually generated process currently in use. 
However, DIA had not effectively communicated with stakeholders about the tool or sought feedback from its intended users. Separately, officials from Performance Assessments and Root Cause Analyses, an office within USD(AT&L), were developing a tool for the acquisition community to communicate intelligence needs from individual acquisition programs to the intelligence community. However, intelligence community users had not expressed a need or defined requirements for this tool. If the tool does not meet the user’s needs, or will not be used, moving forward with its development could waste funds. DOD began developing a new tool in fiscal year 2015 to better report threat information from the intelligence community to acquisition programs, but it has not effectively communicated with stakeholders about the tool or sought feedback from its intended users. As reported by DIA and described in USD(AT&L)’s Better Buying Power 3.0 initiative, the Validated Online Lifecycle Threat is a new tool that DIA began developing in fiscal year 2015 to develop and report threat information to acquisition programs. DOD guidance describes the System Threat Assessment Report as the primary threat document for supporting the Defense Acquisition Board’s milestone reviews, and DOD officials described it as a primary intelligence input into major DOD acquisitions. These officials also described challenges regarding the timeliness and usefulness of the System Threat Assessment Report. Specifically, officials from USD(AT&L), Joint Staff, and DIA, as well as service intelligence officials, stated that System Threat Assessment Reports historically arrived at acquisition program offices late, after the requirements for a new weapon system had been identified and approved through the Joint Capabilities Integration and Development System process and well after design of the weapon system had begun. 
Furthermore, officials from DIA, USD(AT&L), and the service intelligence and acquisition communities stated that these reports are often several hundred pages long, take as long as 9 months to produce, and are not substantively used by acquisition program managers. According to these officials, as well as officials from the Acquisition Intelligence Requirements Task Force, program managers simply used System Threat Assessment Reports to check off a box on the list of documents required for the next acquisition milestone decision meeting. Specifically, Air Force officials from the Joint Surveillance Target Attack Radar System Recapitalization acquisition program stated that the System Threat Assessment Reports were not usable because they lacked the relevance and specificity the acquisition program needed and because they were too long, fragmented, and difficult to navigate. Lastly, DIA officials stated they had determined that as much as 80 percent of the content of System Threat Assessment Reports is repeated across reports and is not program specific. According to DIA, the Validated Online Lifecycle Threat is a planned system-specific threat tool created by selecting relevant modules from a library of threat information. DIA and USD(AT&L) officials described the planned threat library as consisting of dynamic modules organized by threat category, such as fighter aircraft, that analysts at the services' intelligence centers would update with new threat information as it is produced within the intelligence community. DIA officials reported that as part of a broader piloting effort the agency completed in 2015, they were able to develop a Validated Online Lifecycle Threat in 3 months for the Joint Surveillance Target Attack Radar System Recapitalization program.
According to DIA officials, the agency will have spent nearly $2.5 million from fiscal year 2015 through the end of fiscal year 2016 to begin developing the Validated Online Lifecycle Threat and the associated threat library, and it plans to complete the tool by the end of fiscal year 2017. DIA officials reported that they had not effectively "marketed" the Validated Online Lifecycle Threat tool to its intended stakeholders and users. While Marine Corps officials stated that the threat modules can be updated in a shorter time frame than a System Threat Assessment Report, officials from the Navy and the Army did not know that DIA intended the Validated Online Lifecycle Threat to be a dynamic system; they believed the planned tool to be a static, online version of the System Threat Assessment Report. For example, Army officials stated that the tool may be less useful than its predecessor because it is composed of static modules that may not provide the same level of individualized detail. Though some Navy officials who provide intelligence support to acquisition programs indicated that they had received briefs and other information about the new tool, other Navy officials expressed concerns that the Validated Online Lifecycle Threat would not be customizable to programs and, like the System Threat Assessment Report, could include extensive information that was not program specific and could therefore be just as inefficient. Air Force officials from the Joint Surveillance Target Attack Radar System Recapitalization acquisition program stated that the Validated Online Lifecycle Threat tool was intended to alleviate resource constraints in the intelligence community, but that it was being implemented without input from the acquisition community and individual program management offices.
The Project Management Institute's A Guide to the Project Management Body of Knowledge states that managing stakeholders' engagement helps increase the probability of project success by ensuring that stakeholders clearly understand the project goals, objectives, benefits, and risks. This enables stakeholders to be active supporters of the project and to help guide activities and project decisions. Federal internal control standards state that management should communicate quality information externally so that external parties can help the entity achieve its objectives and address related risks. We found that some potential users of the Validated Online Lifecycle Threat report have not received information regarding the intended capabilities of the new system because DIA has not effectively communicated information about the tool to stakeholders and intended users. A communication plan would include processes for communicating the intended capabilities of the Validated Online Lifecycle Threat tool to stakeholders such as USD(AT&L) and USD(I) and to users such as personnel who provide intelligence support to acquisition programs. Without effectively communicating such information to potential users, DIA may not receive useful feedback as it develops the tool, and concerns regarding timeliness, usability, and redundancy may not be effectively addressed.

Officials from Performance Assessments and Root Cause Analyses, an office within USD(AT&L), are developing a tool for communicating intelligence needs from acquisition programs to the intelligence community, but intended users have not expressed a need or defined requirements for the tool. The office is responsible for conducting root-cause analyses of acquisition programs that encounter Nunn-McCurdy breaches, among other things.
According to these officials, the Assistant Secretary of Defense (Acquisition) requested that the office analyze the root causes of challenges in integrating intelligence into acquisitions. This analysis identified issues with threat intelligence and intelligence mission data and resulted in pilot projects for new tools such as the Validated Online Lifecycle Threat. According to Performance Assessments and Root Cause Analyses officials, acquisition programs request intelligence both through informal means, such as conversations and emails, and through formal means via information systems, such as the Community On-Line Intelligence System for End Users and Managers. These officials reported that the formal requests for intelligence often contain vague or inaccurate information and do not allow the intelligence community to prioritize or fulfill requests efficiently. Officials told us that to resolve this issue they are developing an online tool, the Acquisition Intelligence Support Assessment, that would allow acquisition personnel to communicate intelligence needs to the intelligence community through an online system. According to Performance Assessments and Root Cause Analyses officials, personnel providing intelligence support to acquisition programs will be able to access the online tool and determine whether particular threat intelligence is currently available. If it is not, requests can be made via the tool to the intelligence community, which would then use the tracking capabilities planned for the tool to monitor requests from multiple programs and assign staff and resources as necessary. Performance Assessments and Root Cause Analyses officials stated that this tool could also be useful for personnel who, because of time and resource constraints, have limited knowledge of or familiarity with acquisition programs.
They also stated that the tool would help the intelligence community manage and prioritize intelligence requests from the acquisition community. Officials from the office of Performance Assessments and Root Cause Analyses stated that they were developing the tool independently for the acquisition and intelligence communities before integrating it into existing processes. These officials stated that they chose this approach after conducting their root-cause analysis and identifying challenges related to integrating intelligence into acquisition programs. DOD awarded contracts in August and December 2015 for the initial analysis and for commencing development of the communication tool at a cost of approximately $1.1 million. Officials stated that they expected to spend about $1.2 million in total, sourced from available operational funding within the Office of Performance Assessments and Root Cause Analyses. While Performance Assessments and Root Cause Analyses has funded the development of the Acquisition Intelligence Support Assessment tool, these officials stated that there is currently no mechanism to fund future implementation and operation of the tool once it is fully developed, and they estimated that the system will cost $3 million to $5 million per year to operate. These officials reported that another office must be tasked to oversee the implementation of the tool, and suggested that the Acquisition Intelligence Requirements Task Force or a Joint Staff office might assume responsibility for the system. Officials from the task force have recommended that the task force evaluate whether the Acquisition Intelligence Support Assessment tool should be used or merged with existing tools, but no decision regarding the planned implementation or operation of the tool had been made as of July 2016.
Performance Assessments and Root Cause Analyses officials described several steps they had taken to introduce the tool to potential stakeholders, including holding demonstration events, working groups, and briefings. Specifically, these officials stated that they had introduced the Acquisition Intelligence Support Assessment tool to acquisition and intelligence stakeholders in October 2015 and June 2016, but service officials who are potential users of the tool told us that they had not identified a need for it. Air Force officials stated that they already track intelligence requests through existing information systems and that the new tool would likely duplicate existing processes. Officials from the Navy stated that the developmental nature of the tool prevents a full assessment of its strengths and weaknesses. Officials from the Army stated that they would wait until the tool is fully developed before deciding whether to use it, and officials from the Marine Corps stated that their input had not been solicited. We have previously identified leading practices for increased collaboration among agencies, including defining and articulating a common outcome; agreeing on roles and responsibilities; and establishing compatible policies, procedures, and other means to operate across agency boundaries. Given that acquisition and intelligence personnel have not identified requirements for the Acquisition Intelligence Support Assessment tool, it may not fulfill the needs of acquisition programs and the intelligence community or work as intended, and the services may prefer to use existing systems. As a result, DOD may use funds unnecessarily to develop a tool that is not needed. Further, without plans or funding for implementation and operation, the Acquisition Intelligence Support Assessment tool may not be fully implemented or sustained once operational.
DOD has long recognized the need to improve its process for the acquisition of major weapon systems, and it has recently undertaken efforts to improve intelligence input both during the acquisition process and, subsequently, to help enable weapon systems to more effectively perform their missions once deployed. For example, the department has worked to integrate intelligence into its acquisition program manager courses and has developed potential processes for prioritizing intelligence mission data needs. Addressing the gaps we identified in several key areas will enable DOD to better leverage these efforts. First, without a department-wide certification process that includes training standards, DOD may not be able to ensure that all personnel who provide intelligence support to acquisition programs are familiar with and able to provide intelligence inputs to their portfolios of acquisition programs. Second, without specific requirements for intelligence mission data prioritization in DOD guidance, DOD may not be able to ensure that weapon systems have the data they need to successfully perform their missions once operational. Third, potential users of DOD's planned Validated Online Lifecycle Threat report have not received information or provided feedback regarding the intended capabilities of the new tool because DIA has not effectively communicated those capabilities to stakeholders and potential users. Without a communication plan, DIA may not receive useful feedback as it develops the system, and ongoing concerns regarding the timeliness, usability, and redundancy of threat information may not be effectively addressed. Fourth, without assessing the need for its proposed Acquisition Intelligence Support Assessment tool and defining requirements for its development, DOD may use funds unnecessarily to develop a tool that is not needed or, if needed, the tool may not be fully implemented or sustained once operational.
To enhance DOD's efforts to better integrate and improve intelligence support to major defense acquisition programs, we recommend that the Secretary of Defense direct, as appropriate, the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of Defense for Intelligence; and/or the Secretaries of the military departments to take the following four actions in coordination with one another:

To better enable personnel to provide intelligence inputs to their portfolios of acquisition programs, establish certifications that include having these personnel complete required training.

To facilitate implementation of improved processes and procedures developed by the Acquisition Intelligence Requirements Task Force and by the Air Force for the integration of intelligence into major defense acquisition programs, revise relevant guidance and procedures, including DOD Instruction 5000.02 and DOD Directive 5250.01, to require that intelligence mission data be prioritized at the acquisition program, service, and department levels.

To better ensure that DOD obtains useful feedback from stakeholders and the intended users of the Validated Online Lifecycle Threat tool, instruct the Director of the Defense Intelligence Agency to develop a communication plan for the tool that includes plans for communicating with and obtaining feedback from stakeholders and intended users such as acquisition program offices and personnel providing intelligence support to acquisition programs.

To ensure that the Acquisition Intelligence Support Assessment tool fulfills the needs of acquisition programs and the intelligence community and works as intended, assess the need for the tool and, if the assessment validates that need, define the tool's requirements for development and identify the entity responsible for providing oversight and funding for its continued development, implementation, and operation.

We provided a draft of this report to DOD for review and comment.
DOD provided technical comments, which we incorporated as appropriate. DOD concurred with all four of our recommendations; its responses are reprinted in their entirety in appendix IV. Based on discussions with the department, we also revised our recommendations to more accurately characterize the relevant DOD organizations and offices. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of Defense for Intelligence; the Secretaries of the Air Force, Army, and Navy; and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9971 or kirschbaumj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

We reviewed relevant acquisition-related processes and procedures and found that the Department of Defense (DOD) has no requirement to assess a weapon system's ability to gather intelligence beyond or outside the scope of its mission. DOD officials we spoke with from Joint Staff, the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)), and the services indicated that there are no requirements for them to consider the ability of a system to gather intelligence beyond or outside the scope of its mission during the acquisition process, or otherwise, and that there were currently no plans to perform such assessments. Officials from the Army and Marine Corps indicated that this had not been considered because their services were generally focused on non-advanced weapons such as tanks that were ill-suited for this purpose.
Air Force officials told us that current advanced weapon systems such as the B-2 and F-22, and future systems such as the F-35, have or will have the capability to gather intelligence outside and beyond the scope needed to perform their missions, but that there were not currently any plans to assess this capability during the acquisition process. These officials told us that these assessments may occur in the post-deployment period as opportunities and capabilities arise, such as for the non-traditional use of intelligence, surveillance, and reconnaissance systems. Officials from some of the service intelligence communities as well as from USD(AT&L) indicated that one of the challenges faced by current and future advanced weapon systems is the ability to store and then offload intelligence data in such a way as to be immediately useful to analysts in the intelligence community. Air Force officials we spoke with indicated that there have been efforts to make use of non-traditional intelligence, surveillance, and reconnaissance systems in the field as a way of gathering signals intelligence information, but that assessing this capability was not considered during the acquisition process.

The National Defense Authorization Act for Fiscal Year 2016 includes a provision that we review the processes and procedures for the integration of intelligence into the defense acquisition process. This report evaluates, for major defense acquisition programs, the extent to which DOD has (1) processes and procedures for certifying and training personnel assigned to provide intelligence support to acquisition programs; (2) efforts to improve processes and procedures for integrating intelligence into its acquisition programs; and (3) efforts to develop new tools for integrating intelligence into its acquisition programs.
We also collected information related to DOD's efforts to identify opportunities for weapon systems to collect intelligence even when unrelated to their primary mission, which is presented in appendix I. To determine the extent to which DOD has processes and procedures for certifying and training personnel assigned to provide intelligence support to acquisition programs, we reviewed DOD guidance governing the management of intelligence and acquisition personnel. We interviewed officials from the offices identified in this appendix who participate in the development of guidance and management of personnel providing intelligence support to acquisition. We submitted written requests for guidance regarding staffing, qualifications, certification, and training of personnel to these officials, and we reviewed their responses. We also interviewed and received written responses from officials from the Defense Acquisition University regarding changes to the acquisition curriculum that included additional intelligence material. We reviewed the certifications and qualifications DOD has established in implementing the Defense Acquisition Workforce Improvement Act, other DOD guidance for the training and management of acquisition personnel, and Under Secretary of Defense for Intelligence (USD(I)) guidance related to certifications and qualifications for intelligence personnel. We reviewed training and certification guidance for both acquisition personnel, as administered by USD(AT&L), and intelligence personnel, as administered by USD(I), because interviews with DOD officials indicated that personnel who provide intelligence support to acquisition programs are managed by acquisition and intelligence components, depending on the service.
To determine the extent to which DOD has processes and procedures for the integration of intelligence into the acquisition of weapon systems, we reviewed department-level directives, instructions, and other guidance that governs intelligence input into the acquisition process. To identify additional processes specific to the military services, we interviewed officials from the offices identified in this appendix, and we submitted written requests for information in order to obtain the documents identified by these officials. To determine the validity of the document sources used to identify the intelligence inputs, we reviewed written responses from service acquisition and intelligence officials at the Army, Navy, Air Force, and Marine Corps to verify that the documents were current and in use by the respective services. To determine the extent to which ongoing DOD initiatives will address identified issues with acquisition and intelligence integration, we interviewed officials from offices identified in this appendix and requested documentation on the progress of intelligence-related tasks identified in USD(AT&L)’s Better Buying Power 3.0 initiatives. We also observed meetings of the Acquisition Intelligence Requirements Task Force, and we observed briefings from the task force to the Acquisition Intelligence Requirements Executive Steering Group. We also compared the proposed intelligence mission data prioritization processes with Standards for Internal Control in the Federal Government, which states that management should establish an organizational structure, assign responsibility, and delegate authority to achieve objectives. 
After receiving the documents, an analyst reviewed each document and identified actions (such as formation of a working group or certification), products (such as reports or data), or processes (such as an intelligence parameter breach or formal review) that could be considered intelligence inputs required for an acquisition program classified by DOD as Acquisition Category I. We defined an intelligence input as any action, process, or product that involved or included the participation of an intelligence professional and was provided for a specific acquisition program. The analyst categorized inputs with similar names or document sources and then entered each category and input onto a spreadsheet. The intelligence inputs are provided in appendix III. To verify our identification of intelligence inputs, we created a standard data collection instrument based on the spreadsheet of identified intelligence inputs. The data collection instrument asked respondents to identify, for Acquisition Category I programs initiated as of June 1, 2016, (1) when, if at all, each of the identified intelligence inputs would be used throughout an acquisition lifecycle; (2) whether their office would provide input into the item; and (3) whether these inputs were required to be provided; it also asked them (4) to provide any comments or additional inputs, if necessary. We sent this data collection instrument to officials of the intelligence components of the Army, Navy, Air Force, and Marine Corps; DIA; the Office of the Under Secretary of Defense for Intelligence; and the Joint Staff Directorate for Intelligence, J-2, for a total of seven responses. We selected these officials and organizations because document sources identified them as providers of intelligence inputs, and thus the most likely to identify intelligence inputs. Two analysts reviewed the data to ensure that all data were fully extracted and correctly tabulated.
To provide illustrative examples and determine how processes and procedures are implemented for individual acquisition programs, we selected a nongeneralizable sample of six major defense acquisition programs. We used a stratified purposeful sampling procedure in which we intentionally chose acquisition programs with particular characteristics to capture both important similarities and variations. We selected from a population of acquisition programs identified by the Acquisition Intelligence Requirements Task Force as having significant intelligence needs. Two analysts then classified each program using information from a GAO assessment of major defense acquisition programs, including whether the program was focused on the warfare domains of land, air, or sea, and which service was primarily responsible for the program. We excluded space and satellite programs from selection due to the unique differences and higher security classifications of these programs, as compared with other major defense acquisition programs. Based on these characteristics, we then selected two Air Force programs, two Navy programs, one Army program, and one Marine Corps program. The programs included the following:

Army: Armored Multi-Purpose Vehicle
Navy: Ohio-Class Replacement
Navy: Air and Missile Defense Radar
Air Force: F-22 Increment 3.2B Modernization
Air Force: Joint Surveillance Target Attack Radar System
Marine Corps: CH-53K Heavy Lift Replacement Helicopter

We selected this number and distribution of acquisition programs because officials from the Air Force and Navy stated that they had many programs that used intelligence mission data, and officials from the Army and Marine Corps stated that they did not have many intelligence mission data-dependent programs.
We submitted identical questions and requests for information to officials from each program management office, as well as to individuals identified by the program as personnel who provide intelligence input into the acquisition program. We discussed the questions orally with, or received written responses from, officials and intelligence personnel from each program. While the responses we obtained are not generalizable to all major defense acquisition programs, the information obtained from program officials provided context and important insights for our understanding of the interaction of acquisition and intelligence personnel.

To examine the extent to which DOD has efforts to develop new tools for integrating intelligence into acquisitions, we identified two tools currently in development through discussions with Acquisition Intelligence Requirements Task Force officials. We verified that these tools were in development through interviews with officials involved in oversight of acquisitions and intelligence, including officials at USD(AT&L), USD(I), and DIA. We conducted a site visit to DIA's Technology and Long-Range Assessment offices in Charlottesville, Virginia, where we interviewed officials and observed a demonstration of a developmental version of the Validated Online Lifecycle Threat tool. We also interviewed officials from Performance Assessments and Root Cause Analyses, and we viewed a presentation and demonstration of a developmental version of the Acquisition Intelligence Support Assessment tool. We collected developmental plans and briefings for both of these tools, and we compared our observations and statements made by DIA and Performance Assessments and Root Cause Analyses officials against the documents, and against statements from acquisition, intelligence, and program management office officials.
We compared the developmental plans and information provided to us by DIA and Performance Assessments and Root Cause Analyses officials against standards in the Project Management Institute's A Guide to the Project Management Body of Knowledge, federal standards for internal controls, and key practices for collaboration among federal agencies. To examine DOD's processes and procedures for assessing, during the acquisition process, a weapon system's ability to gather intelligence when unrelated to its primary mission, we reviewed the DOD reports and guidance for acquisition management identified for the previous objectives. We systematically reviewed the content of these documents for any information relevant to assessing, during the acquisition process, a system's ability to gather intelligence. We were unable to identify any process or procedure relevant to this objective. To confirm this finding, we interviewed acquisition, intelligence, and requirements professionals from offices identified in this appendix. Further details are provided in appendix I.

We obtained relevant documentation and interviewed officials from the following organizations:

Office of the Under Secretary of Defense for Intelligence
Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics
Office of Performance Assessments and Root Cause Analyses
Director of Operational Test and Evaluation
Cost Assessment and Program Evaluation
Acquisition Intelligence Requirements Task Force and Executive Steering Group
Defense Intelligence Agency
Defense Acquisition University
U.S. Marine Corps
U.S. Air Force
Office of the Director of National Intelligence

We conducted this performance audit from December 2015 to November 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

We defined an intelligence input in this report as any action, process, or product that involved or included the participation of an intelligence professional and was provided for a specific acquisition program, and we then grouped the intelligence inputs we identified by similar names, document sources, or categories. To confirm our identification of intelligence inputs, we created a data collection instrument based on the spreadsheet of identified intelligence inputs. The instrument asked respondents to identify, for an Acquisition Category I program initiated as of June 1, 2016, (1) when, if at all, each of the identified intelligence inputs would be used throughout its acquisition lifecycle; (2) whether their office would provide input for each entry we identified; and (3) whether these inputs are required to be provided; it also asked them (4) to provide any comments or additional inputs, if necessary. For programs begun prior to Milestone A, the intelligence inputs that we identified from a review of DOD's acquisition-related guidance are to be provided prior to the Milestone A review, with several updated at points prior to subsequent milestone reviews. The responses to our data collection instrument from the Joint Staff, USD(I), DIA, and the service intelligence community indicated that, for each intelligence input we identified, one or more respondents reported that their office would make that input, which verified the individual inputs we identified (see table 3). DIA and the Army also remarked that the Validated Online Lifecycle Threat report will replace the Capstone Threat Assessment, and DIA, the Air Force, and the Army remarked that the Validated Online Lifecycle Threat report will replace the System Threat Assessment Report.
We have noted that these inputs are to be phased out once the Validated Online Lifecycle Threat is operational. The responses regarding when the identified intelligence inputs would be used throughout the lifecycle varied among the respondents, but for every input one or more respondents reported that the input would be made prior to Milestone A. We attribute this variance to different interpretations of inputs that do not align with every milestone, such as threat-related inputs that occur prior to Milestone A and are then continuously monitored for changes, or inputs that are made through groups that meet on a schedule independent of acquisition milestones. The responses to the data collection instrument regarding whether each input is required to be provided for every Acquisition Category I program also varied. The threat assessment and validation category of inputs represents direct inputs from the intelligence community into acquisition programs. Others, such as Critical Intelligence Parameter and Critical Program Information, are foreign threat factors monitored by the intelligence community for changes that may impact an acquisition program. For the other categories of input, the intelligence community provides varying degrees of direct and indirect inputs into the acquisition process. In some responses to our data collection instrument, respondents provided comments indicating that they were not familiar with the input; for the others, we attribute the variance to different understandings of the guidance among the respondents.
During our review of DOD guidance to identify intelligence inputs into acquisition programs throughout the acquisition lifecycle, we identified the following key guidance documents for providing intelligence support to acquisition programs: DOD Directive 5000.01: The Defense Acquisition System, issued May 12, 2003, provides management principles and mandatory policies and procedures for managing all acquisition programs, along with DOD Instruction 5000.02. This directive notes that intelligence and the understanding of threat capabilities are integral to system development and acquisition decisions, and that program managers are to keep threat capabilities current and validated in program documents throughout the acquisition process. DOD Instruction 5000.02: Operation of the Defense Acquisition System, issued January 7, 2015, provides the detailed procedures that guide the operation of the Defense Acquisition System. Regarding intelligence inputs into the acquisition process, the guidance identifies, among other things, the requirement for a Lifecycle Mission Data Plan for acquisition programs dependent upon intelligence mission data. Additionally, the guidance describes the need to consider threat projections in the context of Analyses of Alternatives. It further notes that affordability analysis should involve a DOD component’s intelligence and acquisition communities. Finally, DOD Instruction 5000.02 lists requirements for a number of intelligence inputs, such as Capstone Threat Assessments, Initial Threat Environment Assessments, System Threat Assessment Reports, and Technology Targeting Risk Assessments. According to DOD officials, several of these inputs are being phased out. 
DOD Directive 5250.01: Management of Intelligence Mission Data (IMD) in DOD Acquisition, issued January 22, 2013, establishes policies and assigns responsibilities to provide linkages between the management, production, and application of DOD intelligence mission data and accommodation of intelligence mission data in the acquisition process. It helps to synchronize the acquisition, intelligence, and requirements communities regarding intelligence integration into the requirements process and acquisition life cycle. According to Acquisition Intelligence Requirements Task Force officials, this directive is currently under revision. Chairman of the Joint Chiefs of Staff Instruction 5123.01G: Charter of the Joint Requirements Oversight Council (JROC), issued February 12, 2015, implements the Joint Requirements Oversight Council, established by statute, which supports the Chairman of the Joint Chiefs of Staff in carrying out the duties of the principal military advisor to the President, National Security Staff, and Secretary of Defense, among other functions. This instruction notes that the Secretary of Defense has designated the Under Secretary of Defense for Intelligence as one of the advisors to the Council, and identifies the Director of the Joint Staff Directorate for Intelligence as an advisor on intelligence supportability and intelligence interoperability issues, among other things. Manual for the Operation of the Joint Capabilities Integration and Development System (JCIDS), issued February 12, 2015 (including changes through December 18, 2015): The manual provides detailed guidelines and procedures for the Joint Capabilities Integration and Development System and describes interactions of that process with several other departmental processes. 
Among other things, the manual contains a content guide for intelligence supportability, providing general descriptions of categories of intelligence support, to assist with identifying intelligence support requirements and the sufficiency, or risk of shortfalls, of the intelligence infrastructure required to support a proposed potential acquisition program throughout its lifecycle. The manual indicates that in cases where the intelligence support requirements exceed the intelligence community’s ability to provide support, resources required to augment the intelligence support must be accounted for in program affordability documentation. Categories of intelligence support listed in the manual include intelligence manpower support, intelligence resource support, intelligence planning and operations support, targeting support, and intelligence mission data support, among others. Intelligence manpower support is to be addressed where the proposed acquisition will require intelligence personnel for development, testing, training, or operation. In some circumstances, the category may address necessary manpower changes or specific required skills. Intelligence resource support is to be addressed if the proposed acquisition or supporting efforts will require or depend upon intelligence funding. Defense Acquisition Guidebook: The Defense Acquisition University maintains this DOD best practice guide, which complements DOD Directive 5000.01 and DOD Instruction 5000.02. Chapter 8, Intelligence Analysis Support to Acquisition, describes various aspects of providing intelligence support to acquisition programs, such as threat intelligence support and signature and other intelligence mission data support. The Defense Acquisition Guidebook is currently under revision, according to Acquisition Intelligence Requirements Task Force officials. 
Defense Intelligence Agency Instruction 5000.002, Intelligence Threat Support for Major Defense Acquisition Programs, issued February 1, 2013: Referenced in guidance such as DOD Instruction 5000.02, the DIA instruction assigns responsibilities and establishes procedures for DIA and DOD components to provide intelligence threat support for major defense acquisition programs. In addition to the contact named above, GAO staff who made contributions to this report include Brian Mazanec, Assistant Director; Scott Behen, Pat Donahue, Ben Emmel, Amie Lesser, Jason Lyuke, C. James Madar, Ronald Schwenn, Michael Shaughnessy, and Cheryl Weissman.
DOD has reported that it expects to invest $1.6 trillion in acquiring 80 major defense acquisition programs, many of which depend on intelligence input both during the acquisition process and to effectively perform missions once deployed. The complexity of advanced weapon systems, such as the F-35, is creating increasing demand for intelligence mission data—such as radar signatures—for sensors and processes supporting warfighters. The National Defense Authorization Act for Fiscal Year 2016 includes a provision that GAO review intelligence integration into DOD acquisitions. This report evaluates, for major defense acquisition programs, the extent to which DOD has (1) processes and procedures for certifying and training personnel providing intelligence input into acquisition programs; (2) efforts to improve processes and procedures for integrating intelligence into its acquisitions; and (3) efforts to develop tools to integrate intelligence into its acquisitions. GAO compared certification and training to relevant guidance; reviewed relevant documents to identify intelligence inputs and the provision of intelligence input into acquisition programs; and interviewed cognizant officials. The Department of Defense (DOD) has developed certifications and training for acquisition and intelligence personnel, but it does not have certifications for certain personnel who provide intelligence support to acquisition programs. These personnel help integrate threat information on foreign capabilities and intelligence mission data—technical intelligence such as radar signatures and geospatial mapping data—into acquisition programs. DOD uses certifications to determine that an employee has necessary education, training, and experience. 
The lack of certifications for personnel providing intelligence support to acquisition programs has led to the services developing varying levels of training: the Air Force certifies and requires training specific to providing intelligence support, the Army offers training that is not required, and the Navy has no formal training. Without certifications for personnel providing intelligence support to acquisition programs, DOD does not have assurance that these personnel are prepared to carry out their duties. DOD has multiple efforts underway to improve processes and procedures for integrating intelligence into its acquisitions but does not require prioritization of intelligence mission data, which would identify those data most needed for a weapon system to perform its mission. A task force DOD created in 2015 to better integrate intelligence into acquisition programs identified the need for prioritization and proposed processes and procedures for doing so. Without department-wide requirements to prioritize intelligence mission data, new processes and procedures such as those developed by the task force may not be fully implemented and weapon systems could be deployed without the intelligence mission data they need to perform their missions. DOD is developing two tools for integrating intelligence into major defense acquisition programs. One tool to share threat information lacks a communication plan to obtain feedback from users to better ensure its effectiveness. Without user feedback, DOD may not receive useful information to develop the tool. The other tool is for acquisition programs to communicate their intelligence needs to the intelligence community, though the services did not identify a need for the tool and there is no mechanism to fund its implementation and operation. Without assessing the need for such a tool or plans or funding for implementation and operation, DOD may be using funds unnecessarily to develop an unneeded tool. 
GAO recommends DOD create certifications and training for intelligence support personnel, require that intelligence mission data be prioritized, develop a communication plan for a threat information tool, and determine the need to develop another tool. DOD concurred with GAO's recommendations.
Many foreign physicians who enter U.S. graduate medical education programs do so as participants in the Department of State’s Exchange Visitor Program—an educational and cultural exchange program aimed at increasing mutual understanding between the peoples of the United States and other countries. Participants in the Exchange Visitor Program enter the United States with J-1 visas. More than 6,100 foreign physicians with J-1 visas took part in U.S. graduate medical education programs during academic year 2004–05. This number was about 40 percent lower than a decade earlier, when about 10,700 foreign physicians with J-1 visas were in U.S. graduate medical education programs. Physicians participating in graduate medical education on J-1 visas are required to return to their home country or country of last legal residence for at least 2 years before they may apply for an immigrant visa, permanent residence, or certain nonimmigrant work visas. They may, however, obtain a waiver of this requirement from the Department of Homeland Security at the request of a state or federal agency, if the physician has agreed to practice in, or work at a facility that treats residents of, an underserved area for at least 3 years. States were first authorized to request J-1 visa waivers on behalf of foreign physicians in October 1994. Initially, states were authorized to request waivers for up to 20 physicians each fiscal year; in 2002, the annual limit was increased to 30 waivers per state. Physicians who receive waivers may work in various practice settings, including federally funded health centers and private hospitals, and they may practice both primary care and nonprimary care specialties. States and federal agencies may impose additional limitations on their programs beyond federal statutory requirements, such as limiting the number of requests they will make for physicians to practice nonprimary care specialties. 
Obtaining a J-1 visa waiver through a state request involves multiple steps. A physician must first secure a bona fide offer of employment from a health care facility that is located in, or that treats residents of, an underserved area. The physician, the prospective employer, or both then submit an application to a state to request the waiver. The state submits a request for the waiver to the Department of State. If the Department of State recommends the waiver, it forwards its recommendation to the Department of Homeland Security’s U.S. Citizenship and Immigration Services (USCIS). USCIS is responsible for making the final determination and notifying the physician when a waiver is granted. According to officials involved in recommending and approving waivers at the Department of State and USCIS, after review for compliance with statutory requirements and security issues, nearly all states’ waiver requests are recommended and approved. Once physicians are granted waivers, they must work at the site specified in their waiver applications for a minimum of 3 years. During this period, although states do not have explicit responsibility for monitoring physicians’ compliance with the terms and conditions of their waivers, states may conduct monitoring activities at their own initiative. For purposes of J-1 visa waivers, HHS has specified two types of underserved areas in which waiver physicians may practice: health professional shortage areas (HPSAs) and medically underserved areas and populations (MUA/Ps). In general, HPSAs are areas, population groups within areas, or facilities that HHS has designated as having a shortage of primary care health professionals and are identified on the basis of, among other factors, the ratio of population to primary care physicians. MUA/Ps are areas or populations that HHS has designated as having shortages of health care services and are identified using several factors in addition to the availability of primary care providers. 
In 2004, Congress gave states the flexibility to use up to 5 of their 30 waiver allotments each year— which we call “flexible waivers”—for physicians to work in facilities that serve patients who reside in a HPSA or MUA/P, regardless of the facilities’ location. No one federal agency is responsible for managing or tracking states’ and federal agencies’ use of J-1 visa waivers to place physicians in underserved areas. Further, no comprehensive data are available on the total number of waivers granted for physicians to practice in underserved areas. HHS’s Health Resources and Services Administration is the primary federal agency responsible for improving access to health care services, both in terms of designating underserved areas and in administering programs— such as the NHSC programs—to place physicians and other providers in them. However, HHS’s oversight of waiver physicians practicing in underserved areas has generally been limited to those physicians for whom HHS has requested J-1 visa waivers. J-1 visa waivers continue to be a major means of supplying physicians to underserved areas in the United States, with states and federal agencies reporting that they requested more than 1,000 waivers in each of fiscal years 2003 through 2005. We estimated that, at the end of fiscal year 2005, the number of physicians practicing in underserved areas through the use of J-1 visa waivers was roughly one and a half times the number practicing there through NHSC programs. In contrast to our findings a decade ago, states are now the primary source of waiver requests for physicians to practice in underserved areas. In fiscal year 2005, more than 90 percent of the waiver requests for physicians were initiated by the states, compared with fewer than 10 percent in 1995. (See fig. 1.) Every state except Puerto Rico and the U.S. Virgin Islands reported requesting waivers for physicians in fiscal year 2005, for a total of 956 waiver requests. 
In 1995—the first full year that states had authority to request waivers—nearly half of the states made a total of 89 waiver requests. During the past decade, the two federal agencies that requested the most waivers for physicians to practice in underserved areas in 1995—the Department of Agriculture and the Department of Housing and Urban Development—have discontinued their programs. These federal agencies together requested more than 1,100 waivers for physicians to practice in 47 states in 1995, providing a significant source of waiver physicians for some states. For example, these federal agencies requested a total of 149 waivers for physicians to practice in Texas, 134 for New York, and 105 for Illinois in 1995. In fiscal year 2005, the three federal agencies that requested waivers for physicians to practice in underserved areas—the Appalachian Regional Commission, the Delta Regional Authority, and HHS—requested a total of 56 waivers for physicians to practice in 15 states. With diminished federal participation, states now obtain waiver physicians primarily through the 30 waivers they are allotted each year. The number of waivers states actually requested, however, varied considerably among the states in fiscal years 2003 through 2005. For example, in fiscal year 2005, about one-quarter of the states requested the maximum of 30 waivers, while slightly more than a quarter requested 10 or fewer (see fig. 2). Collectively, the 54 states requested 956 waivers, or roughly 60 percent of the maximum of 1,620 waivers that could have been granted at their request. Of the waivers states requested in fiscal year 2005, about 44 percent were for physicians to practice exclusively primary care, while about 41 percent were for physicians to practice exclusively in nonprimary care specialties, such as anesthesiology or cardiology. An additional 7 percent were for physicians to practice psychiatry. 
A small proportion of requests (5 percent) were for physicians to practice both primary and nonprimary care—for example, for individual physicians who practice both internal medicine and cardiology (see fig. 3). More than 90 percent of the states that requested waivers in fiscal year 2005 reported that, under their policies in place that year, nonprimary care physicians were eligible to apply for waiver requests. Some of these states limited these requests. For example, some states restricted the number of hours a physician could practice in a nonprimary care specialty. Further, two states reported that they accepted applications from, and requested waivers for, primary care physicians only. Regarding practice settings, more than three-fourths of the waivers requested by states in fiscal year 2005 were for physicians to practice in hospitals and private practices, including group practices. In addition, 16 percent were for physicians to practice in federally qualified health centers—facilities that provide primary care services in underserved areas—or rural health clinics—facilities that provide outpatient primary care services in rural areas (see fig. 4). More than 80 percent of the states requesting waivers in fiscal year 2005 reported requiring facilities where the physicians worked—regardless of practice setting—to accept some patients who were uninsured or covered by Medicaid. Although states do not have explicit responsibility for monitoring physicians’ compliance with the terms and conditions of their waivers, in fiscal year 2005, more than 85 percent of the states reported conducting at least one monitoring activity. The most common activity—reported by 40 states—was to require periodic reports by the physician or the employer (see table 1). Some states required these reports to specify the number of hours the physician worked or the types of patients—for example, whether they were uninsured—whom the physician treated. 
Not all states that requested waivers conducted monitoring activities. Six states, which collectively accounted for about 13 percent of all state waiver requests in fiscal year 2005, reported that they conducted no monitoring activities in that year. The majority of the states reported that the annual limit of 30 waivers per state was at least adequate to meet their needs for J-1 visa waiver physicians. When asked about their needs for additional waiver physicians, however, 11 states reported needing more. Furthermore, of the 44 states that did not request their 30-waiver limit in each of fiscal years 2003 through 2005, more than half were willing, at least under certain circumstances, to have their unused waiver allotments redistributed to other states in a given year. Such redistribution would require legislation. Fourteen states reported that they would not be willing to have their states’ unused waiver allotments redistributed. About 80 percent of the states reported that the annual limit of 30 waivers per state was adequate or more than adequate to meet their needs for J-1 visa waiver physicians. However, 13 percent of the states reported that the 30-waiver limit was less than adequate (see fig. 5). Among the 16 states that requested 29 or 30 waivers in fiscal year 2005, 10 states reported that the annual limit was at least adequate for their needs. The other 6 states that requested all or almost all of their allotted waivers that year reported that the 30-waiver limit was less than adequate. As mentioned earlier, states can use up to 5 of their waiver allotments for physicians to work in facilities located outside of HPSAs and MUA/Ps, as long as these facilities serve patients who live in these underserved areas. While we inquired about states’ views on the adequacy of the annual limit on these flexible waivers, fewer than half of the states reported requesting flexible waivers in fiscal year 2005—the first year they were authorized to do so. 
When asked about the annual limit of 5 flexible waivers, half of the states (27 states) reported that this limit was at least adequate, but nearly one-third (17 states) did not respond or reported that they were unsure of their need for flexible waivers. The remaining 10 states reported that the annual limit of 5 flexible waivers was less than adequate (see fig. 6). Of these 10 states, 8 had also reported that the annual limit of 30 waivers per state was at least adequate for their needs, suggesting that some states may be more interested in increasing the flexibility with which waivers may be used than in increasing the overall number of waivers each year. In addition to commenting on the adequacy of the annual waiver limits, states estimated their need for additional physicians under their J-1 visa waiver programs. Specifically, 11 states (20 percent) estimated needing between 5 and 50 more waiver physicians each. Collectively, these 11 states reported needing 200 more waiver physicians (see table 2). Although 10 states reported requesting the annual limit of 30 waivers in each of fiscal years 2003 through 2005, the large majority (44 states) did not. When asked to provide reasons why they did not use all 30, many of these states reported that they received fewer than 30 applications that met their requirements for physicians seeking waivers through their state J-1 visa waiver programs. Some states, however, offered further explanations, which touched upon difficulties attracting physicians to the state, low demand for waiver physicians among health care facilities or communities, and mismatches between the medical specialties communities needed and those held by the physicians seeking waivers. 
For example: Difficulties attracting waiver physicians: One state commented that the increase in the annual limit on waivers from 20 to 30 in 2002 opened more positions in other states, contributing to a decrease in interest among physicians seeking waivers to locate in that state. Two states suggested that because they had no graduate medical education programs or a low number of them, fewer foreign physicians were familiar with their states, affecting their ability to attract physicians seeking J-1 visa waivers. Low demand for waiver physicians: Many states noted low demand for foreign physicians among health care facilities or communities in the states. Two of these states commented that they had relatively few problems recruiting U.S. physicians. Another state commented that health care facilities—particularly small facilities and those located in rural areas—may be reluctant to enter into the required 3-year contracts with waiver physicians because of their own budget uncertainties. Lack of physicians with needed specialties: One state commented that most communities in the state need physicians trained in family medicine and that few physicians with J-1 visas have that training. Similarly, another state noted a lack of demand among the health care facilities in the state for the types of medical specialties held by physicians seeking waivers. In response to a question about whether they had observed any significant changes in the number of physicians seeking J-1 visa waivers, 15 states reported seeing less interest from physicians, or fewer applications, since 2001. Some states suggested that the decline might be due to an overall reduction in the number of physicians with J-1 visas who were in graduate medical education programs. Three states mentioned the possibility that more physicians may be opting to participate in graduate medical education on an H-1B visa, which does not have the same foreign residence requirement as a J-1 visa. 
Of the 44 states that did not use all of their waiver allotments in each of fiscal years 2003 through 2005, slightly over half (25 states) reported that they would be willing, at least under certain circumstances, to have their unused waiver allotments redistributed to other states. In contrast, about one-third of the states with unused waiver allotments (14 states) reported that they would not be willing to have their unused waiver allotments redistributed. (See table 3 and, for further details on states’ responses, see app. I.) The 14 states that reported they would be willing under certain circumstances to have their unused waiver allotments redistributed listed a variety of conditions under which they would be willing to do so, if authorized by law. These conditions centered around the timing for redistribution, the approach for redistribution, and the possibility for compensation. Timing of redistribution: Seven states reported that their willingness to have their unused waiver allotments redistributed depended in part on when the redistribution would occur in a given year. Their comments suggested concerns about states being asked to give up unused waiver allotments before having fully determined their own needs for them. For example, three states reported that they would be willing to release at least a portion of their unused waiver allotments midway through the fiscal year. One state reported that it would be willing to have its unused waiver allotments redistributed once the state reached an optimal physician-to-population ratio. Finally, two states specified that states benefiting from any redistribution should be required to use the redistributed waivers within the same fiscal year. Approach for redistribution: Three states reported that their willingness to have their unused waiver allotments redistributed depended on how the redistribution would be accomplished. 
Two states reported a willingness to do so if the allotments were redistributed on a regional basis—such as among midwestern or southwestern states. Another state reported that it would be willing to have its unused waiver allotments redistributed to states with high long-term vacancy rates for physicians. This state was also willing to have its unused waiver allotments redistributed in emergency relief situations, such as Hurricane Katrina’s aftermath, to help attract physicians to devastated areas. Possibility for compensation: Two states stated that they would be willing to have their unused waiver allotments redistributed if they were somehow compensated. One state remarked that it would like more flexible waiver allotments, equal to the number of unused waiver allotments that were redistributed. The other state did not specify the form of compensation. Other issues: One state commented that it would be willing to have its unused waiver allotments redistributed as long as redistribution did not affect the number of waivers it could request in future years. Another state responded that any provision to have unused waiver allotments redistributed would need to be pilot-tested for 2 years so that its effect could be evaluated. The 14 states that reported that they would not be willing to have their unused waiver allotments redistributed to other states cited varied concerns. Several states commented that, because of physicians’ location preferences and differences in states’ J-1 visa waiver program requirements, a redistribution of unused waiver allotments could possibly reduce the number of physicians seeking waivers to practice in certain states. Physician location preferences: Three states commented that physicians seeking J-1 visa waivers might wait until a redistribution period opened so that they could apply for waivers to practice in preferred states. 
As one state put it, if additional waivers were provided to certain states, a physician might turn down the “number 15 slot” in one state to accept the “number 40 slot” in another. This concern was also raised by four states that reported they were willing to have their unused waiver allotments redistributed under certain circumstances; two of these states specifically mentioned the possible negative impact that redistribution could have on rural areas. Differences in state program requirements: One state commented that until state requirements for waivers were made consistent among states, having unused waiver allotments redistributed would benefit states with more lenient requirements or force states with more stringent requirements to change their policies. While this state did not specify what it considered to be stringent or lenient requirements, substantial differences in state programs do exist. For example, some states restrict their waiver requests solely to primary care physicians, while others place no limits on the number of allotted waivers they request for nonprimary care physicians. In another example, four states require 4- or 5-year contracts for all physicians or for physicians in certain specialties. One state commented that if it lost an unused waiver allotment to a state with more lenient requirements, it would have given away to another state a potential resource that it had denied its own communities. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions that you or Members of the Subcommittee may have. For information regarding this testimony, please contact Leslie G. Aronovitz at (312) 220-7600 or aronovitzl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Kim Yamane, Assistant Director; Ellen W. Chu; Jill Hodges; Julian Klazkin; Linda Y.A. McIver; and Perry G. Parsons made key contributions to this statement. 
States that requested 30 waivers in each of fiscal years 2003 through 2005 were not asked about their willingness to have unused waiver allotments redistributed to other states. State Department: Stronger Action Needed to Improve Oversight and Assess Risks of the Summer Work Travel and Trainee Categories of the Exchange Visitor Program. GAO-06-106. Washington, D.C.: October 14, 2005. Health Workforce: Ensuring Adequate Supply and Distribution Remains Challenging. GAO-01-1042T. Washington, D.C.: August 1, 2001. Health Care Access: Programs for Underserved Populations Could Be Improved. GAO/T-HEHS-00-81. Washington, D.C.: March 23, 2000. Foreign Physicians: Exchange Visitor Program Becoming Major Route to Practicing in U.S. Underserved Areas. GAO/HEHS-97-26. Washington, D.C.: December 30, 1996. Health Care Shortage Areas: Designations Not a Useful Tool for Directing Resources to the Underserved. GAO/HEHS-95-200. Washington, D.C.: September 8, 1995. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Many U.S. communities face difficulties attracting physicians to meet their health care needs. To address this problem, states and federal agencies have turned to foreign physicians who have just completed their graduate medical education in the United States under J-1 visas. Ordinarily, these physicians are required to return home after completing their education, but this requirement can be waived at the request of a state or federal agency if the physician agrees to practice in, or work at a facility that treats residents of, an underserved area. In 1996, GAO reported that J-1 visa waivers had become a major means of providing physicians for underserved areas, with over 1,300 requested in 1995. Since 2002, each state has been allotted 30 J-1 visa waivers per year, but some states have expressed interest in more. GAO was asked to report on its preliminary findings from ongoing work on (1) the number of J-1 visa waivers requested by states and federal agencies and (2) states' views on the 30-waiver limit and on their willingness to have unused waiver allotments redistributed. Such redistribution would require legislative action. GAO surveyed the 50 states, the District of Columbia, and 3 U.S. insular areas--the 54 entities that are considered states for purposes of requesting J-1 visa waivers--and federal agencies about waivers they requested in fiscal years 2003-05.

The use of J-1 visa waivers remains a major means of placing physicians in underserved areas of the United States. States and federal agencies reported requesting more than 1,000 waivers in each of the past 3 years. In contrast to a decade ago, states are now the primary source of waiver requests for physicians to practice in underserved areas, accounting for more than 90 percent of such waiver requests in fiscal year 2005. The number of waivers individual states requested that year, however, varied considerably. 
For example, about one-quarter of the states requested the maximum of 30 waivers, while slightly more than a quarter requested 10 or fewer. Regarding the annual limit on waivers, about 80 percent of the states--including many of those that requested the annual limit or close to it--reported the 30-waiver limit to be adequate for their needs. About 13 percent reported that this limit was less than adequate. Of the 44 states that did not always request the limit, 25 reported that they would be willing to have their unused waiver allotments redistributed, at least under certain circumstances. In contrast, another 14 states reported that they would not be willing to have their unused waiver allotments redistributed. These states cited concerns such as the possibility that physicians seeking waivers would wait until a redistribution period opened and apply to practice in preferred locations in other states.
The process for importing goods into the United States generally involves at least two private parties (exporters and importers) as well as the U.S. government. The U.S. AD/CV duty collection process typically involves the five steps summarized below and illustrated in figure 1.

1. Commerce communicates the initial estimated duty rate to CBP: Commerce issues an AD/CV duty order that specifies the products for which importers must pay AD/CV duties. The order communicates the initial estimated duty rates applicable to one or more specific exporters, producers, or both, as well as a catchall rate or, if appropriate, a countrywide rate for all other exporters and producers that were not individually investigated and that did not receive a specific rate. The order also instructs CBP to collect cash deposits at the time of importation for estimated duties owed on all entries of the applicable products. These duty rates represent Commerce's initial estimates of the level of dumping or subsidization. Commerce typically communicates the initial estimated duty rate to the public in the Federal Register.

2. CBP reviews importers' assessments of duties owed and collects the initial estimated duty from the importer: The importer determines the value of estimated duties owed by applying the initial rate set in the applicable AD/CV duty order to its imports. CBP reviews the importer's assertions for correctness and collects the required cash deposits or bonds from the importer. According to CBP officials, estimated duties are usually due within 10 days after CBP has released the product for entry into the United States.

3. Commerce determines and communicates the final duty rate to CBP: Each year, during the anniversary month of the publication of an AD or CV duty order, an interested party may ask Commerce to conduct an administrative review to determine the actual amount of dumping or subsidization and calculate a final duty rate. 
This can occur if the party believes that the initial estimated rate is too high or too low. During the administrative review, Commerce analyzes previous imports to determine the actual level of dumping or subsidization for those imports during the period under review. At the conclusion of the administrative review (typically about 12–18 months after the review began), Commerce establishes the final duty rate (also known as the liquidation rate) for the goods. If an administrative review is not requested, then the final duty rate is generally the same as the initial estimated duty rate. Commerce typically communicates the final duty rate to the public in the Federal Register. Commerce sends CBP liquidation instructions communicating the final duty rate and designating the importers, producers, or both that are associated with the entries to which the rate must be applied. According to CBP officials, the liquidation instructions are communicated first to the AD/CV Division, a headquarters unit within CBP's Office of Trade.

4. CBP instructs ports to apply the final duty rate and calculate final duties: CBP instructs staff at each applicable U.S. port of entry to assess the final duties on all relevant goods (i.e., applying the final rate to the value of applicable goods that have entered since the order was issued).

5. CBP liquidates the import entry and may issue a refund or a bill: CBP liquidates the entry, which can result in CBP's issuing a bill to the importer (if the liquidation rate is higher than the initial estimated rate) or refunding money (if the initial estimated rate is higher than the liquidation rate). If the initial estimated and final duty rates are the same, CBP liquidates the entry without issuing a bill or refund. 
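The arithmetic behind steps 2 and 5 can be sketched as follows. This is an illustrative sketch only, with invented names and figures and rates expressed as whole percentages; it is not a representation of CBP's actual systems.

```python
def liquidate(entered_value, initial_rate_pct, final_rate_pct):
    """Settle one entry at liquidation (illustrative only)."""
    deposited = entered_value * initial_rate_pct / 100  # cash deposit collected at entry
    final_due = entered_value * final_rate_pct / 100    # duties owed at the final (liquidation) rate
    diff = final_due - deposited
    if diff > 0:
        return ("bill", diff)      # final rate higher: CBP bills the importer for the difference
    if diff < 0:
        return ("refund", -diff)   # initial rate higher: CBP refunds the difference
    return ("closed", 0.0)         # rates equal: entry liquidated with no bill or refund

# A $100,000 entry deposited at a 10 percent initial rate but liquidated at 25 percent:
print(liquidate(100_000, 10, 25))  # → ('bill', 15000.0)
```

The same function also captures why a retrospective rate increase creates collection risk: the "bill" amount is owed years after the goods have already entered U.S. commerce.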
CBP must liquidate these entries within a 6-month time limit that begins when CBP receives a notice, such as final duty rate instructions from Commerce or notification from a court or another agency that the suspension of liquidation that was placed on those entries has been lifted. Otherwise, the entry will be liquidated by operation of law at the initial estimated duty rate regardless of whether the final rate has changed. This is referred to as a "deemed liquidation."

To ensure payment of unforeseen financial obligations to the U.S. government, most importers are required to post a security, usually a customs bond. The bond is like an insurance policy protecting the U.S. government against revenue loss if an importer defaults on its financial obligations. CBP allows importers to provide two types of basic importation and entry customs bonds—a continuous entry bond and a single transaction bond—to secure the duties, taxes, and fees associated with the import of goods into the United States. Continuous entry bonds are used to secure financial obligations for one or more entries for a period of up to 365 days; single transaction bonds are used to secure financial obligations related to a specific entry.

If an importer fails to pay the full amount owed on a final duty bill for an AD/CV duty entry, CBP will attempt to collect payment from the company that underwrote the bond for the entry (referred to as the "surety"). The amount CBP may be able to collect from the surety depends on how much the bond covers. In some cases, the bond issued by the surety may cover the entire amount owed; in other cases, it may cover only a small portion of the debt, depending on the size of the bond and the size of the additional duties resulting from a higher final duty rate. 
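The role of the bond can be illustrated with a minimal sketch: recovery from the surety is capped at the bond amount, so any shortfall beyond the bond is unsecured revenue loss. The function name and figures below are hypothetical.

```python
def surety_recovery(unpaid_bill, bond_amount):
    """Split an unpaid duty bill into secured and unsecured portions (illustrative)."""
    recovered = min(unpaid_bill, bond_amount)  # the surety pays at most the bond value
    uncovered = unpaid_bill - recovered        # residual exposure for the government
    return recovered, uncovered

# A $500,000 final duty bill secured by only a $50,000 bond:
print(surety_recovery(500_000, 50_000))  # → (50000, 450000)
```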
One of CBP’s key challenges is to set an accurate bond amount for any given entry that reasonably protects the amount of revenue that is potentially at risk of loss if the final duty bill for that entry is not paid in full. An importer who is billed for additional AD/CV duties has 180 days from the date of liquidation to protest the bill amount. CBP will send the importer monthly bills. According to CBP officials, if CBP does not receive full payment of the bill for additional duties within approximately 8 months of sending the bill and the importer does not file a protest, CBP sanctions the delinquent importer by requiring full payment of all estimated duties, taxes, and fees before any products subsequently imported by that importer can be released by CBP into U.S. commerce. Separately, if the bill has not been paid 60 days after it was issued, CBP will also request payment from the surety that underwrote the bond that the importer provided when the goods entered the United States. After CBP has requested payment from a surety, that surety has 180 days to pay the bond amount or protest the bill. Importers are responsible for the full amount of additional duties owed; sureties will generally cover the cost of a bill only up to the value of the bond. The AD/CV duty collection process is completed when and if the importer, the surety, or the two together pay the full amount of the duty and interest owed or the duty is written off. CBP’s ports generally handle the bill creation process, including collecting payments for duties that are not delinquent from importers. However, if a bill becomes delinquent, the Revenue Division within CBP’s Office of Finance takes the primary responsibility for collecting payment from either the importer or the surety or both. 
In general, if CBP does not receive full payment of duties and interest owed, CBP's Revenue Division researches the account and recommends next steps to CBP's Office of Chief Counsel, which determines whether options for collection are available through the legal process. The Office of Chief Counsel in turn can refer the matter to the Department of Justice for prosecution. When CBP determines that a bill for additional AD/CV duties is uncollectible, the Revenue Division and the Office of Chief Financial Officer, in conjunction with the Office of Chief Counsel, can take steps to write off the bill. Figure 2 illustrates the process for collecting payments on bills for additional AD/CV duties and writing off uncollectible bills.

Our analysis shows that the total amount of unpaid AD/CV duty bills issued for goods that entered the United States during fiscal years 2001 through 2014 was about $2.3 billion as of May 12, 2015. However, in its Performance and Accountability Report for fiscal year 2015, CBP reported that it did not expect to collect about $1.6 billion in outstanding AD/CV duty debt.

Most AD/CV duty bills are paid: We estimate that, on average, CBP collected duties owed for about 90 percent of the total number of AD/CV duty bills issued for entries from fiscal years 2001 through 2014. However, CBP's collection rate for AD/CV duties measured by the total dollar amount paid as a portion of the total amount owed averaged about 31 percent for bills issued on entries during this time. Our analysis shows that AD/CV duty bills with unpaid amounts are concentrated among a small number of importers, with 20 importers accounting for about 50 percent of the $2.3 billion owed. CBP continues to face challenges in collecting on AD/CV duty bills, attributable in part to the U.S. government's retrospective and complex process for determining final AD/CV duty rates. 
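The gap between the two collection rates cited above (about 90 percent of bills but about 31 percent of dollars) arises whenever a few large bills go unpaid. A toy calculation with invented figures shows the effect:

```python
# Nine paid bills of $10,000 each and one unpaid bill of $200,000 (all invented):
bills = [(10_000, True)] * 9 + [(200_000, False)]

count_rate = sum(paid for _, paid in bills) / len(bills)
dollar_rate = (sum(amount for amount, paid in bills if paid)
               / sum(amount for amount, _ in bills))
print(f"by count: {count_rate:.0%}, by dollars: {dollar_rate:.0%}")  # → by count: 90%, by dollars: 31%
```

A single large delinquent bill is enough to pull the dollar-weighted rate far below the bill-count rate, which is the pattern the analysis describes.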
The average lag time between entry of goods and CBP's issuing a bill for any additional duties during fiscal years 2001–2014 was about 2.6 years. Out of all AD/CV entries during this period that we examined, the final duty rate was higher than the initial estimated rate assessed upon entry about 18 percent of the time, and the final rate was lower than the initial estimated rate about 19 percent of the time. According to agency officials, CBP is considering the feasibility of contracting with private collection agencies to pursue debts for which the agency has exhausted all administrative collection efforts, including claims against applicable surety bonds.

Our analysis of CBP data on AD/CV duty bills for entries occurring in fiscal years 2001 through 2014 identified about 41,000 unpaid bills totaling about $2.3 billion, as of May 12, 2015. Antidumping duties account for almost this entire amount, with only about $584,000 related to countervailing duties. Of the $2.3 billion, about $2 billion (or 86 percent) is principal, and the remaining $321 million (or 14 percent) is accrued interest.

We calculated collection rates for bills issued for goods subject to AD/CV duty orders that entered the United States since fiscal year 2001. We found that while CBP's collection rate for AD/CV duties is generally high when measured as the percentage of bills collected, the rate is lower when measured as the percentage of dollars collected. CBP collected, on average, 90 percent of the bills issued, but about 31 percent of the dollar amount owed, indicating that although CBP collects payment on most bills it issues, it sometimes does not collect payment on bills with large dollar amounts (see fig. 3). For the approximately 41,000 unpaid bills for goods subject to AD/CV duty orders and entering the United States in fiscal years 2001 through 2014, the average unpaid bill was about $57,000, and the median unpaid bill was about $29,000. 
Our analysis identified 127 unpaid bills for at least $1 million, with the largest unpaid bill totaling over $12 million. Figure 4 shows the distribution of unpaid bills by amount of uncollected AD/CV duties for entries during fiscal years 2001 through 2014. While only about 26 percent of the bills issued are for $50,000 or more, these bills represent about 77 percent of the total amount unpaid.

Importers of products from China and 20 other countries account for all unpaid AD/CV duty bills as of May 12, 2015. Of the $2.3 billion in unpaid bills, China is the country of origin for entries associated with about $2.2 billion of the uncollected amount, or 95 percent. While products from China represent the majority of the uncollected amount, China is also the largest exporter of goods subject to AD/CV duties. Of the approximately $5.5 billion in total liquidated AD/CV duties for goods imported into the United States in fiscal years 2001 through 2014, about $3.4 billion, or 62 percent, was for goods imported from China.

In analyzing CBP data for entries in fiscal years 2001 through 2014, we found that the top six product types—without regard to country of origin—accounted for approximately 89 percent of the total amount of uncollected AD/CV duties. These six product types were associated with about a third of the 396 AD/CV duty orders in place during this period that resulted in unpaid duties. Figure 5 shows the top product types associated with uncollected AD/CV duties.

CBP data show that about 33,000 importers made entries subject to AD/CV duties in fiscal years 2001 through 2014. Of those, 818 importers (or 2.5 percent) had unpaid AD/CV duty bills as of May 12, 2015. Within this group of importers with unpaid bills, the top 20 importers owe about 50 percent of the total $2.3 billion unpaid, and the top 4 importers owe about 26 percent of that amount (see fig. 6). 
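The concentration statistic behind figure 6 is simply the share of the total unpaid amount owed by the largest N debtors. A sketch with invented importer names and balances:

```python
def top_n_share(unpaid_by_importer, n):
    """Fraction of the total unpaid amount owed by the top n importers (illustrative)."""
    amounts = sorted(unpaid_by_importer.values(), reverse=True)
    return sum(amounts[:n]) / sum(amounts)

# Hypothetical unpaid balances in $ millions:
unpaid = {"importer_a": 220, "importer_b": 169, "importer_c": 120, "importer_d": 90,
          "importer_e": 30, "importer_f": 20, "importer_g": 10}
print(f"top 4 owe {top_n_share(unpaid, 4):.0%} of the total")
```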
Of the top 20 importers with unpaid duties, 17 stopped importing before bills for their entries were issued. For example, the importer with the largest dollar amount unpaid had 4,199 unpaid AD/CV duty bills, amounting to $220 million, or 9.4 percent of the total $2.3 billion in unpaid duties (see table 1). This importer, which imported wooden bedroom furniture from China, had not paid about 98 percent of the total amount it was billed for imports subject to AD/CV duties that entered the United States from August 2004 through July 2007. CBP issued the first bills to this importer for some of these entries in August 2010 after resolution of litigation. Similarly, importer 18, which imported preserved mushrooms from China, entered goods into the United States from February through May 2012, and the first of this importer’s 162 delinquent bills for these entries was issued in April 2014. All importers in table 1 were sanctioned—meaning that CBP would require full payment of all estimated duties, taxes, and fees before any products subsequently imported by these importers could be released by CBP into U.S. commerce. In some cases, importers continued to be involved with importing after being placed on sanction. Importer 2, which imported pure magnesium ingot from China, owed $169 million on 271 delinquent bills. According to CBP officials, this importer may have subsequently incorporated under a different name, enabling it to resume importing as a new entity. According to CBP officials, the agency requested single transaction bonds on the new entity’s imports. However, generally when importers reincorporate as new entities, it is extremely difficult and resource-intensive to hold the new entity liable for the previous entity’s AD/CV debt. Importer 11, which imported preserved mushrooms from India, entered goods into the United States from October 2000 through June 2012. 
While most of the 1,061 delinquent bills for these entries were not issued until 2013, 92 of these bills were issued from July through September 2008. The importer was able to continue importing despite these unpaid bills because the importer began making payments that ultimately totaled $2.5 million. However, after the importer stopped making these payments, CBP sanctioned the importer in January 2010.

Further, in 2008, we reported on the top 20 importers by amount of unpaid AD/CV duties at that time. In 2015, CBP determined that all of the top 20 importers we listed in 2008 were no longer actively importing. However, according to CBP officials, importer 14 from our 2008 report, which at that time owed $10 million on 48 unpaid bills, was put on sanction in 2010. While this company no longer acts as an importer of record, it has continued to act as a consignee, meaning that another company imports goods that are delivered to importer 14.

CBP liquidates an entry by issuing a duty bill, issuing a refund, or closing the entry as paid, depending on the final AD/CV duty rate determined by Commerce. Many unpaid duty bills are associated with at least 2 years of lag time between the entry of goods into the United States and when CBP liquidates the entry with a duty bill. In 2008, we reported that according to agency officials, a long lag time between entry of AD/CV goods and final duty rate assessment increases the risk of uncollected duties because, in the interim, importers may disappear, cease operations, or declare bankruptcy. In 2015, CBP reported that the longer the lag time between entry and liquidation, the more difficult it is to collect any additional duties owed because of an increase in the final rate. Litigation may extend the length of time between entry and liquidation by several years. From fiscal year 2001 through fiscal year 2014, CBP liquidated entries subject to AD/CV duties in about 31 months (or 2.6 years) on average. 
The median time between entry and liquidation was about 24.5 months (about 2 years). About 10 percent of entries were liquidated 66 months (5.5 years) or longer after entry of goods, with 169.1 months (14.1 years) being the longest time between entry and liquidation. Figure 7 shows the percentile distribution of the number of months between entry and liquidation.

In analyzing CBP entry data for fiscal years 2001 through 2014, we found that the final duty rates increased 18 percent of the time, decreased 19 percent of the time, and remained unchanged 63 percent of the time. In 2008, we reported that the retrospective assessment of a final duty rate presents a challenge to CBP efforts to collect AD/CV duties because whenever the final duty rate is higher than the initial estimated duty rate, the importer may be unwilling or unable to pay the additional duties owed. The average rate change for paid bills was about 48 percent, with a median rate change of 36 percent. In contrast, our analysis of entries that resulted in unpaid bills found that, in general, bills with higher rate changes were more likely to be unpaid. For example, the average increase for unpaid bills was 198 percent, with a median rate change of 81 percent. Further, bills with a 100 to less than 200 percent increase in the rate went unpaid about 39 percent of the time, and bills with a 200 to less than 500 percent increase in the rate went unpaid about 79 percent of the time. (See fig. 8.)

CBP has a process in place to collect delinquent AD/CV duty debt but estimates that a significant portion of debt is likely uncollectible. When the final duty rate exceeds the initial estimated duty rate, importers are billed for the additional duties owed. When importers fail to pay their bills, CBP takes several steps to collect. First, if the importer can be located, CBP contacts the importer and attempts to secure payment. 
If necessary, CBP takes steps to obtain valid contact information for the importer. Next, if the entry is secured by a bond, CBP will collect from the surety that issued the bond. If the surety has paid and the importer is not responsive, then CBP investigates to determine whether the importer responsible for paying the bill has domestic assets or a clear successor entity and refers the bill to the Office of Chief Counsel, if appropriate. The amount of the bill that remains unpaid after CBP has exhausted all efforts to collect from the importer and the surety is considered uncollectible. According to CBP officials, once CBP has taken all measures to collect and determined that a bill is uncollectible, CBP terminates collection action. In its Performance and Accountability Report for fiscal year 2015, CBP reported that about $1.6 billion of AD/CV duty debt was uncollectible.

As noted earlier, CBP has reported that the length of time between the entry of a product and the issuance of a bill for additional duties poses a challenge to collecting AD/CV duties owed, indicating that the more time that elapses before payment, the more difficult it is to collect. Our analysis of CBP data on AD/CV duty bills for entries occurring in fiscal years 2001 through 2014 shows that, of the approximately 41,000 unpaid bills, the average age was about 4 years, and the median age was 4.5 years. In addition, 977 unpaid bills were issued between 10 and 13 years ago; based on CBP's reporting on collection challenges, CBP's likelihood of collecting those bills appears extremely low. Figure 9 shows the distribution of delinquent AD/CV duty bills by age.

Once CBP has exhausted its collection efforts, the next step is for CBP staff to prepare the bill for write-off by documenting what was found during the investigation of the debt and submitting this documentation to the Office of Chief Counsel and the Chief Financial Officer for review and approval. 
CBP provides staff guidance on steps and documentation required to prepare an unpaid bill for write-off but does not set specific time frames for writing off uncollectible debt. While CBP staff may begin the write-off process for uncollectible bills as they are identified, according to agency officials, preparing bills for write-off is generally a lower priority than pursuing debt considered collectible. As a result, CBP does not consistently write off bills. Figure 10 shows the dollar amount of AD/CV duty bills written off each year since 2001. As of October 2015, CBP had written off about $252 million in AD/CV duties from 2001 through 2014. CBP officials stated that the high dollar amount of write-offs in 2013 was not attributable to a specific cause.

Currently, according to agency officials, CBP is considering the feasibility of contracting with private collection agencies to pursue debts for which the agency has exhausted all administrative collection efforts, including claims against applicable surety bonds. According to agency officials, it is not clear whether the proposal to use private collection agencies will go forward. Further, officials stated that CBP's write-off activity has slowed while the agency considers this option.

CBP has undertaken several efforts to improve its collection of AD/CV duties or to protect against the risk of uncollectible final duty bills through enhanced bonding; however, these efforts had yielded limited results as of May 2016. For example, CBP launched an initiative to reduce processing errors that result in CBP closing duty bills at the initial estimated duty rate rather than the final duty rate; in such cases, the initial duty paid may be significantly higher or lower than the final duty amount owed. Though the initiative has shown positive results, as of May 2016, its application had been limited. 
In addition, CBP had not collected and analyzed data systematically to help it monitor and minimize these duty processing errors. As a result, CBP does not know the extent of these errors and cannot take timely or effective action and avoid the potential revenue loss they may represent. In another effort to improve its collection of AD/CV duties, CBP formed a five-person AD/CV Duty Collections Team. While this team, which focused on collecting delinquent bills, produced some positive results, it has recently been hampered by staffing turnover and unfilled positions. Finally, CBP has taken steps to improve its use of bonding as a tool to protect revenue when CBP believes there is a high likelihood that final duty bills will not be paid. However, according to CBP officials, a ruling issued by the World Trade Organization (WTO) has limited CBP's ability to use bonding to protect AD/CV duty revenue.

As of May 2016, CBP had not begun a systematic effort to regularly collect, analyze, report, and monitor data and actions taken to help it minimize entries liquidated at the initial estimated duty rate rather than at the final duty rate. This can happen when an entry is either (1) liquidated prematurely before CBP receives liquidation instructions or (2) deemed liquidated. In either case, the entry is liquidated at the initial estimated duty rate. Thus, when the final duty rate is greater than the initial estimated duty rate, CBP might lose the opportunity to collect additional revenue and may not be fully remedying unfair trade practices. Conversely, when the final duty rate is lower than the initial estimated duty rate, CBP fails to provide importers any refunds owed to them. From calendar years 2008 through 2015, Commerce issued 6,447 messages containing liquidation instructions. The process of liquidating entries can be complex and requires a considerable amount of work by CBP officials. 
After receiving Commerce's liquidation instructions, among other actions, CBP must ensure that the instructions are sufficiently clear so that CBP officials located across the 338 ports of entry and other locations that process AD/CV entries can identify the affected entries and apply the appropriate rate. Each AD/CV duty order is unique because it pertains to a specific combination of goods; country or countries of origin; and exporters, producers, or both. In addition, CBP officials said that the instructions in an AD/CV duty order may apply to only a few entries at a single U.S. port, or to tens of thousands of entries at multiple ports, and may cover entries over a span of several years.

According to CBP officials and documents, processing errors by officials at the ports have resulted in entries that are liquidated too early, before Commerce has issued its final liquidation instructions. CBP officials attributed the premature liquidations typically to human error, but a December 2015 CBP document also attributed the problem to a lack of uniformity in the way individual ports and offices liquidate AD/CV duty entries. CBP could not provide us with an analysis that assesses the frequency of premature liquidations and its effects on revenue. According to CBP officials, they collected some data from the ports about the number of entries liquidated prematurely during the first 5 months of fiscal year 2015. However, CBP officials said that the data were incomplete since port participation was voluntary, and not all ports participated. Those ports that did participate did not do so consistently over the 5-month period. On the basis of our own analysis of data provided by CBP, we confirmed that CBP has liquidated some entries before receiving liquidation instructions from Commerce. 
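One simple way to screen for premature liquidations of this kind, assuming entry and liquidation dates are available for each entry, is to flag unusually short entry-to-liquidation intervals for manual review, since AD/CV liquidation typically takes well over a year. The threshold, record layout, and entry IDs below are hypothetical:

```python
from datetime import date

def flag_quick_liquidations(entries, max_days=45):
    """Return IDs of entries liquidated suspiciously soon after entry (illustrative)."""
    return [eid for eid, entered, liquidated in entries
            if (liquidated - entered).days <= max_days]

sample = [
    ("E1", date(2012, 3, 1), date(2012, 3, 28)),   # ~27 days: flag for review
    ("E2", date(2012, 3, 1), date(2014, 6, 15)),   # typical multi-year lag
    ("E3", date(2013, 1, 10), date(2013, 2, 5)),   # ~26 days: flag for review
]
print(flag_quick_liquidations(sample))  # → ['E1', 'E3']
```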
We identified 94 entries subject to AD/CV duties during the October 2000 through September 2014 period we reviewed whose entry and final liquidation dates were approximately 30 days apart. We then asked CBP to determine why the liquidations had occurred so quickly for 20 of these entries, when it typically takes over a year after the goods enter the United States before CBP liquidates entries subject to AD/CV duties. CBP officials told us that of the 20 entries we provided, 7 (or about 35 percent) had been liquidated before CBP received final liquidation instructions from Commerce, and 9 were liquidated after CBP received final liquidation instructions. CBP could not determine whether the remaining 4 entries from our sample had been liquidated before receiving the final liquidation instructions.

While CBP liquidates most AD/CV entries within the 6-month statutory time limit, CBP data show that a number of entries are deemed liquidated. Data CBP provided to us showed that of all the entries that CBP liquidated from fiscal years 2008 through 2014, approximately 10 percent were deemed liquidated. However, according to CBP officials, the data provided do not capture all of the entries that were deemed liquidated. In June 2005, CBP issued guidance for ports and offices to use a special code to identify all liquidations not completed within the statutory time limit; however, according to CBP officials, many CBP officials at the ports did not appropriately identify these liquidations using the special code and were using other codes instead.

In response to our request for information on the possible effects that deemed liquidations might have on revenue, in March 2016 CBP provided an estimate for the 3-year period from 2010 through 2012, explaining that it would be too labor intensive to provide data for the entire 7-year period specified in our request. 
According to the CBP officials, from 2010 through 2012, deemed liquidations resulted in CBP not billing importers for approximately $13.9 million in duties owed because of an increase in the duty rate and not refunding importers approximately $465,000 because of a decrease in the duty rate; however, because these amounts represent only the entries that were properly coded, they may understate the effects of deemed liquidations during the 3-year period.

In October 2014, CBP announced an initiative to centrally oversee the AD/CV duty liquidation process and thus reduce the number of liquidation processing errors that occur at the ports. To staff this pilot initiative, in November 2015 CBP formed a nine-person team known as the Antidumping and Countervailing Duty Centralization Team (ACT) within its Office of Trade. The team was composed of representatives from CBP's Centers of Excellence and Expertise who were on temporary assignments, which were scheduled to end in May 2016. According to a CBP document, ACT is intended to provide uniformity in the interpretation and application of the liquidation instructions and to provide CBP insight into processing errors. The ACT's role is to review liquidation instructions communicated from CBP's headquarters, identify affected entries, and communicate this information to the ports and offices to help ensure the ports liquidate entries within the statutory 6-month time limit as well as correct any entries mistakenly liquidated before receiving Commerce's liquidation instructions. As of February 2016, CBP officials credited ACT with having identified and liquidated approximately $780,000 associated with entries at one port that otherwise would not have been liquidated in a timely manner. CBP has estimated that the effort to centrally manage the AD/CV duty liquidation process could save approximately $3.7 million in work hours annually. 
According to CBP officials, the number of ports and offices that participated in the ACT pilot varied over time, since participation was voluntary and inconsistent. Also, a few large ports did not participate. In addition, CBP officials said that implementation of the ACT’s liquidation advice to the ports was not mandatory. In April 2016, CBP issued guidance to make the ACT a permanent structure, but CBP officials said that, as of May 2016, CBP had not assigned any staff permanently. CBP’s guidance also made it mandatory for all ports to work with the ACT, although CBP officials at ports and offices will continue to play the lead role in liquidating AD/CV entries as before. According to the April 2016 guidance, in the event of disagreement between the ACT and officials of one or more ports and offices about how to resolve a liquidation question, the relevant officials at the ports and offices are to contact the appropriate officials within the AD/CV Division of the Office of Trade to arrive at a decision. As part of the ACT initiative, in February 2015 CBP developed a new data-management ACT portal based on data from ACS and ACE to enable team members to identify entries subject to Commerce liquidation orders. According to CBP officials and documents, until the creation of this portal, CBP had a limited ability to accurately identify entries that had been liquidated prematurely or outside the 6-month statutory time limit. These officials said that they plan to use the ACT portal to collect and analyze data for management to avoid premature and deemed liquidations and report on progress on a quarterly basis, but have not issued guidance to this effect. CBP also had no plans to regularly collect data to show the effects of premature and deemed liquidations on revenue. According to CBP officials, fiscal year 2017 will be the first full year when data from all ports are collected. 
Federal standards for internal control state that in order for an agency to run and control its operations efficiently and effectively, agency managers must have sufficient information to compare actual performance against planned or expected results. In addition, managers must collect data to understand the reasons for any differences between the actual performance and the planned or expected results. Finally, managers must take steps to resolve these differences. For these reasons, internal control standards for federal agencies stress the importance of (1) obtaining and using quality information, (2) regularly monitoring that information, and (3) taking steps to ensure that agencies achieve their objectives. Because CBP does not systematically collect and analyze data on a regular basis, CBP is not able to determine the extent to which premature and deemed liquidations are taking place or take timely and effective action to avoid premature or deemed liquidations and the potential revenue loss they represent. As discussed previously, CBP’s Revenue Division within its Office of Finance is responsible for collecting all debt owed to CBP, including AD/CV duty debt. In March 2014, CBP formed a dedicated five-person AD/CV Collections Team within the Revenue Division to focus strictly on collecting unpaid AD/CV duty bills. 
The goals set out for the team include, among other things: enhancing CBP’s technical expertise with regard to the unique complexities of the AD/CV duty entry, suspension, liquidation, and collection processes; enabling CBP to take a more systematic approach to the collection of unpaid AD/CV duty bills rather than treating each unpaid bill as an isolated transaction; initiating collection activity on AD/CV duty debts earlier through research and analysis; and assisting port officials in identifying importers that are unable or unwilling to pay outstanding debts at an early stage and helping to determine what actions, if any, CBP can take to reduce the possibility that these importers will not fully pay their bills. CBP officials credited the AD/CV Collections Team with several accomplishments. According to CBP’s October 2015 report to Congress, the team has enhanced CBP’s technical expertise with regard to the complexities of the AD/CV duty entry, suspension, liquidation, and collection processes. In conjunction with CBP’s National Targeting Center, the Office of Chief Counsel, and the AD/CV National Targeting and Analysis Group, in April 2015 the team initiated Operation Lost and Found, which identified over 100 active importers with links to inactive importers with delinquent bills. As a result, CBP successfully identified approximately $1.4 million in AD/CV duty refunds that CBP owed to the active importers and applied these funds against the debt owed to the U.S. government by the delinquent importers. In April 2014, the team, working with other CBP units and the Department of Justice, participated in a surge effort that resulted in the liquidation of 72 AD/CV entries associated with inactive importers but secured by bonds worth $14.2 million. The surge effort enabled CBP to collect revenue on bonds that otherwise would have been lost. CBP officials also stated that the team has had some success in collecting debt more efficiently and effectively. 
According to CBP officials and data, staff turnover has hampered the team’s ability to further improve CBP’s AD/CV duty collection efforts. Since the formation of the AD/CV Collections Team, three of the five original collection specialists with in-depth expertise have left their positions. CBP has hired one specialist, but the team remains understaffed by two positions as of April 2016. CBP has initiated efforts to revise the form it uses to obtain information about importers in order to collect more comprehensive information, but CBP officials noted that the revised form may have a limited impact on collections. CBP’s form 5106, known as the Importer Identification Input Record, is an important source of importer information for CBP and must be submitted by an importer or his or her representative before the importer’s goods can enter a U.S. port of entry. The information is used by CBP in decisions involving bond coverage; the entry and release of goods from the ports; the payment of taxes, duties, and fees; and the issuance of bills and refunds. The revisions to the form are intended to enhance CBP’s ability to assess the risk that an importer may not pay all required duties, taxes, and fees. The current form 5106 requires the importer to provide the company’s name, mailing address, and physical address. For tracking purposes, the form also requires the importer to provide a unique identifying number. This can be an Internal Revenue Service Taxpayer Identification Number (for a company), a Social Security Number (for an individual), or a unique importer number assigned by CBP. The revised draft of the form 5106 contains several fields not present in the current form. For example, it contains fields for importers to submit additional information about the company, such as the names of key company officials, as well as the name of the importer’s primary banking institution. 
The revised form also asks importers to estimate how many entries they will have during a given year. In April 2016, CBP officials stated that CBP’s Office of Trade was in the process of finalizing the revised form for submission to the Office of Management and Budget. However, CBP officials did not know when the Office of Management and Budget’s approval of the revised form would occur. CBP officials also did not know when importers would be required to begin using the revised Importer Identification Input Record. While the revised form will provide CBP some additional information about importers that it does not collect now, CBP officials cautioned that the collection and analysis of this information may have only a modest impact on the collection of AD/CV duty debt because CBP will accept revised forms with incomplete information. Moreover, deceptive importers may provide false information. CBP officials said that by regulation, importers are only required to provide the company’s name, mailing and physical addresses, and unique identifying number in order for CBP to process an entry; no other information is required. These officials also stated that requiring importers to provide additional information would require a change in the regulation, which CBP does not plan to make. According to CBP officials, importers who do not provide the additional information in the form will be viewed as high-risk importers and could be subject to added inspection at the time the import enters the United States. CBP has undertaken efforts to improve the use of bonds by taking steps to centralize bond management and changing the bond formulas to address the risk of uncollected AD/CV duties. However, according to CBP officials, challenges such as limitations within ACS and an adverse WTO ruling limit CBP’s ability to use bonds as a tool to protect against the risk of uncollectible final duty bills. 
CBP began to centralize the management of continuous entry bonds in June 2005 and plans to centralize the management of single transaction bonds by July 23, 2016. For both types of bonds, centralization moves the responsibilities for managing bonds and maintaining records to a single unit within the Revenue Division of the Office of Finance. As part of the process of transitioning from ACS to the Automated Commercial Environment (ACE), CBP is also in the process of transitioning from a paper-based customs bond system to an electronic customs bond system called eBonds. Once importers are required to only use ACE, CBP expects most bond transactions to occur through eBonds. According to CBP officials, the creation of a central automated repository for eBonds will make it easier for CBP to collect payments from sureties because it will reduce errors often found in paper bonds, such as missing or incomplete information. According to CBP officials, such errors are frequently cited by sureties (the companies that underwrite the bonds) in litigation and protests as a reason for not having to make payment. According to surety association officials, CBP’s transition to an electronic bond will also enable sureties to more closely control the issuance of such bonds because brokers will now be required to electronically submit all documentation used to underwrite customs bonds to the surety before the surety can submit the bond to CBP. In addition to facilitating the transition to an electronic bond system, CBP’s full transition from ACS to ACE will also enable CBP to track the existence of multiple bonds for a single entry. In some instances, an import specialist at a port may decide that additional bond coverage is needed. For example, a CBP official may determine, based on analysis of the characteristics of an AD/CV duty entry and the importer’s record, that the importer should obtain an additional single transaction bond. 
ACE has fields for recording the existence of more than one bond for a single entry. By contrast, ACS can only record the existence of one bond, and any additional bonds (such as single transaction bonds) are recorded by the port officials by entering that information in the “notes” section of ACS. In a June 2011 report, the Department of Homeland Security Office of Inspector General found that port officials did not consistently record the existence of single transaction bonds in ACS. CBP officials we met with in November 2015 told us that this continues to be a problem. Consequently, CBP does not have an accurate count of single transaction bonds. CBP plans to complete its transition to ACE by December 2016. As shown previously in table 1 and as discussed in our 2008 report, CBP’s standard continuous entry bond formula provides little protection of AD/CV duty revenue when the final amount of AD/CV duties owed significantly exceeds the amount of the bond. As part of its efforts to utilize bonds more effectively, CBP updated the guidance it uses to calculate the value of continuous entry bonds and single transaction bonds to better protect against the risk of nonpayment. Continuous entry bonds: CBP’s Office of Finance updated the formula in January 2011, modifying the bonding requirements for importers subject to AD/CV duties depending on whether they have unpaid bills. Importers who do not have unpaid bills are assessed a bond equal to 10 percent of the amount the importer had to pay in duties, taxes, and fees over the preceding 12 months. In contrast, importers with unpaid bills are assessed a bond equal to 10 percent of the amount the importer had to pay in duties, taxes, and fees over the preceding 12 months plus an additional amount if the unpaid bill is more than 210 days old. 
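The January 2011 continuous entry bond formula described above can be sketched in a few lines. The report does not state how the additional amount for unpaid bills more than 210 days old is calculated, so it appears here as a hypothetical parameter:

```python
def continuous_bond_amount(prior_12mo_duties_taxes_fees,
                           has_unpaid_bills,
                           oldest_unpaid_bill_age_days,
                           additional_amount=0.0):
    """Sketch of the January 2011 continuous entry bond formula.

    The 10 percent base comes from the report; additional_amount is a
    hypothetical stand-in for the unspecified surcharge applied when an
    unpaid bill is more than 210 days old.
    """
    bond = 0.10 * prior_12mo_duties_taxes_fees
    if has_unpaid_bills and oldest_unpaid_bill_age_days > 210:
        bond += additional_amount
    return bond

# An importer with $2 million in prior-12-month duties, taxes, and fees
# and no unpaid bills is assessed a bond of about $200,000.
print(continuous_bond_amount(2_000_000, False, 0))
```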
Single transaction bonds: CBP issued guidance to port officials in May 2012 for assessing the requirement for additional bonding as well as determining the value of a single transaction bond in cases where port officials have developed a reasonable belief that acceptance of an entry secured by an existing continuous entry bond would place future financial obligations in jeopardy. The guidance states that port officials should take into account the amount of the importer’s continuous entry bond before making a determination that an additional single transaction bond is required. The guidance also states that CBP officials must judge each transaction or shipment on a case-by-case basis and cannot depend solely on product, country of origin, general trade data, noncompliance within an industry, or allegations. CBP has attempted to utilize bonds more effectively to address the risk of nonpayment of future obligations; however, according to CBP officials, a July 2008 WTO ruling has constrained CBP’s use of bonds because CBP had to change its methods for increasing bond requirements as a result of the ruling. Because the standard bond formulas for continuous entry bonds in general only cover a portion of the amount of revenue at risk of loss if final AD/CV duties are not paid, CBP attempted an enhanced bonding initiative in 2004. This initiative required all shrimp importers from certain countries to obtain a continuous entry bond equal to 100 percent of the estimated AD/CV duties for items imported over the previous 12 months. However, according to WTO documents, in July 2008 the Appellate Body of the WTO reported that it determined that the enhanced bonding initiative was inconsistent with WTO obligations. According to CBP officials and documents, the WTO ruling resulted in CBP’s having to eliminate the enhanced bonding requirement. 
Moreover, officials explained that since the WTO found that CBP’s enhanced bonding practices were not compliant with WTO obligations because CBP did not sufficiently link the risk addressed by the bond to the entire shrimp industry, the only option for increasing bond requirements to secure revenue is to target individual importers based on their importing record. According to CBP officials and reports, although the WTO found the manner in which CBP applied its enhanced bonding requirement to be inconsistent with WTO principles, it did not disagree with the concept of appropriately addressing risk through revised bonding requirements. As discussed previously, in January 2011 and May 2012, respectively, CBP updated its formulas for setting continuous entry bond and single transaction bond requirements. As of June 2016, CBP continues to use both continuous entry bonds and single transaction bonds as tools to attempt to ensure the payment of unforeseen obligations to the U.S. government; however, according to CBP officials, in response to the WTO ruling, CBP has exercised more caution in using bonds, concerned about the risk of litigation, which could tax agency resources and result in adverse rulings. According to CBP officials, CBP now determines the requirement for an importer to obtain both types of bonds on a case-by-case basis. For continuous entry bonds, in practice, CBP’s application of bonding requirements is based not solely on applying the January 2011 guidance described above, but also on an assessment by CBP’s Revenue Division of whether an importer’s current continuous entry bond will be sufficient to address his or her estimated AD/CV duty requirements during the previous calendar year or the last 12 months. Based on CBP’s assessment of current continuous entry bond sufficiency, from January 2014 through January 2016 CBP issued formal demands to 35 importers of goods subject to AD/CV duties to purchase a larger continuous entry bond. 
The increases in the amount of the bond demanded ranged from $20,000 to $550,000. For single transaction bonds, CBP required importers to submit an additional single transaction bond on 40 occasions from July 2013 through November 2015. The value of the additional single transaction bonds required ranged from $223 to $73,515. CBP did not have data for any other period within the period of our review, fiscal years 2001 through 2014. CBP’s limited analysis of the risk to revenue from potentially uncollectible AD/CV duties (nonpayment risk) does not accurately assess country- and product-associated risk or risks associated with other entry characteristics and misses opportunities to identify and mitigate nonpayment risk. In its 2014 report to Congress on AD/CV duties, CBP presented a data analysis to Congress that includes a summation of uncollected duties from five cases associated with products from China representing the largest dollar amount of uncollected duties. CBP officials said that, based on this analysis of uncollected duties, entries of these five products from China comprise the largest current risk of AD/CV duty nonpayment. The standard definition of risk with regard to a negative event that could occur includes both the likelihood of the event and the significance of the consequences if the event occurs; however, CBP does not attempt to assess either of these for any given entry of goods subject to AD/CV duties entering U.S. customs. As our analysis of CBP data demonstrates, a more comprehensive analysis of CBP’s available data is feasible and could help CBP better identify key risk factors and mitigate nonpayment risk, predict future risk levels for certain types of entries, and also evaluate the effects of past policy changes, such as bonding requirements, on nonpayment risk. 
CBP assesses the general risk of uncollected AD/CV duties retrospectively by examining its tally of the total dollars owed but does not consider factors related to the probability of loss for any given entry, such as the proportion of unpaid bills in that product. Federal internal control standards state that agency managers should comprehensively identify risks and analyze them for their possible effects. In doing so, managers should consider all significant interactions between the entity and other parties and changes within the agency’s external environment. In this way, managers can estimate the risk’s significance, assess the likelihood that the risk will occur, and decide what actions need to be taken to manage the risk. CBP has a statutory responsibility to collect all revenue due to the U.S. government that arises from the importation of goods. For entries of goods subject to AD/CV duties, the risk to CBP’s revenue collection from duty nonpayment, in terms of expected dollar loss, is the probability of nonpayment (risk likelihood) times the net duties owed to CBP (risk significance). CBP’s 2014 report to Congress on outstanding AD/CV duties considers total outstanding debt but not risk likelihood or risk significance at the level of individual entries. It provides summary statistics on open bills— stating that importers of goods from China account for over 90 percent of outstanding AD/CV duties as of April 1, 2014, and that the five largest cases in terms of open AD/CV duty bills involve five products from China: fresh garlic, wooden bedroom furniture, freshwater crawfish, honey, and preserved mushrooms. CBP officials told us that they consider this to be an assessment of how CBP views its AD/CV duty collection risk and that entries of these five products comprise the largest current risk of AD/CV duty nonpayment. 
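The definition of nonpayment risk stated above translates directly into a one-line calculation; the entry figures below are purely illustrative:

```python
def expected_loss(p_nonpayment, net_duties_owed):
    """Expected dollar loss for one entry: the probability of nonpayment
    (risk likelihood) times the net duties owed to CBP (risk significance)."""
    return p_nonpayment * net_duties_owed

# A hypothetical $500,000 final duty bill with a 4 percent chance of
# nonpayment carries an expected loss of about $20,000.
print(expected_loss(0.04, 500_000))
```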
CBP officials stated that their definition of risk, in tallying total amount of duties uncollected, does not consider CBP’s total exposure to a product category (which includes paid and unpaid bills) or factors related to the probability of loss for individual entries. However, as illustrated in figure 11, entities may have substantially different risk profiles even if the total dollar loss—CBP’s measurement of risk—is the same. With regard to assessing the risk associated with entries subject to AD/CV duties, CBP officials said that the agency was concerned only with the total amount of AD/CV duties billed but not paid. However, these officials also noted that this approach was a policy decision that could be revisited. While CBP does not presently use its risk analysis to target specific high-risk entries, CBP officials also said that the results of their data analysis have been interpreted by some CBP and Commerce officials as guidance that could be used for targeting. If CBP were to use its current definition of risk to assess the risk level of individual entries, such an analysis could identify entries with a lower probability of nonpayment based on past history as relatively riskier than entries that have a higher probability of nonpayment. We developed the following examples to illustrate the comparative risk of entries according to this definition of risk as a total of uncollected dollars. An entry of product C, which is associated with a 1.1 percent overall nonpayment rate and has imports of $100 million per year ($1.1 million total unpaid), would be considered a greater risk to revenue than an entry of product D, which is associated with a 50 percent nonpayment rate and has imports of $2 million per year ($1 million total unpaid), even though a given entry of product D is statistically far likelier to become associated with an uncollected bill. 
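The product C and product D comparison above can be verified with simple arithmetic:

```python
# Figures from the product C / product D example in the text.
unpaid_c = 0.011 * 100_000_000   # product C: 1.1% nonpayment on $100M per year
unpaid_d = 0.50 * 2_000_000      # product D: 50% nonpayment on $2M per year

# Under a total-uncollected-dollars definition of risk, product C
# (about $1.1 million unpaid) ranks above product D (about $1.0 million
# unpaid), even though any single entry of product D is far likelier
# to go unpaid.
assert unpaid_c > unpaid_d
```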
An entry of product E, a product for which all uncollected duties date from entries occurring 10 years ago, would be considered as presenting a similar risk to revenue as an entry of product F, for which the same amount of uncollected duties exists but from entries occurring 1 or 2 years ago—even if all entries of product E have had duties paid on time over the past 9 years. Analyzing data on entries subject to AD/CV duties provided by CBP, we applied standard statistical methods to explore nonpayment risk and found that controlling for a range of country, product, and other entry characteristics explains much of the risk of AD/CV duty nonpayment for the time periods we evaluated. While we used a number of diagnostic tests to assess the stability and predictive power of the risk factors estimated by our model, additional data and alternative modeling approaches could produce different results. Specifically, among other things, our analysis shows the following: Entries of products from countries other than China were estimated to be likelier to be associated with AD/CV duty nonpayments. These risk levels vary over time, meaning that some past risks are not contemporary risks. Products other than the five from China associated with cases that CBP identifies as presenting the highest risk of nonpayment of AD/CV duties were both estimated to be likelier to be associated with nonpayments of such duties and to represent greater losses when nonpayment occurs. These risks also vary over time. Other entry characteristics, such as the dollar value of the importer’s goods subject to duties, the use of a bond instead of cash to pay initial estimated duties, and the size of the final duty bill increase over the initial estimated duties were each estimated to be significantly associated with the entry’s overall risk level for uncollected duties. See figure 12 for additional examples, and see appendix II for further discussion. 
Because it singles out China, based on cumulative data over many years, and does not control for changes in risk factors over time, CBP’s risk analysis may lead CBP officials to misconstrue—overestimate or underestimate—the risk associated with an entry’s country of origin. Our analysis estimates that imports of products from China present a risk in terms of the likelihood of nonpayment, as well as the dollar loss of nonpayment, but it also estimates that imports of products from other countries may actually pose a greater risk in some cases—all other entry characteristics being equal. Our analysis also shows that risk factors, including the risk associated with country of origin, vary over time. While controlling for other entry characteristics, we computed the additional expected loss associated with the country of origin of an entry. We did this by computing probability of loss and loss given nonpayment using two regression models. To investigate whether estimated risks change over time, we divided the dataset into two 5-year periods. For 2009–2013, entries of imports from China accounted for the vast majority of uncollected bills. However, as figure 13 shows, the estimated risk of nonpayment on a given entry from China was actually lower than for products from other countries, holding all other entry characteristics equal. Specifically, our analysis estimates the following for the time periods we evaluated: For many types of products, entries of imports from China are not likelier to result in unpaid duties than otherwise identical entries of imports from other countries. While imports from China account for about 84 percent of unpaid AD/CV duties associated with entries during fiscal years 2009 through 2013, as of May 12, 2015, some of the apparent risk from these entries can be explained by the large volume of imports from China subject to AD/CV duties. 
In addition, certain products imported in large volume from China in 2009–2013, such as preserved mushrooms, are associated with increased probability of nonpayment. Controlling for such high-risk product types, in addition to the other shipment characteristics discussed below, shows that relatively little risk is associated directly with an entry’s being from China. While the average unpaid AD/CV duty bill associated with imports from China was more than 23 times larger than the average such bill associated with imports from Mexico in 2009–2013, the estimated likelihood of nonpayment was somewhat lower for imports from China, and the estimated dollar loss per nonpayment was identical for imports from the two countries (see fig. 13). In other words, an entry from Mexico with the same characteristics as a given entry from China, such as entry size and product type, had slightly greater estimated risk. However, this result is not apparent without a comprehensive analysis of data, such as the regression model we developed, because the typical entry from Mexico had very different characteristics than the typical entry from China. Figure 13 shows how entries from certain countries have greater estimated risk than those from China in terms of probability of nonpayment. As shown in the chart, otherwise identical entries of products from Denmark, Mexico, Japan, India, Thailand, and Vietnam are associated with a greater probability of nonpayment than entries of products from China in the 2009–2013 period. Our analysis further shows that the estimated nonpayment risk associated with the country of origin of an import can change considerably over time. As figure 13 demonstrates, the nonpayment risks associated with country of origin in the two periods we examined were significantly different. In 2004–2008 the estimated nonpayment risks associated with imports from the United Arab Emirates were nearly identical to the estimated risks associated with imports from China. 
In this earlier period, estimated losses per nonpayment were much lower for imports from India and Denmark than for imports from China; in addition, while the estimated probabilities of nonpayment for imports from Thailand and Vietnam remained higher, entries of products from these countries were associated with lower estimated losses per nonpayment. India, Thailand, and Vietnam all had equal loss per nonpayment and greater probability of nonpayment than China in 2009–2013. Thus, our model shows that imports from several countries became riskier relative to Chinese imports over the time periods compared. Most of the products in the five cases that CBP highlights in its 2014 report as presenting the greatest risk of generating unpaid AD/CV duties were not among the products that our models estimated as presenting the greatest risk of duty nonpayment. Controlling for other entry characteristics, we compared the estimated probability of nonpayment and the dollar loss when nonpayment occurs for products associated with more than 15 delinquent bills. We also controlled for whether a product’s country of origin was China or some other country. Figure 14 illustrates the results of our analysis across several products; see appendix II for all products included in our model. Two of the five products from China in cases identified by CBP in its 2014 report as presenting the highest risk—crawfish and honey—were not associated with any additional estimated risk of nonpayment in the 2009–2013 period compared with other products. Further, the estimated risk posed by wooden bedroom furniture from China was lower than the estimated risk associated with otherwise identical entries of many other products. Our analysis shows that the estimated risk in 2009–2013 associated with polyethylene retail carrier bags was greater than the estimated risk associated with all five highest-risk cases from China identified by CBP. 
In addition, steel nails from countries other than China and wire hangers from China were associated with a greater estimated probability of nonpayment than three of the five products identified as riskiest by CBP, and woven ribbons from countries other than China were associated with a substantially larger loss per nonpayment than all five. Our analysis further shows that estimated product-associated risk can change over time. By examining these changes, CBP would be able to more accurately assess risk of loss due to nonpayment of AD/CV duties. As figure 14 demonstrates, the estimated product-associated risks posed in 2004–2008 were very different compared with the same risks in 2009–2013. During the 2004–2008 period, all five products identified by CBP as riskiest were associated with a greater estimated probability of nonpayment than other products. CBP does not comprehensively examine the extent to which key entry characteristics other than country of origin and product type are associated with nonpayment risk. This reduces CBP’s ability to accurately assess the likelihood of nonpayment risk and the relative significance of risk factors associated with country of origin and product type. Using data provided by CBP, we identified a group of entry characteristics other than country of origin or product type associated with nonpayment risk, such as the length of an importer’s entry history and number of previous delinquencies. Because these other entry characteristics correlate with both nonpayment risk and certain product types and countries of origin, controlling for these other characteristics is necessary to avoid incorrectly overstating or understating the risks associated with country of origin and product type. 
Our analysis suggests that, for products entered between 2009 and 2013, six entry characteristics, in addition to country of origin and product type, were significantly associated with either estimated likelihood of nonpayment or estimated size of the loss per nonpayment, or both. These six entry characteristics are (1) the size of the final duty bill, (2) the dollar value of the goods being imported, (3) the length of importer history, (4) the count of previous entries from the importer, and the number of previous delinquent bills from (5) the same importer and (6) the same manufacturer. (See fig. 15.) Our analysis suggests that these six characteristics stayed largely consistent relative to one another over time—each remained associated with a similar relative likelihood of nonpayment and loss per nonpayment during the 2004–2008 period. In contrast, the estimated risk associated with use of a bond in lieu of cash to pay initial estimated duties was not consistent over time. Bond use was associated with a large decreased risk of nonpayment in 2004–2008 compared with 2009–2013, when bond use had no estimated positive or negative association with risk. CBP does not proactively and routinely use its data to identify entries at risk of potentially uncollectible AD/CV duties, for example, by developing quantitative risk assessment tools that could be used consistently on newly arriving entries to help assess when additional risk mitigation actions may be warranted. CBP officials said that currently port officials investigate trends unsystematically, such as through anecdotal evidence from port officials about problems with certain importers or products. When a risk is identified through this process, these officials said that CBP can increase the bonding requirements for an entry, which reduces CBP’s exposure to potential losses from unpaid duties. 
As previously noted, federal internal control standards state that agency managers should comprehensively identify risks, analyze them for their possible effects, and design responses as necessary to mitigate those risks. Because governmental, economic, industry, regulatory, and operating conditions continually change, the standards also note that risk management efforts should include mechanisms to identify and deal with ongoing changes in the likelihood or significance of risk factors. To date, however, CBP has undertaken only a few limited efforts to use its data to help identify and mitigate the risk of uncollected AD/CV duties. For example, as discussed previously, CBP's AD/CV Collections Team initiated Operation Lost and Found in April 2015 to take advantage of specialized databases and information maintained by the National Targeting Center and the AD/CV National Targeting and Analysis Group to identify active importers with ties to inactive, delinquent AD/CV debtors. In 2015, CBP also briefly utilized a Department of Defense contract with the Jet Propulsion Laboratory and Johns Hopkins University to examine the use of systematic data analysis techniques to reduce AD/CV duty evasion. CBP officials said a 3-week trial resulted in significant findings, and CBP has determined that systematic data analysis techniques may be useful for identifying importers attempting to evade AD/CV duties; however, CBP has not yet determined whether and how such analysis might be used to improve AD/CV duty collection. By implementing a more comprehensive risk analysis system using standard statistical methods, such as those we used in building our proof-of-concept model, CBP could better assess nonpayment risk with its current data. Doing so would enable CBP to identify a more complete list of risk factors ranked in order of priority. Such analysis would support a decision-making process enabling CBP to take more effective actions to mitigate nonpayment risk. 
Any such actions would also need to take into consideration U.S. international trade obligations as well as relevant U.S. court rulings. Most of the factors we identified that explain nonpayment risk are known to CBP at the time an entry arrives and CBP collects initial AD/CV duties, such as the importer's history and the entry size. The remaining factors are known to CBP at the time it issues the final AD/CV duty bill, such as the increase in the final billed amount relative to initial estimated duties. We determined that these entry characteristics predict nonpayment risk well when using 4 or more years of historical data and could be used to predict payment outcomes on future bills. Therefore, as figure 16 illustrates, CBP could use regression models similar to those we developed as an empirical tool for weighting the importance of risk factors for AD/CV entries and importers. For example, for newly arriving entries, these weighted risk factors, updated periodically, would be multiplied by the observed characteristics of the entry to yield estimated values for (1) the probability of nonpayment and (2) the dollar loss if nonpayment occurs. Multiplying these two estimated values would yield expected loss—an estimate of the risk of uncollected duties constituting a concise risk score that is comparable across all entries of goods subject to AD/CV duties.

CBP could use such a risk score strategically to mitigate nonpayment risk in a variety of ways, including, but not limited to, the following four ways: Triggering the need for an entry review by officials. As described above, when a newly arriving entry's estimated risk score exceeds a predetermined threshold, CBP could begin a process of qualitative review by officials with appropriate expertise to determine whether a larger bonding requirement is appropriate and would be consistent with U.S. international trade obligations and relevant U.S. court rulings. 
Our analysis shows that a substantial proportion of risk associated with the likelihood of nonpayment and some of the risk associated with the size of the loss can be explained with information available to CBP at the time an entry arrives, even with our limited dataset. More sophisticated models that could further incorporate CBP’s institutional expertise would likely be able to predict risk even more effectively. Such predictive modeling as described above—or a similar approach—has the potential to be a valuable tool because, as CBP officials noted, the time of entry is when CBP has the best opportunity to enforce collection of the duties owed to the U.S. government. Targeting high-risk duty bills. When final duties are assessed for an entry and a bill is issued, CBP may be able to enhance its collection efforts by targeting high-risk bills using additional data available at that time, recalculating the risk score assigned at time of entry. As mentioned earlier, these additional data include, for example, risk factors such as the length of the review process and the amount of any increase over the initial estimated duties paid when the entry first arrived. Assessing ongoing aggregate risk posed by specific importers. As we discussed earlier, many of the importers with the largest total unpaid AD/CV duty bills imported 200 or more entries over a period longer than a year. CBP may be able to use a predictive model to assess the ongoing aggregate risk posed by an importer, even if final bills have not yet been issued and, if applied in a manner consistent with U.S. international trade obligations and relevant U.S. court rulings, use this information to adjust the importer’s continuous entry bond requirements. This may be particularly useful to mitigate risk from high-volume importers from whom individual entries do not present large expected losses. Assessing the effectiveness of policy changes intended to mitigate risk. 
CBP could use regression models such as the one we developed—or similar approaches—to examine its data retrospectively to assess the impact of policy changes intended to mitigate the risk of uncollected AD/CV duties. For example, CBP could assess whether there is a meaningful change in this risk for a specific group of interest, such as bond users, associated with a particular policy intervention, such as the 2006 and 2011 changes to restrict access to bonds used in lieu of cash to pay initial estimated duties.

CBP officials cited limited staff resources and data systems as obstacles to systematically modeling and predicting the risk of AD/CV duty nonpayment. These officials said that CBP may not have the staff and resources that would be required to engage in such an effort. They noted that CBP's recent efforts to use data more strategically have been limited and depend on leveraging expertise from different units of CBP. In addition, these CBP officials said that their data systems store information in a manner that is difficult to access and analyze and may be missing certain data necessary to undertake predictive modeling. Moreover, CBP's current information systems make it difficult to do large-scale data runs and cannot generate real-time output. Nonetheless, we found that standard statistical methods were sufficient to predict nonpayment risk, and we were able to combine all necessary data pulled from CBP databases using a commercial computer workstation and software; such analysis would not necessarily require CBP to invest in special computing systems or prohibitively expensive software. Finally, the risk-score method described above would not require real-time data—only periodic checks to update risk-factor weighting. 
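As a minimal illustration of the expected-loss risk score described above, the sketch below multiplies an estimated probability of nonpayment by an estimated dollar loss given nonpayment and compares the result to a review threshold. The weights, threshold, and entry characteristics are all invented for illustration; in practice the weights would come from periodically refit regression models, not hard-coded values.

```python
# Illustrative weights; in a real system these would be refit periodically
# from regression models, not hard-coded.
PROB_WEIGHTS = {"intercept": 0.02, "prior_delinquencies": 0.15, "bond_used": 0.05}
LOSS_WEIGHTS = {"intercept": 5_000.0, "entry_value": 0.10}

def risk_score(entry):
    """Expected loss = P(nonpayment) x estimated dollar loss given nonpayment."""
    p = (PROB_WEIGHTS["intercept"]
         + PROB_WEIGHTS["prior_delinquencies"] * entry["prior_delinquencies"]
         + PROB_WEIGHTS["bond_used"] * entry["bond_used"])
    p = min(max(p, 0.0), 1.0)  # clamp linear-model output to a valid probability
    loss = LOSS_WEIGHTS["intercept"] + LOSS_WEIGHTS["entry_value"] * entry["entry_value"]
    return p * loss

REVIEW_THRESHOLD = 2_000.0  # dollars of expected loss; hypothetical cutoff

entry = {"prior_delinquencies": 3, "bond_used": 1, "entry_value": 250_000}
score = risk_score(entry)              # 0.52 probability x $30,000 loss = $15,600
flag_for_review = score > REVIEW_THRESHOLD
```

Because expected loss is denominated in dollars, the same score is comparable across entries of different sizes, which is what makes a single review threshold workable.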
As discussed previously, CBP officials said that a past WTO ruling has constrained CBP's use of bonds because the WTO determined that CBP's application of an enhanced bonding policy to all shrimp importers from certain countries was inconsistent with U.S. obligations at the WTO. CBP officials explained that as a result of the WTO's ruling, CBP now determines the requirement for an importer to obtain a bond on a case-by-case basis. However, CBP officials told us that the targeting analytics they currently perform—analysis specifically tailored to the individual shipment—would not be a blanket approach like past enhanced bonding efforts on shrimp. Specifically, CBP officials explained that although the WTO previously found the manner in which CBP applied its enhanced bonding requirement to be inconsistent with WTO principles, the WTO did not disagree with the concept of appropriately addressing risk through revised bonding requirements. CBP officials stated that CBP might be able to make good use of the statistical analysis methods GAO presented. For example, such an analysis could be used to assess a requirement for additional security in the form of bonds as part of an enhanced bonding requirement, according to these officials. However, they cautioned that the use of bonding in this way would have to be carefully tailored in order to avoid a legal challenge.

We estimate the amount of uncollected duties on entries from fiscal year 2001 through 2014 to be $2.3 billion. While CBP collects on most AD/CV duty bills it issues, it collects, on average, only about 31 percent of the dollar amount owed. The large amount of uncollected duties is due in part to the long lag time between entry and billing in the U.S. retrospective AD/CV duty collection system, with an average of about two and a half years between the time goods enter the United States and the date a bill may be issued. 
Large differences between the initial estimated duty rate and the final duty rate assessed also contribute to unpaid bills, as importers receiving a large bill long after an entry is made may be unwilling or unable to pay. In 2015, CBP estimated that about $1.6 billion in duties owed was uncollectible. By not fully collecting unpaid AD/CV duty bills, the U.S. government loses a substantial amount of revenue and compromises its efforts to deter and remedy unfair and injurious trade practices.

CBP faces a number of challenges in its efforts to improve its collection of AD/CV duties. CBP does not know the extent to which it liquidates entries in an untimely manner, nor does it know the effects such liquidations have on revenue. To mitigate the number of entries liquidated at the initial estimated duty rate instead of the final duty rate set by Commerce, CBP has begun an initiative to centralize and improve oversight of liquidation processing at the ports; however, CBP does not systematically collect and analyze data from this effort or assess its impact on revenue. CBP said it plans to collect and analyze data for management use once the initiative is fully implemented at the beginning of fiscal year 2017 but, as of May 2016, had not issued guidance to this effect. Without systematically collecting and analyzing data on a regular basis to ascertain liquidation trends at ports of entry and offices, CBP cannot determine the extent to which premature and deemed liquidations are taking place or take timely and effective action to avoid such liquidations and the potential revenue loss they represent. In separate but related efforts, CBP has created an AD/CV duty collections team, plans to collect more information about importers, and has taken steps to centralize the management of bonds—after revising its bonding formulas to better enable it to protect AD/CV duty revenue. These efforts, however, have yielded limited results to date. 
Though its institutional knowledge about the nature of this risk is deep, CBP has not used its extensive relevant data to conduct a comprehensive risk assessment. The risk analysis it has presented in reports on AD/CV duties is not useful for mitigating AD/CV duty nonpayment risk because it merely examines a tally of the total dollars in AD/CV duties owed but does not consider factors related to the likelihood of nonpayment for any given entry and the size of revenue loss if nonpayment occurs. Mathematically, the likelihood of nonpayment and the size of the loss if nonpayment occurs are the two components of expected loss. Our analysis shows that a substantial proportion of nonpayment risk can be explained with information available to CBP at the time an entry arrives, and even more could be explained at the time a final bill is issued. Further, we found that CBP's data are suitable for conducting such analyses for risk predictions on future entries. More sophisticated models that CBP could develop, incorporating its institutional expertise, would likely be able to predict risk even more effectively than ours. As our analysis demonstrates, a more comprehensive analysis of CBP data related to AD/CV duties is feasible and could help CBP better identify key risk factors associated with nonpayment risk. Without such a risk analysis, CBP is also missing opportunities to take appropriate action consistent with its mission to facilitate compliant trade while collecting revenue.

We recommend that the Commissioner of CBP take the following three actions: 1. To better manage the AD/CV duty liquidation process, CBP should issue guidance directing ACT to (a) collect and analyze data on a regular basis to identify and address the causes of liquidations that occur contrary to the process or outside the 6-month time frame mandated by statute, (b) track progress on reducing such liquidations, and (c) report on any effects these liquidations may have on revenue. 2. 
To improve risk management in the collection of AD/CV duties and to identify new or changing risks, CBP should regularly conduct a comprehensive risk analysis that assesses both the likelihood and the significance of risk factors related to AD/CV duty collection. For example, CBP could construct statistical models that explore the associations between potential risk factors and both the probability of nonpayment and the size of nonpayment when it occurs. 3. To improve risk management in the collection of AD/CV duties, CBP should, consistent with U.S. law and international obligations, take steps to use its data and risk assessment strategically to mitigate AD/CV duty nonpayment, such as by using predictive risk analysis to identify entries that pose heightened risk and taking appropriate action to mitigate the risk.

We provided a draft of this report for review and comment to CBP, Commerce, Treasury, and the United States International Trade Commission. CBP was the only agency that provided formal agency comments, which are reproduced in appendix IV. In its comments, CBP concurred with all three of our recommendations. CBP also identified several actions it intends to take in response to the recommendations. For example, in response to our first recommendation, CBP said that its Offices of Trade and Field Operations will employ the annual self-inspection program to identify the causes of premature and deemed AD/CV duty liquidations. CBP set forth two dates for completing its analysis of incorrect liquidations and addressing the results: September 30, 2016, to complete an initial analysis, and September 30, 2017, to complete an expanded analysis. 
In response to our second and third recommendations, CBP said that it has initiated a comprehensive statistical risk analysis that assesses both the likelihood and significance of risk factors related to AD/CV duty collection and will use this risk analysis to develop a predictive model to identify, and take appropriate action to mitigate, the risk from specific entries that pose a higher likelihood of nonpayment of final AD/CV duties. While concurring with our second and third recommendations, CBP expressed concern that the statistical methodology we used may have produced results that understate the impact of duty evasion issues relating to high-risk imports from China. However, CBP did not identify any specific limitations in our methodology. CBP said that it will conduct its own analysis using statistical methods based on country of origin and other risk factors to identify high-risk entries and mitigate the risk of nonpayment of final AD/CV duties. We encourage CBP to conduct its own analysis of risk factors, in keeping with our recommendation. As discussed in this report, our model is one of many possible models, and risk factors are likely to change over time. Our model estimates that, for the 2009–2013 period, entries from China were not associated with additional nonpayment risk relative to otherwise identical entries of most products from other countries; however, we found that entries of certain specific products from China were associated with substantial increases in nonpayment risk. CBP, Commerce, Treasury, and the United States International Trade Commission all provided technical comments, which we incorporated in the report as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to appropriate congressional committees, the Commissioner of CBP, the Secretary of Commerce, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or GianopoulosK@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

This report examines (1) the status and composition of uncollected antidumping (AD) and countervailing (CV) duties, (2) the extent to which U.S. Customs and Border Protection (CBP) has taken steps to improve its billing and collection of AD/CV duties, and (3) the extent to which CBP uses and could further use its data to assess and mitigate the risk to revenue from potentially uncollectible AD/CV duties. To examine the status and composition of uncollected AD/CV duties, we analyzed CBP data on all open, delinquent duty bills for entries from fiscal year 2001 through fiscal year 2014, as of May 12, 2015. For this purpose, we combined three datasets from CBP's Automated Commercial System (ACS) containing information on entries and billed amounts associated with entries. ACS is used by CBP to track, control, and process all goods entering the United States. The first ACS dataset contained AD/CV duty entry data; the second contained final assessed AD/CV duty rate data; and the third contained importer AD/CV duty billing data. As part of our examination of the status and composition of uncollected AD/CV duties, we analyzed the extent to which CBP writes off uncollectible bills. The data for this part of the analysis constitute a fourth dataset, which was also taken from ACS and was provided as of October 2015. 
The definition of "uncollected duties" that we use in this report differs slightly from the definition used in our 2008 report. That report defined "uncollected duties" as including all open, unpaid bills for AD/CV duties. For this report, we narrowed that definition to all open, delinquent bills for AD/CV duties. According to statute, amounts due to CBP are considered delinquent if they remain unpaid 30 days after issuance of the bill for such a payment. As in our 2008 report, we excluded softwood lumber from Canada from our analysis because the AD/CV duty collection processes for this product are established through a binational agreement, which is outside the typical practice. The CBP data we analyzed to determine collection rates for AD/CV duty bills included key characteristics such as the bill amount, importer information, dates of entry, and dates and amounts of liquidation. Using these data, we calculated two different collection rates: (1) the weighted average percentage of the number of bills collected and (2) the weighted average percentage of the dollar amount collected. To calculate these rates, we included data on entries where the final duty rate was higher than the initial estimated duty rate, indicating that a bill would have been issued. Where a bill was issued but no data existed on an associated delinquent bill, we assumed the bill was paid. Because the entry and billing data used to calculate these rates are a snapshot as of May 12, 2015—the date our data request was filled—these collection rates are subject to change. For example, we included data on entries from 2013 and 2014; however, about 42 percent of the entries from 2013 and about 77 percent of the entries from 2014 had not been liquidated as of March 2016. As more entries from these years are liquidated, the collection rates may change due to a varying ratio of paid to unpaid duty bills; in addition, the proportion of liquidations resulting in any bill at all may change. 
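The distinction between the two collection rates described above can be sketched with toy numbers. The bills below are invented, but they show how an agency can collect most bills by count while collecting a much smaller share of the dollars owed—the pattern the report finds for AD/CV duties.

```python
# Hypothetical bills as (amount_billed, amount_collected) pairs.
bills = [
    (1_000.0, 1_000.0),
    (500.0, 500.0),
    (2_000.0, 2_000.0),
    (100_000.0, 10_000.0),  # one large, mostly unpaid bill
]

# Rate 1: share of bills collected in full (by count).
bills_collected_rate = sum(1 for billed, paid in bills if paid >= billed) / len(bills)

# Rate 2: share of the dollar amount collected (dollar-weighted).
dollar_collected_rate = (sum(paid for _, paid in bills)
                         / sum(billed for billed, _ in bills))
```

Here 3 of 4 bills (75 percent) are fully collected, yet only about 13 percent of the dollars are, because a single large delinquent bill dominates the dollar-weighted rate.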
After combining CBP's data, we used these data to analyze several other characteristics of unpaid bills, including their distribution by dollar amount, the top products associated with them, the importers with the highest amounts of unpaid bills, the average time between entry and liquidation across all entries, the frequency with which large rate changes result in unpaid bills, and the age of the bills. In each analysis, where relevant, we determined the mean and median amounts for comparison. Our analysis covered more than 41,000 delinquent bills. We determined that these data were sufficiently reliable for the purposes of our report. In addition to analyzing data to determine the status and composition of uncollected AD/CV duties, we reviewed relevant statutes, regulations, and agency reports and interviewed CBP and Department of Commerce (Commerce) officials. To assess the reliability of the ACS data, we (1) performed electronic testing of required data elements, (2) reviewed existing information about the data and the systems that produced them, and (3) interviewed agency officials knowledgeable about the data and the systems that produced them. Our electronic testing consisted of automated checks to determine inconsistencies in the data. We identified several inconsistencies and performed follow-up interviews and analysis to resolve them. We found the ACS data to be generally reliable for purposes of our analysis, with several limitations that required the steps outlined below. To analyze the status and composition of uncollected AD/CV duties, we made several assumptions in order to process the data. We consolidated our data by unique combinations of entry number and AD/CV duty case number. Each AD/CV duty case number includes codes that indicate, separately, the relevant product and country of origin. However, the product code is not consistent between countries. 
For example, the product code for lemon juice when an entry is from Mexico is the same code used for sodium sulfate when an entry is from Canada. We constructed a database using a large list of case numbers provided by CBP. We then identified, where available, codes from every country corresponding to a given product description. We conducted a manual search for several missing case numbers. Because of limitations in CBP’s database of open bills from ACS, we were unable to determine which case number an open bill was associated with. Therefore, in order to avoid falsely attributing open bills to a given case, we dropped open bills associated with entries containing more than one AD/CV duty case number. While we found a relatively small number of bills containing more than one case number (4,224, or 8 percent of the data), dropping these bills means that our results somewhat understate the amount of uncollected duties. Specifically, our methodology may underestimate the amount of uncollected CV duties because, according to CBP, most CV entries also include goods subject to an AD case, but the reverse is not true. We restricted our analysis to entries that could have resulted in uncollected duties—that is, entries that were liquidated and billed. In describing the extent and nature of uncollected duties, we considered the principal amount due and any interest accrued in order to present the most comprehensive total picture of unpaid duties owed to CBP. However, in estimating the risk of nonpayment, we considered only the principal amount due and treated interest accumulated after liquidation as endogenous to the decision not to pay the bill. (See below for further details on our analysis of nonpayment risk.) We conducted a distinct assessment of the reliability of the write-off data because these data were provided separately. 
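The attribution rule described above—dropping open bills tied to entries that carry more than one AD/CV duty case number, so that every retained bill can be attributed to a case unambiguously—can be sketched as follows. The entry numbers, case numbers, and amounts are invented for illustration, not drawn from CBP data.

```python
# entry_cases maps each entry number to the AD/CV case numbers on that entry.
entry_cases = {
    "E1": {"A-570-001"},
    "E2": {"A-570-001", "C-570-002"},  # both an AD and a CV case on one entry
    "E3": {"A-201-003"},
}
open_bills = [
    {"entry": "E1", "amount": 4_000.0},
    {"entry": "E2", "amount": 9_000.0},
    {"entry": "E3", "amount": 1_500.0},
]

# Keep only bills whose entry carries exactly one case number; the rest
# cannot be attributed to a case and are dropped from the analysis.
kept = [b for b in open_bills if len(entry_cases[b["entry"]]) == 1]
dropped_amount = (sum(b["amount"] for b in open_bills)
                  - sum(b["amount"] for b in kept))
```

Tracking `dropped_amount` makes the cost of the rule explicit: every dropped dollar is a dollar by which the uncollected-duty total is understated, mirroring the report's caveat.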
We interviewed an agency official knowledgeable about the source and uses of these data and reviewed the agency's annual Performance and Accountability reports for fiscal years 2013–2015, which include CBP's financial statements and are audited by an external accounting firm. Using these data, we calculated the dollar amounts of AD/CV duties that CBP has written off by year. We determined that these data were sufficiently reliable for our purposes.

To examine the extent to which CBP has taken steps to improve its billing and collection of AD/CV duties, we obtained and analyzed data from ACS for entries from fiscal year 2001 through fiscal year 2014, as of May 12, 2015; reviewed relevant statutes, regulations, and agency reports; and interviewed CBP, Commerce, and Department of the Treasury (Treasury) officials. For example, we obtained CBP data showing the extent to which CBP liquidates entries prematurely as well as those it liquidates beyond the 6-month statutory time frame for liquidating AD/CV entries, and we interviewed CBP officials from the Office of Trade about these processing errors. However, as discussed in the report, the data were incomplete. We also obtained CBP documents about the establishment of the Antidumping and Countervailing Centralization Team (ACT) and the portal used by the team to identify applicable AD/CV entries for liquidation. As discussed in the report, we checked CBP data and identified 94 AD/CV entries during the period covered by our review for which entry and final liquidation had occurred unusually close together—approximately 30 days apart. We then asked CBP to check 20 of these entries to determine why the liquidations had occurred so quickly. CBP officials told us that 7 (about 35 percent of the 20 entries) had been prematurely liquidated. 
On the basis of that information, we asked CBP to provide additional information about the number of liquidations that had occurred prematurely and any that had occurred beyond the statutory 6-month time frame for liquidating entries. CBP provided information from a February 2015 analysis. However, as discussed in the report, prior to the ACT portal CBP had no means of accurately tracking the number of premature and deemed liquidations occurring. For that reason, the February 2015 analysis is not comprehensive in nature. To follow up on the finding from our 2008 report that CBP collects little information regarding importers of record, we examined CBP's planned revisions to its form 5106, which CBP uses to collect key importer of record information and make decisions regarding bonding and other matters. We discussed the planned revisions with CBP officials. Customs bonds are used to safeguard revenue and, according to CBP officials, play an important role in CBP's efforts to improve AD/CV collections. To follow up on another finding from our 2008 report—that CBP's standard bond formula provides little protection of AD/CV duty revenue—we met with three of the major associations that represent the companies (known as sureties) that issue customs bonds. We discussed, among other topics, how customs bonds are used by importers to pay for AD/CV duties, changes in the sureties' bond underwriting patterns that have occurred since our 2008 report, and CBP's introduction of an electronic bond. We also met with CBP officials to understand how CBP has made changes to address the concerns discussed in our 2008 report that the standard bonding formula provides little protection of AD/CV duty revenue. Two CBP offices currently play major roles in the management of customs bonds used to pay AD/CV duties: the Office of Trade and the Office of Finance. 
We discussed with officials from those offices CBP's efforts to centralize the management of all bonds and to change the bonding formulas to address concerns that the standard bonding formulas do not sufficiently protect revenue. We also obtained data showing how CBP has required AD/CV importers to obtain both continuous entry bonds and single transaction bonds to address the payment of unforeseen obligations to the U.S. government. As part of our analysis of CBP's AD/CV duty collection process, we examined bond use before and after the April 2006 through June 2009 suspension of the new shipper bonding privilege. To accomplish this, we combined two separate datasets. The first was from a Commerce database that documents new shipper reviews for fiscal years 2002 through 2015. The second was the previously described data from ACS containing information on entries and billed amounts associated with entries for fiscal years 2001 through 2014, as of May 12, 2015. Because the Commerce and CBP datasets did not always use the same format or spelling for the names of importers, we performed both automated and manual matching of importer names in both datasets to identify the universe of entries likely to be associated with a new shipper. We then produced summary statistics for the amounts of delinquent duties associated with new shippers as a group, with and without bonds, and comparable statistics for the amount of delinquent duties associated with all other shippers, with and without bonds. We also compared these data for the periods before and after the 2006 through 2009 suspension. Because the time frames associated with the Commerce and CBP datasets did not exactly coincide, we used in our analysis a time frame common to both: January 2002 through December 2013. We performed tests of the data and determined, based on those tests and interviews, that the data were sufficiently reliable for our analysis. 
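A crude sketch of the kind of automated importer-name matching described above: normalize names in both datasets, join on the normalized form, and leave anything that fails to match for manual review. The normalization rules and company names are illustrative only; real matching across Commerce and CBP records would need richer rules and human checking, as the report describes.

```python
import re

def normalize(name):
    """Crude normalization: uppercase, strip punctuation, drop common suffixes."""
    n = re.sub(r"[^A-Z0-9 ]", "", name.upper())         # remove punctuation
    n = re.sub(r"\b(INC|LLC|CO|CORP|LTD)\b", "", n)     # drop corporate suffixes
    return " ".join(n.split())                          # collapse whitespace

# Invented importer names standing in for the two datasets.
commerce_names = ["Acme Imports, Inc.", "Global Trade Co."]
cbp_names = ["ACME IMPORTS INC", "Globex Trading Ltd"]

cbp_index = {normalize(n): n for n in cbp_names}
matched, unmatched = {}, []
for name in commerce_names:
    key = normalize(name)
    if key in cbp_index:
        matched[name] = cbp_index[key]
    else:
        unmatched.append(name)  # left for manual review
```

The automated pass resolves the spelling and formatting differences it can; the `unmatched` list is exactly the residue the report's manual matching step would handle.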
To examine the extent to which CBP assesses and mitigates the risk to revenue from potentially uncollectible AD/CV duties, we combined the three datasets from ACS into a single database. The database is associated with entries from fiscal year 2001 through fiscal year 2014, as of May 12, 2015. To develop a reasonable risk measurement for use in addressing the risk of AD/CV duty nonpayment, we first examined agency goals and criteria (including federal internal controls criteria) and identified expected loss per shipment (in terms of uncollected duties) as our measure. We also reviewed CBP's reports to Congress. To calculate an expected loss score, we mathematically decomposed expected loss into two measurable components: (1) the likelihood of AD/CV duty nonpayment and (2) the amount of duties not paid contingent on nonpayment (loss per nonpayment). In developing a regression model to analyze each of these two risk measures, we created several variables that describe importer and manufacturer characteristics derived from variables in ACS. To determine associations with the likelihood of duty nonpayment, we regressed the binary variable "delinquent" on country, product, and other shipment characteristics. As many of our variables are indicator variables, we found that a linear fit was a good approximation of a logistic regression and had the advantages of being less computationally intensive and producing coefficients that fit intuitively into a risk scorecard. To determine associations with the size of nonpayment, we regressed the continuous variable "amount delinquent" on the same independent variables. To determine whether our models are appropriate for forecasting, we ran a series of regressions over 2-year periods. We selected several risk factors for review based on the size and statistical significance of their coefficients in the full 2001–2014 period regression model. 
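As a toy version of the linear probability model described above, the sketch below fits an ordinary least-squares line to a 0/1 delinquency indicator using a single invented predictor (number of previous delinquencies). The real models regressed the indicator on many country, product, and shipment characteristics; a one-variable fit simply makes the mechanics visible.

```python
def ols_fit(xs, ys):
    """Simple least-squares fit of y = a + b*x. When y is a 0/1 delinquency
    indicator, the fitted line is a linear probability model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Toy data: x = number of previous delinquencies, y = 1 if this bill went unpaid.
x = [0, 0, 1, 1, 2, 2, 3, 3]
y = [0, 0, 0, 1, 0, 1, 1, 1]
a, b = ols_fit(x, y)
p_hat = a + b * 2  # predicted nonpayment probability at 2 prior delinquencies
```

The coefficient `b` reads directly as "each additional prior delinquency adds b to the estimated nonpayment probability," which is what makes linear-fit coefficients drop neatly into a risk scorecard.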
We found that these risk factors, as evidenced by their coefficients, are generally stable over time in our year-by-year regression models, retaining the same sign and comparable magnitudes. However, some risk factors also change over time, for example, showing large changes in magnitude (e.g., Vietnam) or changing sign from positive to negative or vice versa (e.g., Mexico). Our regression models do not establish whether a given factor causes nonpayment or is merely correlated with this risk. The models provide an example of how CBP data could be systematically analyzed to provide insights into bill delinquency patterns, but we do not intend them to be prescriptive. To determine appropriate periods of time for analysis, we examined the effect of including data from a range of periods. We sequentially expanded the regression analysis to include data from 2011–2013, adding additional years up to and including 2005–2013, as well as groupings of 2005–2009 and 2004–2008. Expanding the period of analysis may have several effects. For example, increasing the available data will generally result in more accurate estimates and therefore more accurate models. On the other hand, longer periods may be associated with greater amounts of systemic change in risk factors and therefore yield less accurate models. For each period, we constructed the model on one portion of our data and tested the model’s ability to identify the likelihood of nonpayment for entries outside of this sample (“out-of-sample predictive power”). We measured the model fit for this cross-validation process for each period and found that the model’s out-of-sample predictive power improved until we included 5 years of data (2009–2013), at which point the predictive power roughly plateaued with an R-squared value estimated at 0.75–0.77 out of sample for probability of nonpayment. As a result, we believe that models constructed with 5–9 years of past data would be reasonable. 
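The cross-validation exercise described above, in which the training window expands one year at a time and predictive power is scored out of sample, could be sketched as follows on synthetic data. The year range, variable count, and resulting R-squared values here are illustrative only, not the 0.75–0.77 figures from the actual analysis.

```python
import numpy as np

def out_of_sample_r2(X_train, y_train, X_test, y_test):
    """Fit by least squares on the training window; score R^2 on held-out data."""
    Xt = np.column_stack([np.ones(len(X_train)), X_train])
    beta, *_ = np.linalg.lstsq(Xt, y_train, rcond=None)
    Xs = np.column_stack([np.ones(len(X_test)), X_test])
    resid = y_test - Xs @ beta
    return 1 - np.sum(resid**2) / np.sum((y_test - y_test.mean())**2)

rng = np.random.default_rng(1)
years = np.repeat(np.arange(2009, 2015), 400)   # synthetic entries, 400 per year
X = rng.normal(size=(len(years), 4))            # stand-in risk factors
y = X @ np.array([0.5, -0.3, 0.2, 0.1]) + rng.normal(scale=0.3, size=len(years))

# Hold out the final year; expand the training window back one year at a time.
test = years == 2014
results = {}
for start in range(2013, 2008, -1):
    train = (years >= start) & (years <= 2013)
    results[start] = out_of_sample_r2(X[train], y[train], X[test], y[test])
print({start: round(r2, 2) for start, r2 in results.items()})
```

Plotting or tabulating the scores by window length would show where predictive power plateaus, which is how the 5-year window was identified in the text.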
We selected 5-year periods (2004–2008 and 2009–2013) for purposes of comparing useful models from distinct periods. This allowed us to compare datasets from two equal periods, one corresponding to the period of our 2008 report and the other corresponding to the most recent period for which complete data were available. Because fewer data are available to CBP at entry than at liquidation, we also ran out-of-sample tests without the variables for “net billed amount” and “rate review period length.” We found that the model remains useful at entry: Our ability to predict probability of nonpayment was unaffected, while our ability to predict the loss per nonpayment was reduced by a moderate amount. This reduction was expected given that loss per nonpayment is a function of the amount billed, and the amount billed is determined by CBP after entry, based on the final duty rate set by Commerce. Using our full set of data, we found that duty rate increases and decreases for many product types were systematically predictable. We presented the results of our regression analysis to CBP on two occasions. Based on CBP’s comments, we adjusted our methodology. As discussed previously, we assessed the reliability of the ACS data and had to make several assumptions in order to process the data. Beyond the assumptions discussed above, in order to perform our regression analysis, we had to make several additional assumptions. These are discussed in appendix II. We conducted this performance audit from January 2015 to July 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
We conducted a systematic statistical analysis of U.S. Customs and Border Protection (CBP) data to identify factors affecting the risk of antidumping (AD) and countervailing (CV) duty nonpayment. CBP provided data we requested from its Automated Commercial System (ACS)—CBP’s data system for tracking, controlling, and processing all goods imported into the United States—as well as from CBP billing data, and we consolidated these data files into a single database. To demonstrate how a statistical model could be constructed that explores the association between potential risk factors and the potential for nonpayment, we used CBP’s data to develop two regression models, one to estimate the likelihood of nonpayment for any given entry and one to estimate the size of revenue loss if nonpayment occurs. Mathematically, the likelihood of nonpayment and the size of the loss if nonpayment occurs are the two components of expected loss. Our regression models do not establish whether a given factor causes nonpayment or is merely correlated with this risk. To be useful for risk management, such a model would need to be able to predict future nonpayment risk. As a result, to assess the ability of the model to predict future losses, we aggregated, cross-validated, and analyzed the data for two separate 5-year periods and conducted qualitative assessments of parameter stability. Our models provide a demonstration of how CBP could systematically analyze its data to provide insights into bill delinquency patterns, but we do not intend them to be prescriptive. Our analysis merely demonstrates that a substantial proportion of nonpayment risk can be explained with information available to CBP at the time an entry arrives and, later, at liquidation, even with the limited dataset that we used. More sophisticated models that could further incorporate CBP’s institutional expertise would likely be able to predict risk even more effectively. 
Our analysis is based on data collected in ACS, CBP’s data storage system for imports subject to AD/CV duties. We combined these data with information that CBP stores on open AD/CV duty bills. Our data included information on entries from fiscal year 2001 through fiscal year 2014. For this analysis, with the exception of a product codes database that we constructed from CBP sources, we did not incorporate any external or additional databases. Prior to conducting our analysis, we assessed CBP’s databases and found them to be generally reliable for purposes of our analysis. While we used a number of diagnostic tests to confirm the stability and predictive power of the risk factors estimated by our model, additional data and alternative modeling approaches could produce different results. Our model is based on a number of statistical assumptions, some of which may not correspond to the underlying process that generates AD/CV duty losses from nonpayment. These statistical assumptions include the linearity of risk factors in our functional form, the potential that variables omitted because they were not in CBP’s databases or are otherwise difficult to quantify would change estimates of risk, and the potential sensitivity of our statistical inference to deviations from normality. We processed the data from CBP by taking the following steps, which required several additional assumptions as noted: 1. We consolidated our data by unique combinations of entry number and AD/CV duty case number. 2. We identified the product type associated with each case number. Each AD/CV duty case number includes codes that indicate, separately, the relevant product type and country of origin. However, the product code is not consistent between countries. For example, the product code for lemon juice when an entry is from Mexico is the same code used for sodium sulfate when an entry is from Canada. 
We constructed a database using a large list of case numbers provided by CBP. We then identified, where available, cases from every country corresponding to a given product description. We conducted a manual search for several missing case numbers. 3. We dropped open bills associated with entries containing more than one AD/CV duty case number in order to avoid falsely attributing open bills to a given case. Because of limitations in CBP’s database of open bills, we were unable to determine for any open bill its corresponding case number. 4. We removed a small number of entries that did not have information recorded for the dollar value of the product being imported, which is a necessary component of AD/CV duty rate determination. 5. We restricted our analysis to entries subject to AD/CV duties that could have resulted in uncollected duties—that is, entries that were liquidated and billed. Because of data limitations, we estimated billed amounts by summing the final assessed duty with accumulated interest and subtracting any initial payment; we retained only entries where this “net bill” amount was greater than $0 (zero). While analytically imperfect, this was a reasonable approach, according to CBP officials. 6. We considered only the principal amount due for open bills. In assessing the amount of uncollected duties, we assumed that interest accumulated after liquidation follows from the decision not to pay the bill. Because of limitations in the data provided by CBP, we are unable to account for the proportion of duties owed that may have been covered by surety bonds and thus potentially collectible by CBP in the event of delinquency. 7. We logarithmically transformed variables that contained long-tail distributions. 8. We identified delinquent bills as those that were 31 days old or older with unpaid amounts, consistent with the requirements of 19 U.S.C. § 1505(d), which allows the charging of interest on billed but unpaid amounts after 30 days. 9. 
In order to reduce noise in our analysis, we did not test for the risk associated with products and countries from which there were fewer than 15 delinquent bills. Because of general policy interest in Chinese entries, we controlled for the interaction effect of Chinese origin and product type for products that met the volume criteria described above. We retained these interaction variables for products with more than 15 entries from China, and we removed redundant controls for products entering almost exclusively from China (i.e., products for which more than 99.5 percent of the entries originated from China). 10. We created several variables that describe importer and manufacturer characteristics derived from variables in ACS. All variables included in our model and their derivations are described in tables 2 and 3. Summary statistics for these variables are included in tables 4 and 5. We ran two regression models for each of two 5-year periods—full calendar years 2004–2008 and 2009–2013. In order to determine associations with probability of nonpayment, we regressed the binary variable “delinquent” on country of origin, product, and other shipment characteristics. As many of our variables are indicator variables, we found that a linear fit was a good approximation of a logistic regression and had the advantages of being less computationally intensive and producing coefficients that fit intuitively into a risk scorecard. In order to determine associations with size of nonpayment, we regressed the continuous variable “amount delinquent” on the same independent variables. See table 6 for regression coefficients from these models for each relevant time period. We restricted this second model to entries associated with a delinquent bill. 
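Steps 5 through 8 of the data-processing list above (computing the net bill, log-transforming long-tail amounts, and flagging bills unpaid after 30 days as delinquent) can be sketched as a single processing function. The field names and dollar amounts below are hypothetical stand-ins for the ACS variables, not actual CBP records.

```python
import math
from datetime import date

def process_entry(assessed_duty, interest, initial_payment, bill_date, paid, as_of):
    """Apply the net-bill, log-transform, and 31-day delinquency rules.
    Field names are hypothetical stand-ins for ACS variables."""
    net_bill = assessed_duty + interest - initial_payment
    if net_bill <= 0:
        return None  # entry could not have produced uncollected duties
    return {
        "net_bill": net_bill,
        "log_net_bill": math.log(net_bill),  # log transform for long-tail amounts
        # 19 U.S.C. 1505(d) allows interest on amounts unpaid after 30 days,
        # so an unpaid bill 31 or more days old is treated as delinquent.
        "delinquent": (not paid) and (as_of - bill_date).days >= 31,
    }

record = process_entry(10_000.0, 250.0, 4_000.0, date(2014, 1, 1),
                       paid=False, as_of=date(2014, 3, 1))
print(record)
```

Entries filtered out by the net-bill check correspond to those excluded in step 5, since they could not have resulted in uncollected duties.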
Model 1, run for each 5-year period: delinquent = β0 + β1*loglinevalue + β2*bondind + β3*lognetbill + β4*logentrytimegapman + β5*logentryspanman + β6*logentrytimegapimp + β7*logentryspanimp + β8*logcountman + β9*logcountimp + β10*logprevmeanman + β11*logprevmeanimp + β12*logdelimp + β13*logdelman + β14*loghowlong + β15*earlydelin + β16*initialtar + coefficient terms for the country, product, and China-product interaction indicator variables. Model 2, run for each 5-year period: logamtdelinquent = β0 + β1*loglinevalue + β2*bondind + β3*lognetbill + β4*logentrytimegapman + β5*logentryspanman + β6*logentrytimegapimp + β7*logentryspanimp + β8*logcountman + β9*logcountimp + β10*logprevmeanman + β11*logprevmeanimp + β12*logdelimp + β13*logdelman + β14*loghowlong + β15*earlydelin + β16*initialtar + coefficient terms for the country, product, and China-product interaction indicator variables (restricted to entries associated with a delinquent bill). In order to determine whether risk factors were generally stable over various periods of time and hence whether our models were in principle appropriate for forecasting, we ran a series of regressions over 2-year time periods. We selected several risk factors for review based on the size and statistical significance of their coefficients in the full 2001–2014 period regression model. We found that these risk factors, as evidenced by their coefficients, are generally stable over time in our year-by-year regression models, retaining the same sign and comparable magnitudes. However, some risk factors also change over time, for example, showing large changes in magnitude or changing sign. These results suggest that in principle models like the one we developed could be useful for forecasting risk of loss and, further, that changes over time suggest risk factor estimates should be updated periodically. In order to determine appropriate time periods for analysis, we examined the effect of including data from a range of time periods. 
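The risk-scorecard idea mentioned above, in which a linear model's fitted coefficients become additive points, can be illustrated as follows. An entry's score is the intercept plus the sum of coefficient times variable for each risk factor. The coefficient and variable values below are hypothetical, not estimates from either model.

```python
# Hypothetical coefficients keyed by the variable names used in the models;
# in practice these would come from the fitted regression, not be hand-set.
coefficients = {
    "intercept": 0.08,
    "loglinevalue": -0.01,
    "bondind": 0.12,
    "logdelimp": 0.05,   # importer's history of delinquent bills
    "logdelman": 0.03,   # manufacturer's history of delinquent bills
}

def risk_score(entry):
    """Sum the scorecard: intercept plus coefficient * value for each factor."""
    score = coefficients["intercept"]
    for factor, beta in coefficients.items():
        if factor != "intercept":
            score += beta * entry.get(factor, 0.0)
    return score

entry = {"loglinevalue": 9.2, "bondind": 1.0, "logdelimp": 1.6, "logdelman": 0.0}
print(round(risk_score(entry), 3))  # 0.188
```

Because each factor contributes a separate additive term, a reviewer can see which factors drove a given entry's score, which is the interpretability advantage of the linear fit noted in the text.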
We sequentially expanded the regression analysis to include data from 2011–2013, adding additional years up to and including 2005–2013, as well as groupings of 2005–2009 and 2004–2008. Expanding the time period of analysis may have several effects. For example, increasing the available data will generally result in more precise estimates and therefore models that are more likely to capture inherent, underlying risks. On the other hand, longer time periods may be associated with greater amounts of systemic change in risk factors and therefore models that are less reflective of contemporary underlying risks. For each time period, we constructed the model on one portion of our data and tested the model’s ability to identify the likelihood of nonpayment for entries outside of this sample (“out-of-sample predictive power”). We measured the model fit for this cross-validation process for each time period and found that the model’s out-of-sample predictive power improved until we included 5 years of data (2009–2013), at which point the predictive power roughly plateaued with an R-squared value estimated at about 0.75–0.77 out of sample for probability of nonpayment (see table 7). As a result, we believe that models constructed with 5–9 years of past data would be reasonable. We selected 5-year periods (2004–2008 and 2009–2013) for purposes of comparing the results of our models on datasets from distinct time periods. Our selection of these particular 5-year periods allowed us to compare results from two full time periods of data that correspond to the most current data available from CBP and the data from approximately the time of our 2008 report on AD/CV duties. Because fewer data are available to CBP at entry than at liquidation, we also ran out-of-sample tests without the variables for net billed amount (“netbill”) and rate review period length (“loghowlong”). 
We found that the model remains useful at entry: our ability to predict probability of nonpayment was unaffected, while our ability to predict the loss per nonpayment was reduced from an R-squared value of 0.985 to a value of 0.27. This reduction is to be expected given that loss per nonpayment is a function of the amount billed, and the amount billed is the result of CBP’s application of the AD/CV duty rate determined by Commerce after entry; “netbill” almost perfectly explains the size of loss if nonpayment occurs, and we did not include it in this version of the model. However, we found that the increase or decrease from the initial estimated duty rate to the final duty rate was, to some extent, predictable using the information that CBP has available at entry. Hence, the model works reasonably well for predicting the size of loss if nonpayment occurs before Commerce has set the final rate. Using our full set of data, not restricted to entries with netbill>0, we found that the coefficients associated with many product types were highly statistically significant, indicating that at least some can meaningfully predict rate increases and decreases; hence, the lack of information at entry about the liquidation that ultimately will be applied to the entry is mitigated to some extent. We performed an analysis of importers’ use of the new shipper bonding privilege before and after the suspension of the bonding privilege from August 2006 through July 2009. Our analysis shows the following: After the new shipper bonding privilege was reinstated, importers made much less use of it to pay initial estimated AD/CV duties compared with the period before the privilege was suspended. Most of the importers that obtained the new shipper bonds, including both before and after the suspension of the privilege, were associated with unpaid bills. 
New shippers that used a bond to pay estimated AD/CV duties did not account for a significant amount of the total unpaid debt during either of the two periods when the bonding privilege was in effect. Over the entire time frame we examined, from January 2002 through December 2013, new shippers that paid their estimated AD/CV duties in cash were associated with many fewer unpaid bills than importers that obtained new shipper bonds. In August 2006, Congress temporarily suspended the “new shipper bonding privilege” that allowed importers purchasing goods from companies undergoing a new shipper review to provide a bond, instead of cash, to cover estimated antidumping (AD) and countervailing (CV) duties due at entry. As a result, importers of these goods were required to provide a cash deposit to cover the estimated duties. However, the temporary suspension expired and the privilege was reinstated in July 2009. In February 2016, the President signed legislation removing the new shipper bonding privilege. Figure 17 shows when the new shipper bonding privilege was and was not in effect for the period of our review. We performed an analysis of importers’ use of the new shipper bonding privilege before and after the suspension of the bonding privilege. We also examined the extent to which these importers were associated with unpaid bills. In addition, we examined the extent to which importers that used a new shipper bond to pay for estimated AD/CV duties accounted for the total amount of unpaid debt. Finally, for comparison, we examined the extent to which new shippers that used cash instead of a bond to pay for estimated AD/CV duties due at entry were associated with unpaid bills. Our analysis of the new shipper bonding privilege focused on two periods: from January 2002 through July 2006 (before the new shipper bonding privilege was suspended) and from August 2009 through December 2013 (after the new shipper bonding privilege was reinstated). 
Because the use of a cash deposit was allowed during the entire period from January 2002 through December 2013, we used this period to analyze how the use of cash deposits by new shippers was associated with unpaid bills. The analysis shows that after the new shipper bonding privilege was reinstated, importers made much less use of it to pay initial estimated AD/CV duties compared with the period before the privilege was suspended. From January 2002 through July 2006, 32 importers used new shipper bonds. These importers used new shipper bonds to pay duties on 1,558 entries subject to AD/CV duties assessed at approximately $154 million. By comparison, in the period after the reinstatement of the privilege, from August 2009 through December 2013, only 1 importer used a new shipper bond. This importer used new shipper bonds to pay for 3 entries subject to AD/CV duties worth approximately $511,000. According to surety officials we interviewed, sureties tightened their underwriting standards in 2009, and this could account for the vastly reduced number of new shipper bonds issued. Moreover, our analysis also shows that most of the importers that obtained the new shipper bonds, including both before and after the suspension of the privilege, were associated with unpaid bills. For example, before the suspension (January 2002 through July 2006), 25 of the 32 importers (or approximately 78 percent) that used a new shipper bond had one or more unpaid bills. Approximately 76 percent of the bills issued to these importers during this period went unpaid. The total amount that went unpaid was approximately $89 million, with a median bill amount of about $13,000 and an average bill amount of about $75,000. From the reinstatement of the privilege through December 2013, the 1 importer that used new shipper bonds did not pay any of its bills; the total amount due was approximately $560,000. The median and average bill amounts were approximately $180,000 and $187,000, respectively. 
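The kinds of summary statistics reported here (the share of importers with unpaid bills, the share of bills unpaid, and the total, median, and average unpaid amounts) can be computed with a short sketch. The bill records below are hypothetical, not CBP data.

```python
from statistics import mean, median

def unpaid_bill_summary(bills):
    """bills: list of (importer, amount, paid) tuples -- hypothetical records.
    Returns the statistics of the kind reported in the text."""
    unpaid = [(imp, amt) for imp, amt, paid in bills if not paid]
    importers = {imp for imp, _, _ in bills}
    importers_with_unpaid = {imp for imp, _ in unpaid}
    amounts = [amt for _, amt in unpaid]
    return {
        "pct_importers_with_unpaid": round(100 * len(importers_with_unpaid) / len(importers)),
        "pct_bills_unpaid": round(100 * len(unpaid) / len(bills)),
        "total_unpaid": sum(amounts),
        "median_unpaid": median(amounts) if amounts else 0,
        "mean_unpaid": mean(amounts) if amounts else 0,
    }

bills = [("A", 13_000, False), ("A", 5_000, False), ("B", 75_000, False), ("C", 2_000, True)]
print(unpaid_bill_summary(bills))
```

Reporting both the median and the mean, as the text does, signals skew: a mean well above the median indicates a few very large unpaid bills.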
In addition, our analysis shows that new shippers that used a bond to pay for estimated AD/CV duties did not account for a significant amount of the total unpaid debt during either of the two periods when the bonding privilege was in effect. For the period from January 2002 through July 2006, these new shippers accounted for about 11 percent of all unpaid bills. For the period after the reinstatement of the privilege through December 2013, the percentage of total unpaid debt associated with new shippers that used a bond to pay for estimated AD/CV duties was less than 1 percent. Our analysis also shows that by comparison, over the entire time frame we examined, new shippers that paid their estimated AD/CV duties in cash were associated with many fewer unpaid bills. From January 2002 through December 2013, a total of 42 importers associated with a new shipper review used a cash deposit instead of a bond to pay for estimated AD/CV duties. Of the 42, only 1 was associated with an unpaid bill. This importer had two unpaid bills worth a total of about $6,000.

In addition to the contact named above, Christine Broderick (Assistant Director), José M. Peña III (Analyst in Charge), Kerri Eisenbach, Andrew Kurtzman, and Cristina Ruggiero made key contributions to this report. Also contributing were Ming Chen, Gergana Danailova-Trainor, David Dayton, Michael Hoffman, Julia Jebo Grant, Mitchell Karpman, Grace Lui, Michael Maslowski, Marc Molino, and Eddie Uyekawa.

Antidumping and Countervailing Duties: Key Challenges to Small and Medium-Sized Enterprises’ Pursuit of the Imposition of Trade Remedies. GAO-13-575. Washington, D.C.: June 25, 2013.
Antidumping and Countervailing Duties: Management Enhancements Needed to Improve Efforts to Detect and Deter Duty Evasion. GAO-12-551. Washington, D.C.: May 17, 2012.
Options for Collecting Revenues on Liquidated Entries of Merchandise Evading Antidumping and Countervailing Duties. GAO-12-131R. Washington, D.C.: November 2, 2011. 
Agencies Believe Strengthening International Agreements to Improve Collection of Antidumping and Countervailing Duties Would Be Difficult and Ineffective. GAO-08-876R. Washington, D.C.: July 24, 2008.
Antidumping and Countervailing Duties: Congress and Agencies Should Take Additional Steps to Reduce Substantial Shortfalls in Duty Collection. GAO-08-391. Washington, D.C.: March 26, 2008.
International Trade: Customs’ Revised Bonding Policy Reduces Risk of Uncollected Duties, but Concerns about Uneven Implementation and Effects Remain. GAO-07-50. Washington, D.C.: October 18, 2006.
U.S.-China Trade: Eliminating Nonmarket Economy Methodology Would Lower Antidumping Duties for Some Chinese Companies. GAO-06-231. Washington, D.C.: January 10, 2006.
U.S.-China Trade: Commerce Faces Practical and Legal Challenges in Applying Countervailing Duties. GAO-05-474. Washington, D.C.: June 17, 2005.
The United States assesses AD duties on products imported at unfairly low prices (i.e., dumped) and CV duties on products subsidized by foreign governments. Nonpayment of AD/CV duties means the U.S. government has not fully remedied unfair trade practices and results in lost revenue. GAO was asked to review CBP's efforts to improve the collection of AD/CV duties. This report examines (1) the status and composition of uncollected AD/CV duties, (2) the extent to which CBP has taken steps to improve its collection of such duties, and (3) the extent to which CBP assesses and mitigates the risk to revenue from potentially uncollectible AD/CV duties. GAO analyzed CBP AD/CV duty entry data for fiscal years 2001 through 2014, AD/CV duty billing data as of mid-May 2015, and Department of Commerce data for fiscal years 2002–2015. GAO also reviewed agency documents, interviewed agency and private sector officials, and analyzed CBP data to assess the risk of duty nonpayment. GAO estimates that about $2.3 billion in antidumping (AD) and countervailing (CV) duties owed to the U.S. government were uncollected as of mid-May 2015, based on its analysis of AD/CV duty bills for goods entering the United States in fiscal years 2001–2014. U.S. Customs and Border Protection (CBP) reported that it does not expect to collect most of that debt. GAO found that most AD/CV duty bills were paid and that unpaid bills were concentrated among a small number of importers, with 20 accounting for about 50 percent of the $2.3 billion uncollected. CBP data show that most of those importers stopped importing before receiving their first AD/CV duty bill. As GAO has previously reported, the U.S. AD/CV duty system involves the retrospective assessment of duties, such that the final amount of AD/CV duties an importer owes can significantly exceed the initial amount paid at the estimated duty rate when the goods entered the country. 
CBP has undertaken efforts to improve its collection of AD/CV duties or to protect against the risk of unpaid final duty bills through bonding, but these efforts have yielded limited results. For example, CBP launched an initiative to reduce processing errors that result in CBP closing duty bills at the initial duty rate rather than the final duty rate, such that the initial duty paid may be significantly higher or lower than the final duty amount owed. Though the initiative has shown positive results, as of May 2016, its application had been limited. In addition, CBP had not collected and analyzed data systematically to help it monitor and minimize these duty processing errors. As a result, CBP does not know the extent of these errors and cannot take timely or effective action and avoid the potential revenue loss they may represent. CBP's limited analysis of the risk to revenue from potentially uncollectible AD/CV duties (nonpayment risk) misses opportunities to identify and mitigate nonpayment risk. The standard definition of risk with regard to some negative event that could occur includes both the likelihood of the event and the significance of the consequences if the event occurs; however, CBP does not attempt to assess either of these risk components for any given entry of goods subject to AD/CV duties. GAO's analysis, applying standard statistical methods, demonstrates that a more comprehensive analysis of CBP data related to AD/CV duties is feasible and could help CBP better identify key factors associated with nonpayment risk and take steps to mitigate it. 
GAO recommends that CBP (1) issue guidance to collect and analyze data on a regular basis to find and address the causes of AD/CV duty liquidation errors and track progress; (2) regularly conduct a comprehensive risk analysis that considers likelihood as well as significance of risk factors related to duty nonpayment; and (3) take steps to use its data and risk assessment strategically to mitigate AD/CV duty nonpayment consistent with U.S. law and international trade obligations. CBP concurred with all three recommendations.
Borrowers arrange residential mortgages through either mortgage lenders or brokers. The funding for mortgages can come from federally or state- chartered banks, mortgage lending subsidiaries of these banks or financial holding companies, or independent mortgage lenders, which are neither banks nor affiliates of banks. Mortgage brokers act as intermediaries between lenders and borrowers, and for a fee, help connect borrowers with various lenders who may provide a wider selection of mortgage products. Mortgage lenders may keep the loans that they originated or purchased from brokers in their portfolios or sell the loans in the secondary mortgage market. Government-sponsored enterprises (GSEs) or investment banks pool many mortgage loans that lenders sell to the secondary market, and these GSEs or investment banks then sell claims to these pools to investors as mortgage-backed securities (MBS). Lenders consider whether to accept or reject a borrower’s loan application in a process called underwriting. During underwriting, the lender analyzes the borrower’s ability to repay the debt. For example, lenders may determine ability to repay debt by calculating a borrower’s DTI ratio, which consists of the borrower’s fixed monthly expenses divided by gross monthly income. The higher the DTI ratio, the greater the risk the borrower will have cash-flow problems and miss mortgage payments. During the underwriting process, lenders usually require documentation of borrowers’ income and assets. Another important factor lenders consider during underwriting is the amount of down payment the borrower makes, which usually is expressed in terms of a LTV ratio (the larger the down payment, the lower the LTV ratio). The LTV ratio is the loan amount divided by the lesser of the selling price or appraised value. The lower the LTV ratio, the smaller the chance that the borrower would default, and the smaller the loss if the borrower were to default. 
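The two underwriting ratios described above reduce to simple formulas, sketched below with a hypothetical borrower.

```python
def dti_ratio(fixed_monthly_expenses, gross_monthly_income):
    """Debt-to-income: higher values signal greater cash-flow risk."""
    return fixed_monthly_expenses / gross_monthly_income

def ltv_ratio(loan_amount, selling_price, appraised_value):
    """Loan-to-value: the loan divided by the lesser of price or appraisal;
    a larger down payment means a lower LTV and a smaller loss on default."""
    return loan_amount / min(selling_price, appraised_value)

# Hypothetical borrower: $2,100 fixed monthly expenses on $6,000 gross income,
# borrowing $240,000 on a home selling for $300,000 and appraised at $310,000.
print(f"DTI: {dti_ratio(2_100, 6_000):.0%}")
print(f"LTV: {ltv_ratio(240_000, 300_000, 310_000):.0%}")
```

Note that the LTV denominator is the lesser of the two values, so a generous appraisal cannot lower the ratio below what the selling price implies.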
Additionally, lenders evaluate the borrowers’ credit history using various measures. One of these measures is the borrowers’ credit score, a numerical measure based on an individual’s credit payment history and outstanding debt. Mortgage loans may be made to prime or subprime borrowers. Prime borrowers are those with good credit histories that put them at low risk of default. In contrast, subprime borrowers have poor or no credit histories, and therefore cannot meet the credit standards for obtaining a prime loan. Chartering agencies oversee federally and state-chartered banks and their mortgage lending subsidiaries. At the federal level, OCC, OTS, and NCUA oversee federally chartered banks (including mortgage operating subsidiaries), thrifts, and credit unions, respectively. The Federal Reserve oversees insured state-chartered member banks, while FDIC oversees insured state-chartered banks that are not members of the Federal Reserve System. Both the Federal Reserve and FDIC share oversight with the state regulatory authority that chartered the bank. The Federal Reserve also oversees mortgage lending subsidiaries of financial holding companies, although FTC is responsible for enforcement of certain federal consumer protection laws as discussed in the following text. Federal banking regulators have responsibility for ensuring the safety and soundness of the institutions they oversee and for promoting stability in the financial markets. To achieve these goals, regulators establish capital requirements for banks, conduct on-site examinations and off-site monitoring to assess their financial condition, and monitor their compliance with applicable banking laws, regulations, and agency guidance. As part of their examinations, for example, regulators review mortgage lending practices, including underwriting, risk management, and portfolio management practices. Regulators also try to determine the amount of risk lenders have assumed. 
From a safety and soundness perspective, risk involves the potential that events, either expected or unanticipated, may have an adverse impact on the bank’s capital or earnings. In mortgage lending, regulators pay close attention to credit risk. Credit risk involves the concerns that borrowers may become delinquent or default on their mortgages and that lenders may not be paid in full for the loans they have originated. Certain federal consumer protection laws, including TILA and the act’s implementing regulation, Regulation Z, apply to all mortgage lenders, including mortgage brokers that close loans in their own name. Implemented by the Federal Reserve, Regulation Z requires these creditors to provide borrowers with written disclosures describing basic information about the terms and cost of their mortgage. Each lender’s primary federal supervisory agency holds responsibility for enforcing Regulation Z. Regulators use examinations and consumer complaint investigations to check for compliance with both the act and its regulation. FTC is responsible for enforcing certain federal consumer protection laws for brokers and lenders that are not depository institutions, including state-chartered independent mortgage lenders and mortgage lending subsidiaries of financial holding companies. However, FTC is not a supervisory agency; instead, it ensures compliance with various federal consumer protection laws through enforcement actions. The FTC uses a variety of information sources in the enforcement process, including its own investigations, consumer complaints, and state and other federal agencies, among other sources. State regulators oversee independent lenders and mortgage brokers and do so by generally requiring business licenses that mandate meeting net worth, funding, and liquidity thresholds. They may also mandate certain experience, education, and operational requirements to engage in mortgage activities. 
Other common requirements for licensees may include maintaining records for certain periods, individual prelicensure testing, posting surety bonds, and participating in continuing education activities. States may also examine independent lenders and mortgage brokers to ensure compliance with licensing requirements, review their lending and brokerage functions for state-specific and federal regulatory compliance, and look for unfair or unethical business practices. When such practices arise, or are brought to states' attention through consumer complaints, regulators and State Attorneys General may pursue actions that include licensure suspension or revocation, monetary fines, and lawsuits. The volume of interest-only and payment-option ARMs grew rapidly between 2003 and 2005 as home prices increased nationwide and lenders marketed these products as an alternative to conventional mortgage products. During this period, AMP lending was concentrated in the higher-priced real estate markets on the East and West Coasts. Also at that time, a variety of federally and state-regulated lenders participated in the AMP market, although a few large federally regulated lenders dominated lending. Once considered a financial management tool for wealthier borrowers, AMPs have been marketed by lenders as affordability products that enable borrowers to purchase homes they might not be able to afford using conventional fixed-rate mortgages. Furthermore, lenders have increased the variety of AMP products offered to respond to changing market conditions. As home prices increased nationally and lenders offered alternatives to conventional mortgages, AMP originations tripled in recent years, growing from less than 10 percent of residential mortgage originations in 2003 to about 30 percent in 2005. Most of the AMPs originated during this period consisted of interest-only or payment-option ARMs. In 2005, originations of these two products totaled $400 billion and $175 billion, respectively. 
According to federal regulatory officials, consumer demand for these products grew because their low initial monthly payments enabled borrowers to purchase homes that they otherwise might not have been able to afford with a conventional fixed-rate mortgage. AMP lending has been concentrated in the higher-priced regional markets on the East and West Coasts, where homes are least affordable. For example, based on data from mortgage securitizations in 2005, about 47 percent of interest-only ARMs and 58 percent of payment-option ARMs that were securitized in 2005 originated in California, where NAR reports that 7 of the 20 highest-priced metropolitan real estate markets in the country are located. On the East Coast, Virginia, Maryland, New Jersey, Florida and Washington, D.C., exhibited high concentrations of AMP lending in 2005, as did Washington, Nevada, and Arizona on the West Coast. These areas also have experienced higher rates of house price appreciation than the rest of the United States. A variety of federally and state-regulated lenders were involved in the recent surge of AMP originations. Six large federally regulated lenders dominated much of the AMP production in 2005, producing 46 percent of interest-only and payment-option ARMs originated in the first 9 months of that year. The six included nationally chartered banks and thrifts under the supervision of OCC and OTS as well as mortgage lending subsidiaries of financial holding companies under the supervision of the Federal Reserve. Although these six large, federally-regulated institutions accounted for a large share of AMP lending in that year, other federally and state-regulated lenders also participated in the AMP market, including other nationally and state chartered banks and independent nonbank lenders. Additionally, independent mortgage brokers have been an important source of originations for AMP lenders. 
Some mortgage brokers in states with high volumes of AMP lending told us in early 2006 that they estimated interest-only and payment-option ARM lending accounted for as much as 35 to 50 percent of their recent business. Once considered a specialized product, AMPs have entered the mainstream marketplace in higher-priced real estate markets. According to federal regulatory officials and a mortgage lending trade association, lenders originally developed and marketed interest-only and payment-option ARMs as specialized products for higher-income, financially sophisticated borrowers who wanted to minimize mortgage payments to invest funds elsewhere. Additionally, they said that other borrowers who found AMPs suitable included borrowers with irregular earnings who could take advantage of interest-only or minimum monthly payments during periods of lower income and could pay down principal and any deferred interest when they received an increase in income. However, according to federal banking regulators and a range of industry participants, as home prices increased rapidly in some areas of the country, lenders began marketing interest-only and payment-option ARMs widely as affordability products. They also said that in doing so, lenders emphasized the low initial monthly payments offered by these products and made them available to less creditworthy and less wealthy borrowers than those who traditionally used them. After the recent surge of interest-only and payment-option ARMs, lenders have increased the variety of AMPs offered as market conditions have changed. According to industry analysts, as interest rates continued to rise, by the beginning of 2006, mortgages with adjustable rates no longer offered the same cost-savings over fixed-rate mortgages, and borrowers began to shift to fixed-rate products. These analysts reported that in response to this trend, lenders have begun to market mortgages that are less sensitive to interest rate increases. 
For example, interest-only fixed-rate mortgages (interest-only FRMs) offer borrowers interest-only payments for up to 10 years but at a fixed interest rate over the life of the loan. Another mortgage that has gained in popularity is the 40-year mortgage. This product does not allow borrowers to defer interest or principal, but offers borrowers lower monthly payments than conventional mortgages. For example, some variations of the 40-year mortgage have a standard 30-year loan term, but offer lower fixed monthly payments that are based on a 40-year amortization schedule for part or all of the loan term. According to one professional trade publication, 37 percent of mortgage originations in the first half of 2006 were AMPs, and a significant number of them were 40-year mortgages. Depending on the particular loan product and the payment option the borrower chooses, rising interest rates or choice of a minimum monthly payment and corresponding negative amortization can significantly raise future monthly payments and increase the risk of default for some borrowers. Underwriting trends that, among other things, allowed borrowers with fewer financial resources to qualify for these loans have heightened this risk because such borrowers may have fewer financial reserves against financial adversity and may be unable to sustain future higher monthly payments in the event that they cannot refinance their mortgages or sell their home. Higher default risk for borrowers translates into higher credit risk for lenders, including banks. However, federal regulatory officials and industry participants agree that it is too soon to tell whether risks to borrowers will result in significant delinquencies and foreclosures for borrowers and corresponding losses for banks that hold AMPs in their portfolios. 
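The payment advantage of a 40-year amortization schedule described above can be checked with the standard annuity formula. The sketch below is illustrative only; the $400,000 balance and 6 percent rate are assumptions, not figures from this report:

```python
def monthly_payment(principal, annual_rate, n_months):
    """Fully amortizing monthly payment (standard annuity formula)."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -n_months)

# Hypothetical loan: $400,000 at an assumed 6 percent fixed rate.
p30 = monthly_payment(400_000, 0.06, 360)  # 30-year amortization
p40 = monthly_payment(400_000, 0.06, 480)  # 40-year amortization
# Stretching amortization to 40 years lowers the payment by roughly
# $200 per month in this case, but principal is repaid far more slowly;
# with a 30-year term, the unamortized balance comes due at maturity.
```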
AMPs such as interest-only and payment-option ARMs are initially more affordable than conventional fixed-rate mortgages because during the first few years of the mortgage they allow a borrower to defer repayment of principal and, in the case of payment-option ARMs, part of the interest as well. Specifically, borrowers with interest-only ARMs can make monthly payments of just interest for the fixed introductory period. Borrowers with payment-option ARMs typically have four payment options. The first two options are fully amortizing payments that are based on either a 30-year or 15-year payment schedule. The third option is an interest-only payment, and the fourth is a minimum payment, which we previously described, that does not cover all of the interest. Interest that is not paid is capitalized into the outstanding loan balance, resulting in negative amortization. The deferred payments associated with interest-only and payment-option ARMs will eventually result in higher monthly payments after the introductory period expires. For example, for interest-only mortgages, payments will rise at the expiration of the fixed interest-only period to include repayment of principal. Similarly, when the payment-option period ends for a payment-option ARM, the monthly payments will adjust to require an amount sufficient to fully amortize the outstanding loan balance, including any deferred interest and principal, over the remaining loan term. Depending on the particular loan product, a combination of rising interest rates and deferred or negative amortization can raise monthly payments twofold or more, causing payment shock for those borrowers who cannot avoid and are not prepared for these larger payments. For example, consider the borrower in the following example who took out a $400,000 payment-option ARM in April 2004. The borrower's payment options for the first year ranged from a minimum payment of $1,287 to a fully amortizing payment of $2,039. 
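The four payment options in the $400,000 example can be reproduced with the standard annuity formula. A minimal sketch, assuming (a common convention, though not stated in the report) that the minimum payment is the fully amortizing payment computed at the 1 percent teaser rate:

```python
def monthly_payment(principal, annual_rate, n_months):
    """Fully amortizing monthly payment (standard annuity formula)."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -n_months)

PRINCIPAL = 400_000
INDEXED = 0.0455  # fully indexed rate in the report's April 2004 example
TEASER = 0.01     # promotional "teaser" rate for the first month

full_30yr = monthly_payment(PRINCIPAL, INDEXED, 360)  # ~ $2,039
full_15yr = monthly_payment(PRINCIPAL, INDEXED, 180)  # fastest paydown
interest_only = PRINCIPAL * INDEXED / 12              # ~ $1,517
minimum = monthly_payment(PRINCIPAL, TEASER, 360)     # ~ $1,287
# Any payment below the interest-only amount defers interest, which is
# added to the balance (negative amortization).
```

These figures reproduce the $1,287 and $2,039 bounds quoted in the example.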
Figure 1 shows how monthly payments for the borrower who chose to make only the minimum monthly payments during the 5-year payment-option period could increase from $1,287 to $2,931 or 128 percent, when that period expires. The example in figure 1 assumes loan features that were typical of payment-option ARMs offered during 2004, including a promotional “teaser” rate of 1 percent for the first month of the loan, which set minimum monthly payments for the first year at $1,287; a payment reset cap, which limits any annual increases in minimum monthly payments due to rising interest rates to 7.5 percent for the first five years of the loan; and a negative amortization cap, which limits the amount of deferred interest that could accrue during the first five years until the mortgage balance reaches 110 percent of its original amount, and if reached, triggers a loan recast to fully amortizing payments. After the first month, the start rate of 1 percent expired and the interest due on the loan was calculated on the basis of the fully indexed interest rate, which was 4.55 percent in April 2004 and rose to 6.61 percent in April 2006. Minimum monthly payments were adjusted upward every April, but only by the maximum 7.5 percent allowed. By year 5, the minimum payments reset to $1,718, a 33 percent increase from the initial minimum payment required in year 1. As shown in figure 1, these minimum monthly payments were not enough to cover the interest due on the loan after the start rate expired in the first month of year 1, and the loan immediately began to negatively amortize. By year 2, the loan balance increased by $3,299. As interest rates rose, the amount of deferred interest grew more quickly, reaching $33,446 by the beginning of year 6. Because the start of year 6 marked the end of the 5-year payment-option period, the loan recast to require fully amortizing monthly payments of $2,931. 
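The 7.5 percent payment reset cap and the year-6 recast in the figure 1 example can be checked arithmetically. This sketch assumes, as in the example, that rising rates make the full 7.5 percent increase bind every year; the 6.5 percent recast rate is an assumption chosen for illustration, not a figure from the report:

```python
def monthly_payment(principal, annual_rate, n_months):
    """Fully amortizing monthly payment (standard annuity formula)."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -n_months)

def capped_minimums(first_year_payment, cap=0.075, years=5):
    """Minimum-payment schedule when each annual increase hits the cap."""
    schedule = [first_year_payment]
    for _ in range(years - 1):
        schedule.append(schedule[-1] * (1 + cap))
    return schedule

schedule = capped_minimums(1287)  # year-5 minimum ~ $1,718
# At recast, the $433,446 balance (original $400,000 plus deferred
# interest) amortizes over the remaining 25 years; the 6.5 percent
# rate used here is assumed, not taken from the report.
recast = monthly_payment(433_446, 0.065, 300)  # ~ $2,930
```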
This payment represented a 70 percent increase from the minimum monthly payment required a year earlier and a 128 percent increase from the initial minimum monthly payment in year 1. Note that the largest monthly payment increase occurred at this time, reflecting the combined effect of a fully amortizing payment that is calculated on the basis of both the fully indexed interest rate and the increased loan balance. Federal regulatory officials have cautioned that the risk of default could increase for some recent AMP borrowers. This is because lenders have marketed these products to borrowers who are not as wealthy or financially sophisticated as previous borrowers, and because rising interest rates, combined with constraints on the growth in minimum payments imposed by low teaser rates, have increased the potential for payment shock. FDIC officials expressed particular concern over payment-option ARMs, as they are more complex than interest-only products and have the potential for negative amortization and bigger payment shocks. Mortgage statistics of recently securitized interest-only and payment-option ARMs show a relaxation of underwriting standards regarding credit history, income, and available assets during the years these products increased in popularity. According to one investment bank, interest-only mortgages that were part of subprime securitizations were negligible in 2002, but rose to almost 29 percent of subprime securitizations in 2005. Lenders also originated payment-option ARMs to borrowers with increasingly lower credit scores (see table 1). Besides permitting lower credit scores, lenders increasingly qualified borrowers with fewer financial resources. For example, lenders allowed higher DTI ratios for some borrowers and began combining AMPs with “piggyback” mortgages—that is, second mortgages that allow borrowers with limited or no down payments to finance a down payment. 
As table 1 shows, by June 2005, 25 percent of securitized payment-option ARMs included piggyback mortgages—up from zero percent 5 years earlier. Furthermore, lenders increasingly have qualified borrowers for AMPs under “low documentation” standards, which allow for less detailed proof of income or assets than lenders traditionally required. Federal banking regulators cautioned that “risk-layering,” which results from the combination of AMPs with one or more relaxed underwriting practices, could increase the likelihood that some borrowers might not withstand payment shock and may go into default. In particular, federal regulatory officials said that some recent AMP borrowers, particularly those with low income and little equity, may have fewer financial reserves against financial adversity, which could impact their ability to sustain future higher monthly payments in the event that they cannot refinance their mortgages or sell their homes. Although concerns about the effect of risk-layering exist, OCC officials observed that while underwriting characteristics for AMPs have trended downward over the past few years, lenders generally attempt to mitigate the additional credit risk of AMPs compared to traditional mortgages by having at least one underwriting criterion (such as LTV ratio, DTI ratio, or loan size) tighter for AMPs than for a traditional mortgage. In addition, both OCC and Federal Reserve officials said that most lenders qualify payment-option ARM borrowers at the fully indexed rate, and not the teaser rate, suggesting that these borrowers have the financial resources to either make more than the minimum monthly payment or to manage any future rise in monthly payments. However, Federal Reserve officials said that borrowers of interest-only loans are qualified on the interest-only payment. 
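The difference between qualifying a borrower at the fully indexed rate and qualifying on the interest-only payment can be seen in a simple debt-to-income calculation. All of the figures here (income, other debt, loan size, and rate) are hypothetical:

```python
def monthly_payment(principal, annual_rate, n_months):
    """Fully amortizing monthly payment (standard annuity formula)."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -n_months)

def back_end_dti(housing_payment, other_debt, gross_monthly_income):
    """Back-end DTI: all monthly debt obligations over gross income."""
    return (housing_payment + other_debt) / gross_monthly_income

# Hypothetical borrower: $8,000 gross monthly income, $500 other debt,
# $400,000 loan at an assumed 6.1 percent fully indexed rate.
dti_fully_indexed = back_end_dti(monthly_payment(400_000, 0.061, 360),
                                 500, 8_000)
dti_interest_only = back_end_dti(400_000 * 0.061 / 12, 500, 8_000)
# Qualifying on the interest-only payment yields a lower measured DTI,
# which is why regulators view it as the looser underwriting standard.
```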
For borrowers who intend to refinance their mortgages to avoid higher monthly payments, FDIC officials expressed concern that some may face prepayment penalties that could make refinancing expensive. In particular, they said that borrowers with payment-option ARMs who choose the minimum payment option could reach the negative amortization cap well before the expiration of the 5-year payment-option period, triggering a loan recast to fully amortizing payments, the need to refinance the mortgage, and the imposition of prepayment penalties. Some recent borrowers may find that they do not have sufficient equity in their homes to refinance or even to sell, particularly if their loans have negatively amortized or they have borrowed with little or no down payment. Again, consider the borrower in figure 1. To avoid the increase in monthly payments when the loan recasts at the end of year 5, the borrower would either have to refinance the mortgage or sell the home. However, because the borrower made only minimum payments, the $400,000 debt would have increased to $433,446. To the extent that the home’s value has risen faster than the outstanding mortgage, or the borrower contributed a substantial down payment, the borrower might have enough equity to obtain refinancing or could sell the house and pay off the loan. However, if the borrower has little or no equity and home prices remain flat or fall, the borrower could easily have a mortgage that exceeds the value of his or her home, thereby making refinancing or a home sale very difficult. According to an investment bank, as of July 2006, about 75 percent of payment-option ARMs originated and securitized in 2004 and 2005 were negatively amortizing, meaning that borrowers were making minimum monthly payments, and more than 70 percent had loan balances that exceeded the original loan balances. 
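The early-recast scenario FDIC officials describe, in which the balance reaches the negative amortization cap before the 5-year option period ends, can be illustrated with a month-by-month simulation. The constant 7 percent rate and the fixed $1,287 minimum payment are simplifying assumptions (in practice the rate floats and the minimum steps up annually):

```python
def months_to_negam_cap(principal, annual_rate, min_payment, cap_ratio=1.10):
    """Simulate a negatively amortizing loan month by month and return the
    first month in which the balance reaches the cap, plus that balance.
    Assumes a constant rate and a fixed minimum payment (simplifications)."""
    balance, month = float(principal), 0
    cap = principal * cap_ratio
    while balance < cap:
        month += 1
        balance += balance * annual_rate / 12 - min_payment
    return month, balance

# Hypothetical: $400,000 at an assumed constant 7 percent rate with a
# fixed $1,287 minimum payment reaches the 110 percent cap in under
# three years, well before the 5-year option period expires.
months, balance = months_to_negam_cap(400_000, 0.07, 1287)
```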
Federal Reserve officials also said they are concerned that some recent borrowers who used AMPs to purchase homes for investment purposes may be less inclined to avoid defaulting on their loans when faced with financial distress, because mortgage delinquency and default rates are typically higher for these borrowers than for borrowers who use mortgages to purchase their primary residences. According to these officials, borrowers who used AMPs for investment purposes may have less incentive to try to find a way to make their mortgage payments if confronted with payment shock or difficulties in refinancing or selling, because they would not lose their primary residence in the event of a default. According to FDIC officials, this risk is particularly acute when the borrower has made little or no down payment. Although the majority of borrowers used AMPs to purchase their primary residence, data on recent payment-option ARM securitizations indicate that 14.4 percent of AMPs originated in 2005 were used by borrowers to purchase homes for purposes other than use as a primary residence, up from 5.3 percent in 2000. However, these data did not show the proportion of these originations that were used to purchase homes for investment purposes as compared to second homes. AMP underwriting practices may have increased the risk of payment shock and default for some borrowers, resulting in increased credit risk for lenders, including banks. However, federal regulatory officials said that most banks appeared to be managing this credit risk. First, they said that banks holding the bulk of residential mortgages, including AMPs, are the larger, more diversified financial institutions that would be able to better withstand losses from any one business line. 
Second, they said that most banks appear to have diversified their assets sufficiently and maintained adequate capital to manage the credit risk of AMPs held in their portfolios or have reduced their risk through loan sales and securitizations. Investment and mortgage banking officials told us that hedge funds, real estate investment trusts, and foreign investors are among the largest investors in the riskiest classes of these securities, and that these investors largely would bear the credit risk from any AMP defaults. In addition, several regulatory officials noted that borrowers who have turned to interest-only FRMs are subject to less payment shock than interest-only and payment-option ARM borrowers. As we previously discussed, interest-only FRMs are not sensitive to interest rate changes. For example, the amount of the initial interest-only payment and the later fully amortizing payment are known at the time of loan origination for an interest-only FRM and do not vary. Furthermore, these products tend to feature a longer period of introductory payments than did the interest-only and payment-option ARMs sold earlier, thus giving the borrower more time to prepare financially for the increase in monthly payments or plan to refinance or sell. Federal regulatory officials and industry participants agree that it is too soon to tell how many borrowers with AMPs will become delinquent or go into foreclosure, thereby producing losses for banks that hold AMPs in their portfolios. Most of the AMPs issued between 2003 and 2005 have not recast; therefore, most of these borrowers have not yet experienced payment shock or financial distress. As a result, lenders generally do not yet have the performance data on delinquencies that would serve as an indicator of future problems. Furthermore, the credit profile of recent AMP borrowers is different from that of traditional AMP borrowers, because it includes less creditworthy and less affluent borrowers. 
Consequently, it would be difficult to use past performance data to predict how many loans would be refinanced before payment shock sets in and how many delinquencies and foreclosures could result for those borrowers who cannot sustain larger monthly payments. The information that borrowers receive about their loans through advertisements and disclosures may not fully or effectively inform them about the risk of AMPs. Federal and state banking regulatory officials expressed concern that advertising practices by some lenders and brokers emphasized the affordability of these products without adequately describing their risks. Furthermore, a recent Federal Reserve staff study and state complaint data indicated that some borrowers appeared to not understand (1) the terms of their ARMs, including AMPs, and (2) the potential magnitude of changes to their monthly payments or loan balance. As AMPs are more complex than conventional mortgage products and advertisements may not provide borrowers with balanced information on these products, it is important that written disclosures provide borrowers with clear and comprehensive information about the key terms, conditions, and costs of these mortgages to help them make an informed decision. That information is conveyed both through content and presentation, including writing style and design. With respect to content, Regulation Z, which includes requirements for mortgage disclosures, requires all creditors (lenders and those brokers that close loans in their own name) to provide borrowers with information about their ARM products. However, these requirements are not designed to address more complex products such as AMPs. The Federal Reserve has recently initiated a review of Regulation Z that will include reviewing the disclosures required for all mortgage loans, including AMPs. 
With respect to presentation, current federal government guidance suggests good practices for developing disclosures that effectively communicate key information on financial products. The AMP disclosures we reviewed often did not fully or effectively explain the risks of payment shock or negative amortization for these products and lacked information on some important loan features, both because Regulation Z currently does not require lenders to tailor this information to AMPs and because lenders do not always follow leading practices for writing disclosures that are clear, concise, and user-friendly. According to Federal Reserve officials, revising Regulation Z to require better disclosures of the key terms and risks of AMPs could increase borrower understanding of these complex mortgage products, particularly if a broader effort were made to simplify and clarify mortgage disclosures generally. Officials added that borrowers who do not understand their AMPs may not anticipate the substantial increase in monthly payments or loan balance that can occur. Borrowers can acquire information on mortgage options from a variety of sources, including loan officers and brokers, or as noted by mortgage industry participants, through the Internet, television, radio, and telemarketing. However, federal regulatory officials expressed concerns that some consumers may have difficulty understanding the terms and risks of these complex products. These concerns have been heightened as advertisements by some lenders and brokers emphasize the benefits of AMPs without explaining the associated risks. For example, one print advertisement for a payment-option ARM product we obtained stated on the first page that the loan “started” at an interest rate of 1.25 percent, promised a reduction in the homeowner’s monthly mortgage payment of up to 45 percent, and offered three low monthly payment options. 
However, the lender noted in much smaller print on the second page that the 1.25 percent interest rate applied only to the first month of the loan and could increase or decrease on a monthly basis thereafter. Federal regulatory officials said that less financially sophisticated borrowers might be drawn to the promise of initial low monthly payments and flexible payment options and may not realize the potential for substantial increases in monthly payments and loan balance later. Officials from three of the eight states we contacted reported similar concerns with AMP advertising distributed by the nonbank lenders and independent brokers under their supervision. For example, one official from Ohio told us that some brokers advertised the availability of large loans with low monthly payments and only specified in tiny print at the bottom of the advertisements that the offer involved interest-only products. According to this official, small print makes it more difficult for the consumer to see these provisions and more likely for the consumer not to read them at all. Regulatory officials in Alaska told us some advertisements circulating in their state stated that consumers could save money by using interest-only products, without disclosing that over time these loans might cost more than a conventional product. In some cases, the advertisements were potentially misleading. For example, New Jersey officials provided us with a copy of an AMP advertisement that promised potential borrowers low monthly payments by suggesting that the teaser rate (termed “payment rate” in the advertisement) on a payment-option ARM product was the actual interest rate for the full term of the loan (see figure 2). The officials also said that advertising a rate other than the annual percentage rate (APR), without also including the APR (as seen in the advertisement shown in fig. 2) is contrary to the requirements of Regulation Z. 
Industry representatives also expressed concerns about AMP advertising. In 2005, the California Association of Mortgage Brokers issued an alert to warn the public about misleading AMP advertisements circulating in the state. The advertisements offered low monthly payments without clearly stating that these payments were temporary, and that the loan could become significantly more costly over time. A recent Federal Reserve staff study and state complaint data indicate that some borrowers appeared to not fully understand the terms and features of their ARMs, including AMPs, and were surprised by the increases in monthly payments or loan balance. In January 2006, staff economists at the Federal Reserve published the results of a study that assessed whether homeowners understood the terms of their mortgages. The study was based, in part, on data obtained from the Federal Reserve’s 2001 Survey of Consumer Finances, which included questions for consumers on the terms of their ARMs. While most homeowners reported knowing their broad mortgage terms reasonably well, some borrowers with ARMs, particularly those from households with lower income and less education, appeared to underestimate the amount by which their interest rates, and thus their monthly payments, could change. The authors suggested that this underestimation might be explained, in part, by borrower confusion about the terms of their mortgages. Although they found that most households in 2001 were unlikely to experience large and unexpected changes in their mortgage payments in the event of a rise in interest rates, some borrowers might be surprised by the change in their payments and subsequently might experience financial difficulties. The Federal Reserve staff study focused on borrowers holding ARM products in 2001—not AMPs. 
However, as we previously discussed, most AMP products sold between 2003 and 2005 were interest-only and payment-option ARMs that lenders increasingly marketed and sold to a wider spectrum of borrowers. Federal regulatory officials and consumer advocates said that since AMPs tend to have more complicated terms and features than ARMs, borrowers who have these mortgages would be likely to (1) underestimate the potential changes in their interest rates and (2) experience confusion about the terms of their mortgages and amounts of their payments. Because most AMPs have not recast to fully amortizing payments, many borrowers are still making lower monthly payments that do not cover repayment of deferred principal. However, five of the eight states we contacted reported receiving some complaints about AMPs from borrowers who did not understand their loan terms and were surprised by increases in their monthly payments or loan balances. For example, some borrowers with payment-option ARMs complained that they did not know that their loans could negatively amortize until they received their payment coupons and saw that their loan balance had increased. In one case, a borrower believed that the teaser rate would be in effect for 1 or more years, when in fact it was in effect for only the first month. Officials from one state said that they anticipated receiving more consumer complaints regarding AMPs as these mortgages recast over the next several years to require fully amortizing payments. As AMPs are more complex than conventional mortgages and advertisements sometimes expose borrowers to unbalanced information about them, it is important that the written disclosures they receive about these products from creditors provide them with comprehensive information about the terms, conditions, and costs of these loans. Disclosures convey that information in the following two ways: content and presentation. 
Federal statute and regulation mandate a certain level of content in mortgage disclosures through TILA and Regulation Z. The purpose of both TILA and Regulation Z, which implements the statutory requirements of TILA, is to promote the informed use of credit by requiring creditors to provide consumers with disclosures about the terms and costs of their credit products, including their mortgages. Some of Regulation Z’s mortgage disclosure requirements are mandated by TILA. Under Regulation Z, creditors are required to provide three disclosures for a mortgage product with an adjustable rate: a program-specific disclosure that describes the terms and features of the product, a copy of the federally authored handbook on ARMs, and a transaction-specific TILA disclosure that provides the borrower with specific information on the cost of the loan. First, Regulation Z requires that creditors provide a program-specific disclosure for each adjustable-rate product the borrower is interested in when the borrower receives a loan application or has paid a nonrefundable fee. Among other things, lenders must include a statement that the interest rate, payment, or loan term may change; an explanation of how the interest rate and payment will be determined; the frequency of interest rate and payment changes; any rules relating to changes in the index, interest rate, payment amount, and outstanding loan balance—including an explanation of negative amortization if it is permitted for the product; and an example showing how monthly payments on a $10,000 loan amount could change based on the terms of the loan. Second, Regulation Z also requires creditors to give all borrowers interested in an ARM a copy of the Consumer Handbook on Adjustable Rate Mortgages or CHARM booklet. The Federal Reserve and OTS wrote the booklet to explain how ARMs work and some of the risks and advantages to borrowers that ARMs introduce, including payment shock, negative amortization, and prepayment penalties. 
Finally, for both fixed-rate and adjustable-rate loans used to purchase homes, lenders are required to provide a transaction-specific TILA disclosure to borrowers within 3 days of loan application. For other home-secured loans, this disclosure must be provided before the loan closes. The TILA disclosure reflects loan-specific information, such as the amount financed by the loan, related finance charges, and the APR. Lenders also must include a payment schedule, reflecting the number, amounts, and timing of payments needed to repay the loan. The Federal Reserve periodically has updated Regulation Z in response to new mortgage features and lending practices. For example, in December 2001, the Federal Reserve amended the Regulation Z provisions that implement the Home Ownership and Equity Protection Act (HOEPA), which requires additional disclosures with respect to certain high-cost mortgage loans. The Federal Reserve has also developed model disclosure forms to help lenders achieve compliance with the current requirements. According to federal regulatory officials, current Regulation Z requirements are designed to address traditional fixed-rate and adjustable-rate products—not more complex products such as AMPs. Consequently, lenders are not required to tailor the mortgage disclosures to communicate information on the potential for payment shock and negative amortization specific to AMPs. The Federal Reserve has recently initiated a review of Regulation Z that will include reviewing the disclosures required for all mortgage loans, including AMPs. In addition, the Federal Reserve has begun taking steps to consider revisions that would specifically address AMPs. During the summer of 2006, the Federal Reserve held a series of four hearings across the country on home-equity lending. 
Federal Reserve officials said that a major focus of these hearings was on AMPs, including the adequacy of consumer disclosures for these products, how consumers shop for home-secured loans, and how to design more effective disclosures. According to these officials, they are currently reviewing the hearing transcripts and public comment letters as a first step in developing plans and recommendations for revising Regulation Z. In addition, they said that they are currently revising the CHARM booklet to include information about AMPs and are planning to publish a consumer education brochure concerning these products. As we previously noted, the presentation of information in disclosures helps convey information. Regulation Z requires that the mortgage disclosures lenders provide to consumers be clear and conspicuous. Current leading practices in the federal government provide useful guidance on developing financial product disclosures that effectively present and communicate key information on these products. The SEC publishes A Plain English Handbook for investment firms to use when writing mutual fund disclosures. According to the SEC handbook, investors need disclosures that clearly communicate key information about their financial products so that they can make informed decisions about their investments. SEC requires investment firms to use “plain English” to communicate complex information in a clear and logical manner so that investors have the best possible chance of understanding the information. A Plain English Handbook presents recommendations for both the effective visual presentation and readability of information in disclosure documents. For example, the handbook directs firms to highlight information that is important to investors, presenting the “big picture” before the details. 
Also, the handbook recommends tailoring disclosures to the financial sophistication of the user by avoiding legal and financial jargon, long sentences, and vague “boilerplate” explanations. Furthermore, it states that the design and layout of the document should be visually appealing, and the document should be easy to read. According to SEC, it developed these recommendations because investor prospectuses were full of complex, legalistic language that only financial and legal experts could understand. Because full and fair disclosures are the basis for investor protection under federal securities laws, SEC reasoned that investors would not receive that basic protection if a prospectus failed to provide information clearly. To see how lenders implemented Regulation Z requirements for AMPs and the extent to which they discussed AMP risks and loan terms, we reviewed eight program-specific disclosures for three interest-only ARMs and five payment-option ARMs, as well as transaction-specific TILA disclosures associated with four of them. Six federally regulated lenders, representing over 25 percent of the interest-only and payment-option ARMs produced in 2005, provided these disclosures to borrowers between 2004 and 2006. We found that the program-specific disclosures, while addressing current Regulation Z requirements, did not always provide full and clear explanations of the potential for payment shock or negative amortization associated with AMPs. Furthermore, in developing these program-specific disclosures, lenders did not always adhere to “plain English” practices for designing disclosures that are readable and visually effective, thus potentially reducing their effectiveness. Finally, we found that Regulation Z does not require lenders to completely disclose important loan information on the transaction-specific TILA disclosures, and, in most cases, lenders did not go beyond these minimum requirements when developing TILA disclosures for AMP borrowers. 
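A rough sense of the complexity-of-language findings summarized above can be conveyed with a standard readability formula. The Python sketch below uses a simple vowel-group syllable heuristic feeding the Flesch-Kincaid grade-level formula; the sample sentences are hypothetical illustrations, and this is not the method used for the analysis described in appendix II.

```python
import re

def syllables(word):
    """Rough syllable count: number of contiguous vowel groups."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(len(groups), 1)

def fk_grade(text):
    """Flesch-Kincaid grade level of a passage of text."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z]+", text)
    syls = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syls / len(words) - 15.59

# Hypothetical examples of plain versus dense disclosure language.
simple = "Your payment can go up. Your loan balance can grow."
dense = ("Notwithstanding the foregoing, the obligor may experience "
         "negative amortization attributable to insufficient periodic "
         "remittances relative to accrued interest.")

print(round(fk_grade(simple), 1))  # roughly early-grade-school level
print(round(fk_grade(dense), 1))   # well beyond a typical adult reading level
```

The gap between the two scores illustrates why plain English guidance emphasizes short sentences and everyday words.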
While addressing current Regulation Z requirements, the program-specific disclosures for the eight adjustable-rate AMPs we reviewed did not consistently provide clear and full explanations of payment shock and negative amortization as they related to AMPs. For example, in describing how monthly payments could change, two of the disclosures we reviewed closely followed the “boilerplate” language of the model disclosure form, which included a statement that monthly payments could “increase or decrease annually” based on changes to the interest rate, as illustrated in figure 3. In contrast, another disclosure more fully described the potential for payment shock: “As with all Adjustable Rate Mortgage (ARM) loans, your interest rate can increase or decrease. In the case of a , the monthly payment can increase substantially after the first 60 months or if the loan balance rises to 110 percent of the original amount borrowed, and this creates the potential for payment shock. Payment shock means that the increase in the payment is so significant that it can affect your monthly cash flow.” In reviewing the five payment-option ARM disclosures, we also found that they did not always clearly describe negative amortization and its risks for the borrower. As required by Regulation Z, all of the disclosures explained that the product allowed for negative amortization and described how it could occur. However, the disclosures we reviewed did not always clearly or completely explain the harmful effects that could result from negative amortization. In the example above, where the disclosure did link an increased loan balance with payment shock, the effectiveness of the statement is blunted because it does not tell the borrower early on how the loan balance could rise. 
Instead, in a separate paragraph under the relatively nondescript heading, “More Information About Payment Choices,” the lender tells the borrower that the “minimum payment probably will not be sufficient to cover the interest due each month.” Another disclosure described negative amortization this way: “If your monthly payment is not sufficient to pay monthly interest, you may take advantage of the negative amortization feature by letting the interest rate defer and become part of the principle balance to be paid by future monthly payments, or you may also choose to limit any negative amortization by increasing the amount of your monthly payment or by paying any deferred interest in a lump sum at any time.” In addition, three of the five payment-option ARM disclosures did not explain how soon the negative amortization cap could be reached in a rising interest rate environment and trigger an early recast. Without this information, borrowers who considered purchasing a typical 5-year payment-option ARM for its flexibility might not realize that their payment-option period could expire before the end of the first 5 years, thus recasting the loan and increasing their monthly payments. Although the potential for payment shock and negative amortization are the most significant risks of an interest-only or payment-option ARM, the program-specific disclosures we reviewed generally did not prominently feature this key information. Instead, in keeping with the layout suggested by the model disclosure form, most of the disclosures we reviewed first provided lengthy discussions of the borrower’s interest rate and monthly payment and the rules related to interest rate and payment changes before describing how much monthly payments could change for the borrower. One disclosure did use the heading, “Worst Case Example,” to highlight the potential for payment shock for the borrower. However, this information could be hard to find because it is located on the third and fourth pages of an eight-page disclosure. 
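The mechanics described above can be sketched numerically. The Python example below is purely illustrative (the loan amount, interest rate, and minimum payment are hypothetical assumptions, not figures from the disclosures we reviewed): it shows how minimum payments that fall short of monthly interest push the balance toward a 110 percent negative amortization cap, triggering a recast to substantially higher fully amortizing payments before the end of the 5-year payment-option period.

```python
def simulate_min_payments(balance, annual_rate, min_payment,
                          cap_ratio=1.10, option_months=60):
    """Apply minimum payments until the negative amortization cap is hit
    or the scheduled recast arrives, whichever comes first; return the
    months elapsed and the balance at that point (simplified model)."""
    original = balance
    months = 0
    while balance < original * cap_ratio and months < option_months:
        interest = balance * annual_rate / 12
        balance += max(interest - min_payment, 0.0)  # deferred interest
        months += 1
    return months, balance

def amortizing_payment(balance, annual_rate, months):
    """Fully amortizing payment over the remaining term."""
    i = annual_rate / 12
    return balance * i / (1 - (1 + i) ** -months)

# Hypothetical loan: $200,000 at a 7% fully indexed rate with a $700
# minimum payment on a 30-year payment-option ARM.
months, balance = simulate_min_payments(
    balance=200_000, annual_rate=0.07, min_payment=700)
recast_payment = amortizing_payment(balance, 0.07, 360 - months)

print(months)                 # months until the 110% cap forces an early recast
print(round(balance))         # balance at recast exceeds the amount borrowed
print(round(recast_payment))  # versus the $700 minimum: payment shock
```

Under these assumed figures the cap is reached well before month 60, and the fully amortizing payment at recast is more than double the minimum payment the borrower had been making.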
Furthermore, the program-specific disclosures generally did not conform to plain English principles for readability or design in several key areas. In particular, we found that these disclosures were generally written with a complexity of language too high for many adults to understand. Also, most of the disclosures used small, hard-to-read typeface, which, when combined with an ineffective use of white space and headings, made them even more difficult to read and hindered identification of important information. Appendix II provides additional information on the results of our analysis. Regulation Z does not require lenders to completely disclose important AMP loan information on the transaction-specific TILA disclosures, including the interest-rate assumptions underlying the payment schedule, the amount of deferred interest that can accrue, and the amount and duration of any prepayment penalty. In most cases, lenders did not go beyond minimum requirements when developing transaction-specific disclosures for AMP borrowers. First, when the mortgage product features an adjustable rate, Regulation Z requires lenders to (1) include a payment schedule and (2) assume that no changes occur in the underlying index over the life of the loan. However, it does not require the disclosures to indicate this assumption, and the four transaction-specific disclosures we reviewed did not include this information. Regulation Z only requires lenders to remind borrowers in the transaction-specific disclosure that the loan has an adjustable rate and refer them to previously provided adjustable-rate disclosures (see fig. 4); therefore, borrowers might not understand that the payment schedule is not representative of their payments in a changing interest rate environment. Figure 4 shows the payment schedule for a 5-year payment-option ARM originated in 2005. 
The first 5 years show the minimum monthly payments increasing to reflect the difference between the teaser rate and the initial fully-indexed interest rate, but the amount of the increase is constrained each year by the payment reset cap in effect for the loan. The loan recasts in the 6th year to fully amortizing payments, producing a substantial payment increase. However, this increase could be considerably more if the fully-indexed interest rate were to rise during the first 5 years of the loan. Second, although negative amortization increases the risk of payment shock for the payment-option ARM borrower, Regulation Z does not require lenders to disclose the amount of deferred interest that would accrue each year as a result of making minimum payments. None of the lenders whose transaction-specific disclosures for payment-option ARMs we reviewed elected to include this information. Without it, borrowers would not be able to see how choosing the minimum payment amount could increase the outstanding loan balance from year to year. We reviewed two loan payment coupons that lenders provide borrowers on a monthly basis to see if they provided the borrower with information on negative amortization. Although they included information showing the increased loan balance that resulted from making the minimum monthly payment, borrowers would receive these coupons only once they started making payments on the loan. Finally, Regulation Z requires lenders to disclose whether the loan contains any prepayment penalties, but the regulation does not require the lender to provide any details on this penalty on the transaction-specific disclosure. Three of the four disclosures used two checkboxes to indicate whether borrowers “may” or “will not” be subject to a prepayment penalty if they paid off the mortgage before the end of the term, but did not disclose any additional information, such as the amount of the prepayment penalty (see fig. 4). One disclosure provided information on the length of the penalty period. 
Without clear prepayment information, borrowers may not understand how expensive it could be to refinance the mortgage if they found their monthly payments were rising and becoming unaffordable. According to federal banking regulators, borrowers who do not understand their AMP may not anticipate the substantial increase in monthly payments or loan balance that could occur, and would be at a higher risk of experiencing financial hardship or even default. One mortgage industry trade association told us that it is in the best interest of lenders and brokers to provide adequate disclosures to their customers so that they will be satisfied with their loan and consider the lender for future business or refer others to them. Officials from one federal banking regulator said that revising Regulation Z requirements so that lender disclosures more clearly and comprehensively explain the key terms and risks of AMPs would be one of several steps needed to increase borrower understanding of these more complex mortgage products. Federal Reserve officials said that there is a trade-off between the goals of clarity and comprehensiveness in mortgage disclosures. In particular, they said that there is a desire to provide information that is both accurate and comprehensive in order to mitigate legal risks, but that this might also result in disclosures that have too much information and, therefore, are not clear or useful to consumers. According to these officials, this highlights the need for using consumer testing in designing model disclosures to determine (1) what information consumers need, (2) when they need it, and (3) which format and language will most effectively convey the information so that it is readily understandable. In conducting the review of Regulation Z rules for mortgage disclosures, they said that they plan to use extensive consumer testing and will also use design consultants in developing model disclosure forms. 
In addition, Federal Reserve officials and other industry participants said that the benefits of amending federally required disclosures to improve their content, usability, and readability might not be realized if revisions were not part of a broader effort to simplify and clarify mortgage disclosures. According to a 2000 report by the Department of the Treasury and the Department of Housing and Urban Development, federally required mortgage disclosures account for only 3 to 5 forms in a process that can generate up to 50 mortgage disclosure documents, most of which are required by the lender or state law. According to federal and state regulatory officials and industry representatives, existing mortgage disclosures are too voluminous and confusing to clearly convey to borrowers the essential terms and conditions of their mortgages, and often are provided too late in the loan process for borrowers to sort through and read. Officials from one federal banking regulator noted that disclosures often are given when borrowers have committed money to apply for a loan, thereby making it less likely that the borrowers would back out even if they did not understand the terms of the loan. Federal banking regulators have responded, collectively and individually, to concerns about the risks of AMP lending. In December 2005, regulators collectively issued draft interagency guidance for federally regulated lenders that suggests tightening underwriting for AMP loans, developing policies for risk management of AMP lending, and improving consumer understanding of these products. For instance, the draft guidance states that lenders should provide clear and balanced information on both the benefits and risks of AMPs to consumers, including payment shock and negative amortization. 
In comments to the regulators, some industry groups said the draft guidance would put federally regulated lenders at a disadvantage, while some consumer advocates questioned whether it would protect consumers because it did not apply to all lenders or require revised disclosures. Federal regulatory officials discussed AMP lending in a variety of public and industry forums, widely publicizing their concerns and recommendations. In addition, some regulators individually increased their monitoring of AMP lending, taking such actions as issuing new guidance to examiners and developing new review programs. Draft interagency guidance, which federal banking regulators released in December 2005, responds to their concern that banks may face heightened risks as a result of AMP lending and that borrowers may not fully understand the terms and risks of these products. Federal regulatory officials noted that the draft guidance did not seek to limit the availability of AMPs, but instead sought to ensure that they were properly underwritten and disclosed. In addition, they said the draft guidance reflects an approach to supervision that seeks to help banks identify emerging and growing risks as early as possible, a process that encourages banks to develop advanced tools and techniques to manage those risks, for their own account and for their customers. Accordingly, the draft guidance recommends that federally regulated financial institutions ensure that (1) loan terms and underwriting standards are consistent with prudent lending practices, including consideration of a borrower’s repayment capacity; (2) risk management policies and procedures appropriately mitigate any risk exposures created by these loans; and (3) consumers are provided with balanced information on loan products before they make a mortgage product choice. 
To address AMP underwriting practices, the draft guidance states that lenders should consider the potential impact of payment shock on the borrower’s capacity to repay the loan. In particular, lenders should qualify borrowers on the basis of whether they can make fully amortizing monthly payments determined by the fully-indexed interest rate, and not on their ability to make only interest-only payments or minimum payments determined from lower promotional interest rates. The draft guidance also notes increased risk to lenders associated with combining AMPs with risk-layering features, such as reduced documentation or the use of piggyback loans. In such cases, the draft guidance recommends that lenders look for offsetting factors, such as higher credit scores or lower LTV ratios, to mitigate the additional risk. Furthermore, the draft guidance recommends that lenders avoid using loan terms and underwriting practices that may cause borrowers to rely on the eventual sale or refinancing of their mortgages once full amortization begins. To manage risk associated with AMP lending, the draft guidance recommends that lenders develop written policies and procedures that describe AMP portfolio limits, mortgage sales and securitization practices, and risk-management expectations. The policies and procedures also should establish performance measures and management reporting systems that provide early warning of portfolio deterioration and increased risk. The draft guidance also recommends policies and procedures that require capital levels that adequately reflect loan portfolio composition and credit quality, and that allow for the effect of stressed economic conditions. 
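The qualification point above can be made concrete with a small sketch. All figures in the Python example below are hypothetical assumptions chosen for illustration; the draft guidance itself prescribes no specific numbers.

```python
# Hypothetical illustration of the draft guidance's underwriting
# recommendation: qualify the borrower at the fully amortizing payment
# computed from the fully-indexed rate, not at a teaser-rate payment.

def amortizing_payment(balance, annual_rate, months):
    """Standard fixed-payment amortization formula."""
    i = annual_rate / 12
    return balance * i / (1 - (1 + i) ** -months)

loan = 200_000                                            # hypothetical loan
teaser_payment = amortizing_payment(loan, 0.0125, 360)    # 1.25% teaser rate
qualifying_payment = amortizing_payment(loan, 0.07, 360)  # fully-indexed rate

monthly_income = 5_000  # hypothetical gross monthly income
print(f"teaser-based debt ratio:     {teaser_payment / monthly_income:.0%}")
print(f"qualifying-based debt ratio: {qualifying_payment / monthly_income:.0%}")
```

Under these assumptions the fully amortizing, fully-indexed payment is roughly double the teaser-rate payment, which is why qualifying a borrower at the teaser rate can dramatically overstate repayment capacity.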
To help improve consumer understanding of AMPs, the draft guidance recommends that lender communications with consumers, including advertisements, promotional materials, and monthly statements, be consistent with actual product terms and payment structures and provide consumers with clear and balanced information about AMP benefits and risks. Furthermore, the draft guidance recommends that institutions avoid advertisement practices that obscure significant risks to the consumer. For example, when institutions emphasize the AMP benefit of low initial payments, they also should disclose that borrowers who make these payments may eventually face increased loan balances and higher monthly payments when their loans recast. The draft guidance also recommends that lenders fully disclose AMP terms and features to potential borrowers in their promotional materials, and that lenders not wait until the time of loan application or closing, when they must provide written disclosures that fulfill Regulation Z requirements. Rather, the draft guidance states that institutions should offer full and fair descriptions of their products when consumers are shopping for a mortgage, so that consumers have the appropriate information early enough to inform their decision making. In doing so, the draft guidance urges lenders to employ a user-friendly and readily navigable design for presenting mortgage information and to use plain language with concrete examples of available loan products. Further, the draft guidance states that financial institutions should provide consumers with information about mortgage prepayment penalties or extra costs, if any, associated with AMP loans. Finally, after loan closing, financial institutions should provide monthly billing statement information that explains payment options and the impact of consumers’ payment choices. 
According to the draft guidance, such communication should help minimize potential consumer confusion and complaints, foster good customer relations, and reduce legal and other risks to lending institutions. Federal regulatory officials said they developed the draft guidance to clarify how institutions can offer AMPs in a safe and sound manner and clearly disclose the potential AMP risks to borrowers. These officials told us they will request remedial action from institutions that do not adequately measure, monitor, and control risk exposures in their loan portfolios. In response to the draft interagency guidance, federal regulators received comment letters from various groups, such as financial institutions, mortgage brokers, and consumer advocates, and began reviewing the comments to develop final guidance. For example, several financial institutions, such as banks, and their industry associations opposed the draft guidance, suggesting that it put federally regulated institutions at a competitive disadvantage because its recommendations would not apply to lenders and brokers that were not federally regulated. Some lenders suggested implementing these changes through Regulation Z so that they apply to the entire industry, and not just to regulated institutions. Organizations such as the Conference of State Bank Supervisors (CSBS) and the American Association of Residential Mortgage Regulators (AARMR) also noted the possibility of competitive disadvantage and have responded by developing guidance for state-licensed mortgage lenders and brokers who offer AMPs but were not covered by the draft federal guidance issued in December 2005. Other financial institutions said that the recommendations regarding borrower qualification and general underwriting practices were too prescriptive and would have the effect of reducing mortgage choice for consumers. 
Consumer advocates supported the need for additional consumer protections relating to AMP products, but several questioned whether the draft guidance would add needed protections. They also contended, as did lenders, that since the draft guidance applies only to federally regulated institutions, independent lenders and brokers would not be subject to recommendations aimed at informing and protecting consumers. One advocacy organization said that the proposed guidance is only a recommendation by the agencies regulating some lenders, and that failure to follow the guidance neither leads to any enforceable sanctions nor provides a means of using the guidance to obtain relief for a harmed consumer. Although not in a comment letter, another advocate echoed these concerns by saying the draft guidance would not expand consumer protections because it neither requires revisions to mortgage disclosures, nor allows consumers to enforce the application of guidance standards to individual lenders. Although the draft interagency guidance has not been finalized, officials from the Federal Reserve, OCC, OTS, FDIC, and NCUA have reinforced messages regarding AMP risks and appropriate lending practices by publicizing their concerns in speeches, at conferences, and in the media. According to an official at the Federal Reserve, federal regulatory officials who publicized their concerns in these outlets raised awareness of AMP risks and reinforced the message that financial institutions need to manage the risks of these products and that the general public needs to understand them. In addition to drafting interagency guidance and publicizing AMP concerns, officials from each of the federal banking regulators told us they have responded to AMP lending with intensified reviews, monitoring, and other actions. For instance, FDIC developed a review program to identify high-risk lending areas, adjust supervision according to product risk levels, and evaluate risk management and underwriting approaches. 
OTS staff performed a review of its 68 most active AMP lenders to assess and respond to potential AMP lending risks, while the Federal Reserve and OCC have begun to conduct reviews of their lenders’ AMP promotional and marketing materials to assess how well they inform consumers. As discussed earlier, the Federal Reserve has taken several steps to address consumer protection issues associated with AMPs, including initiating a review of Regulation Z that includes reviewing the disclosures required for all mortgage loans and holding public hearings that in part explored the adequacy and effectiveness of AMP disclosures. In addition, NCUA officials told us they informally contacted the largest credit unions under their supervision to assess the extent of AMP lending at these institutions. FTC also directed some attention to consumer protection issues related to AMPs. In 2004, it charged a California mortgage broker with misleading AMP consumers by making advertisements that contained allegedly false promises of fixed interest rates and fixed payments for variable-rate payment-option mortgages. As a result of FTC’s actions, a U.S. district court judge issued a preliminary injunction barring the broker’s allegedly illegal business practices. More recently, in May 2006, FTC sponsored a public workshop that explored consumer protection issues arising from AMP growth in the mortgage marketplace. FTC, along with other federal banking regulators and departments, also helped create a consumer brochure that outlines basic mortgage information to help consumers shop for, compare, and negotiate mortgages. Along with federal regulatory officials, state banking and financial regulatory officials we contacted expressed concerns about AMP lending, and some have incorporated AMP issues into their licensing and examinations of independent lenders and brokers and worked to improve consumer protection. 
While the states we reviewed had not changed established licensing and examination procedures to oversee AMP lending, some currently have a greater focus on and awareness of AMP risks. Two states also had collected AMP-specific data to identify areas of concern, and one state had proposed changing a consumer protection law to cover AMP products. Most regulatory officials from our sample of eight states focused their concerns about AMP lending on the potential negative effects on consumers. For example, many officials questioned (1) how well consumers understood complex AMP loans and, therefore, how susceptible consumers with AMPs might be to payment shock and (2) how likely consumers would then be to experience financial difficulties in meeting their mortgage payments. Some state officials also said that increased AMP borrowing heightened their concern about mortgage default and foreclosure, and some officials expressed concern about unscrupulous lender or broker operations and the extent to which these entities met state licensing and operations requirements. In addition to these general consumer protection concerns, some state officials spoke about state-specific issues. For example, Ohio officials put AMP concerns in the context of larger economic issues and said AMP mortgages were part of wider economic challenges facing the state, including an already-high rate of mortgage foreclosures and the loss of manufacturing jobs that hurt both Ohio’s consumers and the overall economy. Officials from another state, Nevada, said they worried that lenders and brokers sometimes took advantage of senior citizens by offering them AMP loans that they either did not need or could not afford. State banking and financial regulatory officials expressed concerns about the extent to which consumers understood AMPs and the potential for those who used them to experience monthly mortgage payment increases. 
Some state officials said that current federal disclosures were complicated, difficult to comprehend, and often did not provide information that could help consumers. However, these officials thought that adding a state-developed disclosure to the already voluminous mortgage process would add to the confusion and paperwork burden. Officials from most states have not created their own mortgage disclosures. State banking and financial regulators from our sample generally responded to concerns about AMP lending by increasing their attention to AMP issues through their existing regulatory structure of lender and broker licensing and examination, but some states had taken additional approaches. Most of the state officials from our sample said they primarily used their own state laws and regulations to license mortgage lenders and brokers and to ensure that these entities met minimum experience and operations standards. While these were not AMP-specific actions, several state officials told us these actions helped ensure that lenders had the proper experience and other qualifications to operate within the mortgage industry. Some officials told us that these requirements also helped ensure that those with criminal records or histories of unscrupulous mortgage behavior would not continue to harm consumers. Some state officials said that they were particularly sensitive to AMP lenders’ records of behavior because of the higher risks these products entailed for consumers. However, Alaska provided an exception. Alaska had not specifically responded to AMP lending, and Alaska officials noted that the state does not have statutes or regulations that govern mortgage lending, nor are mortgage lenders or brokers required to be licensed to make loans. 
Many of the state banking and financial regulatory officials we contacted also told us that they periodically examine AMP lenders and brokers for compliance with state licensing, mortgage lending, and general consumer protection laws, including applicable fair advertising requirements. Because state officials perform examinations for all licensed lenders and brokers, these regulatory processes also are not AMP-specific. However, some state officials said they were particularly aware of AMP risks to consumers and had begun to pay more attention to potential lender, broker, and consumer issues during their oversight reviews. For example, because AMP lending heightens potential risks for consumers, several state officials said they had taken extra care during their licensing and examination reviews to review lender and broker qualifications and loan files. A few states had worked outside of the existing licensing and examination framework to identify AMP issues and protect consumers. Officials from several states said that because they did not collect data on AMP loans and borrowers, they did not fully understand the level and types of AMP lending in their states. However, two states from our sample had begun to gather AMP data to improve their information on AMP lending. New Jersey conducted a mortgage lending survey among its state-chartered banks that specifically collected data on interest-only and payment-option mortgages, while Nevada implemented annual reporting requirements for lenders and brokers on the types of loans they originate. New Jersey and Nevada officials told us that these efforts would provide an overview of AMP lending in each state and would serve to help identify emerging AMP issues. Other states reacted by focusing on consumer protection or using guidance for independent lenders and mortgage brokers. Ohio addressed mortgage issues, including AMP concerns, by working to improve its consumer protection law. 
This law originally did not cover mortgage lenders and brokers, but was amended to include protections found in other states. As of June 2006, officials had drafted and passed legislation to expand the law’s provisions to cover these entities and require lenders and brokers to meet fiduciary standards so that the loans they offer serve the interests of potential borrowers. Officials from another state in our sample, New York, said they planned to use guidance developed by the Conference of State Bank Supervisors and the American Association of Residential Mortgage Regulators to address AMP lending concerns at the state level. In addition, they said that they were revising their banking examination manual to address AMP concerns, reflect recommendations made in that guidance, and provide examiners with areas of concern on which to focus during their reviews. Historically, AMPs were offered to higher-income, financially sophisticated borrowers who wanted to minimize their mortgage payments to better manage their cash flows. In recent years, federally and state-regulated lenders and brokers widely marketed AMPs by touting their low initial payments and flexible payment options, which helped borrowers purchase homes for which they might not have qualified with a conventional fixed-rate mortgage, particularly in some high-priced markets. However, the growing use of these products, especially by less informed, less affluent, and less creditworthy borrowers, raises concerns about borrowers’ ability to sustain their monthly mortgage payments and, ultimately, to keep their homes. When these mortgages recast and payments increase, borrowers who cannot refinance their mortgages or sell their homes could face substantially higher payments. If these borrowers cannot make these payments, they could face financial distress, delinquency, and possibly foreclosure. 
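The payment shock described above follows from straightforward loan arithmetic. The sketch below uses purely hypothetical terms (a $300,000 interest-only ARM whose rate resets from 6 to 7 percent at recast; none of these figures come from this report) to show how large the jump can be even before any deferred interest is added to the balance:

```python
def monthly_payment(principal, annual_rate, months):
    """Fully amortizing payment: P * r / (1 - (1 + r)**-n)."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

# Hypothetical 30-year ARM: 5-year interest-only period at 6 percent,
# then fully amortizing over the remaining 25 years at 7 percent
# (assuming rates have risen by the time the loan recasts).
loan = 300_000
io_payment = loan * 0.06 / 12                      # interest-only payment
recast_payment = monthly_payment(loan, 0.07, 25 * 12)

print(f"Interest-only payment: ${io_payment:,.2f}")
print(f"Payment after recast:  ${recast_payment:,.2f}")
print(f"Increase: {recast_payment / io_payment - 1:.0%}")
```

In this hypothetical, the monthly payment rises from $1,500 to roughly $2,120, an increase of about 40 percent; a higher reset rate, or a balance that has grown through negative amortization, would make the shock correspondingly larger.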
Nevertheless, it is too soon to tell the extent to which payment shock will produce financial distress for borrowers and induce defaults that would affect banks that hold AMPs in their portfolios. Federal banking regulators have taken steps to address the potential risks of AMPs to lenders and borrowers. They have drafted guidance for lenders to strengthen underwriting standards and improve disclosure of information to borrowers. Because the key features and terms of AMPs may continue to evolve, it is essential that regulators respond to AMP lending growth in ways that balance market innovation and profitability for lenders with timely information and mortgage choices for borrowers. Furthermore, with the continued popularity of AMPs, it is important that the federal banking regulators finalize the draft guidance in a timely manner. The popularity and complexity of AMPs and lenders’ marketing of these products highlight the importance of mortgage disclosures in helping borrowers make informed mortgage decisions. As lenders and brokers increasingly market AMPs to a wider spectrum of borrowers, more borrowers may struggle to fully understand the terms and risks of these products. While Regulation Z requires that lenders provide certain information on ARMs, lenders currently are not required to tailor mortgage disclosures to communicate to borrowers the potential for payment shock and negative amortization specific to AMPs. In particular, although they may be in compliance with Regulation Z requirements, the disclosures we reviewed did not provide borrowers with easily comprehensible information on the key features and risks of their mortgage products. Furthermore, the readability and usability of these documents were limited by the use of language that was too complex for many adults and document designs that made the text difficult to read and understand. 
As such, these documents were not consistent with leading federal practices for financial-product disclosures, which call for investment firms to provide investors with important product information clearly in order to further their informed decision making. Although the draft interagency guidance by federal banking regulators addressed some of the concerns with consumer disclosures, the draft guidance focuses on promotional materials, not the written disclosures required by Regulation Z at loan application and closing. In addition, the guidance does not apply to nonbank lenders, whereas Regulation Z applies to the entire industry. We recognize that the Federal Reserve has begun to review disclosure requirements for all mortgage loans, including AMPs, under Regulation Z and has used the recent HOEPA hearings to gather public testimony on the effectiveness of current AMP disclosures. Furthermore, we agree with regulators’ and industry participants’ views that revising Regulation Z to make federally required mortgage disclosures more useful for borrowers who use complex products like AMPs is a good first step toward addressing a mortgage disclosure process that many view as overwhelming and confusing for the average borrower. Without amending Regulation Z to require lenders to clearly and comprehensively explain the terms and risks of AMPs, borrowers might not be able to fully exercise informed judgment on what is likely a significant investment decision. We commend the Federal Reserve’s efforts to review its existing disclosure requirements and to focus the recent HOEPA hearings in part on AMPs. 
As the Federal Reserve begins to review and revise Regulation Z as it relates to disclosure requirements for mortgage loans, we recommend that the Board of Governors of the Federal Reserve System consider improving the clarity and comprehensiveness of AMP disclosures by requiring language that explains key features and potential risks specific to AMPs, and effective format and visual presentation, following criteria such as those suggested by SEC’s A Plain English Handbook. We requested comments on a draft of this report from the Federal Reserve, FDIC, NCUA, OCC, and OTS. We also provided a draft to FTC and selected sections of the report to the relevant state regulators for their review. The Federal Reserve provided written comments on a draft of this report, which have been reprinted in appendix III. The Federal Reserve noted that it has already begun a comprehensive review of Regulation Z, including its requirements for mortgage disclosures. The Federal Reserve reiterated that one of the purposes of its recent public hearings on home equity lending was to discuss AMPs, and in particular, whether consumers receive adequate information about these products. It intends to use this information in developing plans and recommendations for revising Regulation Z within the existing framework of TILA. The Federal Reserve stressed that any new disclosure requirements relating to features and risks of today’s loan products must be sufficiently flexible to allow creditors to provide meaningful disclosures even as those products develop over time. In response to our recommendation to consider improving the clarity and comprehensiveness of AMP disclosures, the Federal Reserve noted that it plans to conduct consumer testing to determine what information is important to consumers, what language and formats work best, and how disclosures can be revised to reduce complexity and information overload. 
To that end, the Federal Reserve said that it will use design consultants to assist in developing model disclosures that are most likely to be effective in communicating information to consumers. In addition, the Federal Reserve provided examples of other efforts it is currently engaged in to enhance the information consumers receive about the features and risks associated with AMPs, which we have previously discussed in the report. FDIC, FTC, NCUA, OCC, and OTS did not provide written comments. Finally, the Federal Reserve, FDIC, FTC, and OCC provided technical comments, which we have incorporated into the final report. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this report. At that time, we will send copies of this report to the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing, and Urban Affairs and the Ranking Minority Member of its Subcommittee on Housing and Transportation; the Chairman and Ranking Minority Member of the House Committee on Financial Services; and other interested congressional committees. We will also send copies to the Chairman, Federal Deposit Insurance Corporation; the Chairman, Board of Governors of the Federal Reserve System; the Chairman, National Credit Union Administration; the Comptroller of the Currency; and the Director, Office of Thrift Supervision. We will also make copies available to others upon request. The report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or williamso@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. 
To identify recent trends in the market for alternative mortgage products (AMPs), we gathered information from federal banking regulators and the residential mortgage lending industry on AMP product features, customer base, and originators as well as on reasons for the recent growth of these products. To determine the potential risks of AMPs for lenders and borrowers, we analyzed the changes, especially increases, in future monthly payments that can occur with AMPs. We analyzed these data using several scenarios, including rising interest rates and negative amortization. We obtained data from a private investment firm on the underwriting characteristics of recent interest-only and payment-option adjustable rate mortgage (ARM) issuance and obtained information on the securitization of AMPs from federal banking regulators, government-sponsored enterprises, and the secondary mortgage market. We conducted a limited analysis to assess the reliability of the investment firm’s data. To do so, we interviewed a firm representative and an official from a federal banking regulator (federal regulatory official) to determine how the data were collected and verified and to identify potential data limitations. On the basis of this analysis, we concluded that the firm’s data were sufficiently reliable for our purposes. Finally, we interviewed federal regulatory officials and representatives from the residential mortgage lending industry and reviewed studies on the risks of these mortgages compared with conventional fixed-rate mortgages. To determine the extent to which mortgage disclosures present the risks of AMPs, we reviewed federal laws and regulations governing the content of required mortgage disclosures. We obtained examples of AMP-related advertising and mortgage disclosures, reviewed studies on borrowers’ understanding of adjustable rate products, and conducted interviews with federal regulatory officials and industry participants. 
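One of the scenarios mentioned above, negative amortization, can be sketched in a few lines. The figures here are hypothetical, chosen only to show the mechanics of a payment-option ARM whose minimum payment falls short of the interest due:

```python
def balance_after(principal, annual_rate, payment, months):
    """Track a loan balance month by month; any interest shortfall
    (interest due minus payment made) is added to the balance."""
    balance = principal
    for _ in range(months):
        interest = balance * annual_rate / 12
        balance += interest - payment
    return balance

# Hypothetical payment-option ARM: $300,000 at 6 percent, with the
# borrower electing a $1,000 minimum payment while the interest due
# in the first month is $1,500.
final = balance_after(300_000, 0.06, 1_000, 60)
print(f"Balance after 5 years: ${final:,.2f} "
      f"(grew by ${final - 300_000:,.2f})")
```

In this scenario the balance grows by roughly $35,000 over five years, so the fully amortizing payment that eventually replaces the minimum payment is computed on a larger principal than the borrower originally financed.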
To obtain state regulators’ views on AMP mortgage disclosures, we also selected a sample of eight states and reviewed laws and regulations related to disclosure requirements. We obtained examples of AMP advertisements, disclosures, and AMP-related complaint information and interviewed state officials. We generally selected states that (1) exhibited high volumes of AMP lending, (2) provided geographic diversity of state locations, and (3) provided diverse regulatory records in responding to the challenges of a growing AMP market. Because state-level data on AMP lending volumes were not available, we determined which states had high volumes of AMP lending by using data obtained from a Federal Reserve Bank on states that had high levels of ARM growth and house price appreciation in 2005, factors that the Federal Reserve Bank’s analysis suggested corresponded with high volumes of AMP lending. Furthermore, we reviewed regulatory data showing that the largest AMP lenders conducted most of their lending in these states. We selected eight states and conducted in-person interviews with officials from California, New Jersey, New York, and Ohio. We conducted telephone interviews with officials from the remainder of the sample states (Alaska, Florida, Nevada, and North Carolina). We also analyzed for content, readability, and usability a selected sample of eight written disclosures that six federally regulated AMP lenders provided to borrowers between 2004 and 2006. The sample included program-specific disclosures for three interest-only ARMs and for five payment-option ARMs, as well as transaction-specific disclosures associated with four of them. The six lenders represented over 25 percent of the interest-only and payment-option ARMs produced in the first 9 months of 2005. First, we assessed the extent to which the disclosures described the key risks and loan features of interest-only and payment-option ARMs. 
Second, we conducted a readability assessment of these disclosures using computer-facilitated formulas to predict the grade level required to understand the materials. Readability formulas measure the elements of writing that can be subjected to mathematical calculation, such as the average number of syllables in words or number of words in sentences in the text. We applied the following commercially available formulas to the documents: Flesch Grade Level, Frequency of Gobbledygook (FOG), and Simplified Measure of Gobbledygook (SMOG). Using these formulas, we measured the grade levels at which the disclosure documents were written for selected sections. Third, we conducted an evaluation that assessed how well these AMP disclosures adhered to leading practices in the federal government for usability. We used guidelines presented in the Securities and Exchange Commission’s (SEC) A Plain English Handbook: How to Create Clear SEC Disclosure Documents (1998). SEC publishes the handbook for investment firms to use when writing mutual fund disclosures. The handbook presents criteria for both the effective visual presentation and readability of information in disclosure documents. To obtain information on the federal regulatory response to the risks of AMPs for lenders and borrowers, we reviewed the draft interagency guidance on AMP lending issued in December 2005 by federal banking regulators and interviewed regulatory officials about what actions they could use to enforce guidance principles upon final release of the draft. We also reviewed comments written by industry participants in response to the draft guidance. To review industry comments, we selected 29 of the 97 comment letters that federal regulators received. We selected comment letters that represented a wide range of industry participants, including lenders, brokers, trade organizations, and consumer advocates. 
We analyzed the comment letters for content; sorted them according to whether they raised general comments, institutional safety and soundness issues, consumer protection issues, or other concerns; and summarized the results of the analysis. To obtain information on selected states’ regulatory response to the risks of AMPs for lenders and borrowers, we reviewed current laws and, where applicable, draft legislation from the eight states in our sample and interviewed these states’ banking and mortgage lending officials. We performed our work between September 2005 and September 2006 in accordance with generally accepted government auditing standards. The AMP disclosures that we reviewed did not always conform to key plain English principles for readability or design. We analyzed a selected sample of eight written AMP disclosures to determine the extent to which they adhered to best practices for financial product disclosures. In conducting this assessment, we used three widely used “readability” formulas as well as guidelines from the SEC’s A Plain English Handbook. In particular, the AMP disclosures that we reviewed were written at a level of complexity too high for many adults to understand. Also, most of the disclosures that we reviewed used small typeface, which, when combined with an ineffective use of white space and headings, made them more difficult to read and hindered identification of important information. The AMP disclosures that we reviewed contained content that was written at a level of complexity higher than the level at which many adults in the United States read. To assess the reading level required for AMP disclosures, we applied three widely used “readability” formulas to the sections of the disclosures that discussed how monthly payments could change. These formulas determine the reading level required for written material on the basis of quantitative measures, such as the average number of syllables in words or the number of words in sentences. 
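The kind of calculation these formulas perform can be sketched as follows. This is a simplified illustration, not the commercial tools used in the assessment: the coefficients are the commonly published ones for the Flesch-Kincaid grade level, the FOG index, and the SMOG grade, and the syllable counter is a rough vowel-group heuristic, so scores will differ somewhat from those of production implementations:

```python
import re

def syllables(word):
    """Rough heuristic: count vowel groups, drop a trailing silent 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if n > 1 and word.lower().endswith("e"):
        n -= 1
    return max(n, 1)

def grade_levels(text):
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syls = sum(syllables(w) for w in words)
    poly = sum(1 for w in words if syllables(w) >= 3)  # "complex" words
    ns, nw = len(sents), len(words)
    return {
        # Flesch-Kincaid grade level
        "flesch": 0.39 * nw / ns + 11.8 * syls / nw - 15.59,
        # Gunning FOG index
        "fog": 0.4 * (nw / ns + 100 * poly / nw),
        # SMOG grade
        "smog": 1.043 * (poly * 30 / ns) ** 0.5 + 3.1291,
    }

sample = ("The interest rate on your loan may increase substantially "
          "after the introductory period expires. Negative amortization "
          "occurs when the minimum payment does not cover the interest due.")
print({k: round(v, 1) for k, v in grade_levels(sample).items()})
```

All three formulas reward short sentences and short words; text that scores near grade 11 on these measures, as the disclosures in this review did on average, is likely too complex for readers at a basic literacy level.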
On the basis of our analysis, the disclosures were written at reading levels commensurate with an education level ranging from 9th to 12th grade, with an average near the 11th grade. A nationwide assessment of reading comprehension levels of the U.S. population reported in 2003 that 43 percent of the adult population in the United States reads at a “basic” level or below. While certain complex terms and phrases may be unavoidable in discussing financial material, disclosures that are written at too high a reading level for the majority of the population are likely to fail in clearly communicating important information. To ensure that the disclosures investment firms provide to prospective investors are understandable, the Plain English Handbook recommends that investment firms write their disclosures at a 6th- to 8th-grade reading level. Most of the AMP disclosures used font sizes and typefaces that were difficult to read and could hinder borrowers’ ability to find information. The disclosures extensively used small typeface, when best practices suggest using larger, more legible type. A Plain English Handbook recommends use of a 10-point font for most investment product disclosures and a 12-point font if the target audience is elderly. Most of the disclosures we reviewed used a 9-point font or smaller. Also, more than half of the disclosures used sans serif typeface, which is generally considered more difficult to read at length than its complement, serif typeface. Figure 5 below provides an example of serif and sans serif typefaces. The handbook recommends the use of serif typefaces for general text because the small connective strokes at the beginning and end of each letter help guide the reader’s eye over the text. The handbook recommends using sans serif typeface for short pieces of information, such as headings, or for emphasizing particular information in the document. 
In addition, some lenders’ efforts to use different font types to highlight important information made the text harder to read. Several disclosures emphasized large portions of text in boldface and made repeated use of all capital letters for headings and subheadings. According to the handbook, formatting large blocks of text in capital letters makes the text harder to read because the shapes of the words disappear, forcing the reader to slow down and study each letter. As a result, readers tend to skip sentences that are written entirely in capital letters. The AMP disclosures generally did not make effective use of white space, reducing their usefulness. According to the Plain English Handbook, generous use of white space enhances usability, helps emphasize important points, and lightens the overall look of the document. However, in most of the AMP disclosures, the amount of space between the lines of text, paragraphs, and sections was very tight, which made the text dense and difficult to read. This difficulty was compounded by the use of fully justified text—that is, text where both the left and right edges are even—in half of the disclosure documents. According to the handbook, when text is fully justified, the spacing between words fluctuates from line to line, causing the eye to stop and constantly readjust to the variable spacing on each line. This, coupled with a shortage of white space, made the disclosures we reviewed visually unappealing and difficult to read. The handbook recommends using left-justified, ragged-right text (as this report uses), which research has shown is the easiest text to read. Very little visual weight or emphasis was given to the content of the disclosures other than to distinguish headings from the text of the sections beneath them. 
As a result, it was difficult to readily locate information of interest or to quickly identify the most important information—in this case, what the maximum monthly payment could be for a borrower considering a particular AMP. According to the handbook, a document’s hierarchy shows how its designer organized the information and helps the reader understand the relationship between different levels of information. A typical hierarchy might include several levels of headings, distinguished by varying typefaces. In addition to those named above, Karen Tremba, Assistant Director; Tania Calhoun; Bethany Claus Widick; Stefanie Jonkman; Mark Molino; Robert Pollard; Barbara Roesmann; and Steve Ruszczyk made key contributions to this report.
Alternative mortgage products (AMPs) can make homes more affordable by allowing borrowers to defer repayment of principal or part of the interest for the first few years of the mortgage. Recent growth in AMP lending has heightened the importance of borrowers’ understanding and lenders’ management of AMP risks. This report discusses (1) recent trends in the AMP market, (2) potential AMP risks for borrowers and lenders, (3) the extent to which mortgage disclosures discuss AMP risks, and (4) federal and selected state regulatory responses to AMP risks. To address these objectives, GAO used regulatory and industry data to analyze changes in AMP monthly payments; reviewed available studies; and interviewed relevant federal and state regulators, mortgage industry groups, and consumer groups. From 2003 through 2005, AMP originations, comprising mostly interest-only and payment-option adjustable-rate mortgages, grew from less than 10 percent of residential mortgage originations to about 30 percent. They were highly concentrated on the East and West Coasts, especially in California. Federally and state-regulated banks and independent mortgage lenders and brokers market AMPs, which have been used for years as a financial management tool by wealthy and financially sophisticated borrowers. In recent years, however, AMPs have been marketed as an “affordability” product to allow borrowers to purchase homes they otherwise might not be able to afford with a conventional fixed-rate mortgage. Because AMP borrowers can defer repayment of principal, and sometimes part of the interest, for several years, they may eventually face payment increases large enough to be described as “payment shock.” Mortgage statistics show that lenders offered AMPs to less creditworthy and less wealthy borrowers than in the past. 
Some of these recent borrowers may have more difficulty refinancing or selling their homes to avoid higher monthly payments, particularly if interest rates have risen or if the equity in their homes has fallen because they were making only minimum monthly payments or home values did not increase. As a result, delinquencies and defaults could rise. Officials from the federal banking regulators stated that most banks appeared to be managing their credit risk by diversifying their portfolios or through loan sales or securitizations. However, because the monthly payments for most AMPs originated between 2003 and 2005 have not yet reset to cover both interest and principal, it is too soon to tell to what extent payment shocks would result in increased delinquencies or foreclosures for borrowers and in losses for banks and other lenders. Regulators and others are concerned that borrowers may not be well informed about the risks of AMPs, due to their complexity and because promotional materials from some lenders and brokers do not provide balanced information on AMPs’ benefits and risks. Although lenders and certain brokers are required to provide borrowers with written disclosures at loan application and closing, federal standards on these disclosures do not currently require specific information on AMPs that could better help borrowers understand key terms and risks. In December 2005, federal banking regulators issued draft interagency guidance on AMP lending that discussed prudent underwriting, portfolio and risk management, and consumer disclosure practices. Some lenders commented that the recommendations were too prescriptive and could limit consumers’ choices of mortgages. Consumer advocates expressed concerns about the enforceability of these recommendations because they are presented in guidance and not in regulation. 
State regulators GAO contacted generally relied on their existing regulatory structures of licensing and examining independent mortgage lenders and brokers to oversee AMP lending.
U.S. diplomatic missions have faced numerous attacks in recent years, resulting in legal and policy changes. According to DS, between January 1998 and December 2013, there were 336 attacks against U.S. personnel and facilities. Several of those attacks resulted in the deaths of U.S. personnel, destruction of U.S. facilities, or both, including recent attacks in Benghazi, Libya, in September 2012, and in Ankara, Turkey, and Herat, Afghanistan, in 2013. The Omnibus Diplomatic Security and Antiterrorism Act of 1986 (Pub. L. No. 99-399, codified at 22 U.S.C. § 4801 et seq.) requires that the Secretary of State (in consultation with the heads of other federal agencies) develop and implement policies and programs, including funding levels and standards, to provide for the security of U.S. government diplomatic operations abroad. State’s policies are detailed in the FAM and corresponding FAH; these include the Overseas Security Policy Board (OSPB) standards and the Physical Security Handbook specifications to guide implementation of the standards. In June 1991, State adopted new security-related construction standards, which are included in the FAH and have continued to evolve. Responsibility for diplomatic facility security falls principally on two State bureaus, DS and OBO. DS is responsible for, among other things, establishing and operating security and protective procedures at posts, developing and implementing posts’ physical security programs, and chairing the interagency process that sets security standards. In addition, at posts, DS agents known as Regional Security Officers (RSOs), including Deputy RSOs and Assistant RSOs, are responsible for protection of personnel and property, documenting threats and facility vulnerabilities, and identifying possible mitigation efforts to address those vulnerabilities, among other duties. OBO is responsible for the design, construction, acquisition, maintenance, and sale of U.S. 
government diplomatic property abroad, establishing construction programs—including those for most facility and security-related construction—and providing direction and guidance on construction matters abroad to State regional bureaus and other agencies. State’s overseas posts also play a role in setting post-specific security measures and funding some physical security upgrades, with approval from DS. In addition, M/PRI manages State’s implementation of ARB recommendations and State’s Bureau of Administration coordinates State’s clearance process regarding updates to the FAM and FAH. (See fig. 2 for an organizational chart of the key State offices responsible for physical security.) USAID maintains its own Office of Security, which is responsible for the physical security of its facilities and coordinating with DS. According to OBO, State maintains approximately 1,600 work facilities, which include offices and warehouses, at 275 diplomatic posts—embassies, consulates, and missions—worldwide under chief-of-mission authority. A significant number of State’s embassies and consulates predate the June 1991 construction standards. State constructed approximately 475 of the work facilities, including over 120 new embassy and consulate compounds and annex facilities built to the newer construction standards. In addition, State acquired—purchased or leased—over 1,125 work facilities. According to State officials, State has a limited number of temporary work facilities, mostly in high-risk locations such as Afghanistan. In addition, USAID maintains over 25 independently leased facilities. In fiscal years 2009 through 2014, State allotted about $8.3 billion directly to construction of new secure facilities and physical security upgrades to existing and acquired facilities (see table 1). While DS has a few small programs to provide physical security upgrades to facilities abroad, most of the allotted funds were managed by OBO. 
DS and OBO have detailed the conditions under which each bureau is responsible for funding security construction and upgrades. In general, OBO is responsible for constructing new facilities and funding upgrades to owned facilities and leased office facilities, while DS is responsible for funding physical security upgrades to leased residential facilities. State runs other programs, such as OBO’s major rehabilitation program and DS’s technical field support efforts, which may include physical security upgrades; however, we did not include funding from these sources in table 1. USAID has also allotted about $0.03 billion to directly support physical security upgrades. To manage risk to overseas facilities under chief-of-mission authority, State conducts a range of ongoing activities (see fig. 3), and after the September 2012 attacks, it took additional steps to improve risk management activities. Nonetheless, we found problems with facility categorization and data reliability that may affect State’s ability to accurately track facilities and rank them by the risks they face. State conducts several key activities to manage risk to overseas facilities: OBO tracks facilities in a property inventory database, and OBO and other bureaus rely on the information in this database to inform a number of security-related decisions. DS uses security-related questionnaires completed by officials at each post to assess and determine threat levels at each overseas post. Working through an interagency group, DS establishes security standards for facilities overseas, which vary depending on the threat levels at each post. Guided by the security standards, officials at posts periodically assess facilities to identify security deficiencies or vulnerabilities. DS analyzes information from OBO’s property inventory database, the threat assessments, and the vulnerability assessments to assess the risk faced by overseas facilities. 
DS then ranks facilities by the level of risk each facility faces to help OBO prioritize embassy and consulate construction plans. In addition to these ongoing activities to manage risk, State has taken steps to implement recommendations resulting from several post-Benghazi reports, such as the ARB report. OBO is responsible for maintaining records on all diplomatic residential and work facilities overseas in its property inventory database (hereafter referred to as OBO’s property database). According to OBO officials, OBO and other State bureaus rely on this database for data on over 1,600 work facilities. OBO’s property database includes data for facilities State or USAID owns or leases, including facilities located outside embassy and consulate compounds—such as office spaces and warehouses—and facilities outside both the work and residential categories—such as recreational facilities. OBO’s property database does not include host-government facilities where U.S. agencies may operate, such as laboratories supported by the Centers for Disease Control and Prevention. According to OBO officials, OBO has undertaken several efforts since early 2012 to improve the quality of the information in its property database. For example, OBO hired additional staff to review the reliability of the data in the system, and these staff identified outdated records and missing information. In addition, in response to the Benghazi ARB report, OBO requested that all posts provide (1) a list of all facilities located off compound and (2) the number of desk positions at each facility. OBO intended to use this information to ensure that OBO’s property database contained records on all off-compound work facilities. According to OBO officials, the updated information from posts had been entered into its property database as of spring 2013. 
DS assesses six types of threats at each overseas post by evaluating the post’s security situation and assigning a corresponding threat level, which is used to determine the security standards required for facilities at that post. Published annually by State, the Security Environment Threat List documents each post’s threat levels for six threat categories, including political violence, terrorism, and residential and nonresidential crime. Each post is assigned one of four threat levels for each threat category. The levels are as follows:

- critical: grave impact on American diplomats;
- high: serious impact on American diplomats;
- medium: moderate impact on American diplomats; and
- low: minor impact on American diplomats.

According to DS officials, the bureau develops the Security Environment Threat List threat levels for each post based on questionnaires filled out by post officials, and the final threat ratings are reviewed and finalized through an iterative process involving officials at overseas posts and in headquarters. DS, in conjunction with the interagency OSPB, reviews and issues uniform guidance on physical security standards for diplomatic work facilities overseas. Chaired by the Assistant Secretary of DS, OSPB includes representatives from approximately 20 U.S. agencies with personnel overseas, including intelligence, foreign affairs, and other agencies. State incorporates the OSPB’s physical security standards in the FAH for the six types of overseas work facilities, including embassy and consulate compounds. Facilities overseas, whether permanent, interim, or temporary, are required to meet the standards applicable to them. The OSPB standards vary by facility type, date of construction or acquisition, and threat level. If facilities do not meet all applicable standards, posts are required to request waivers to SECCA requirements, exceptions to OSPB standards, or both.
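As a rough illustration, the four-level scheme can be encoded as an ordered mapping so that a post’s most severe rating is easy to identify. This is a sketch only: the numeric ordering and the idea of keying anything to the single highest level are illustrative assumptions, not State’s documented method (the OSPB standards actually vary by facility type and per-category threat level), and the report names only four of the six threat categories.

```python
# Threat levels from the Security Environment Threat List; the numeric
# ordering is an illustrative assumption.
THREAT_LEVELS = {"low": 1, "medium": 2, "high": 3, "critical": 4}

# The report names four of the six threat categories; this tuple is partial.
THREAT_CATEGORIES = ("political violence", "terrorism",
                     "residential crime", "nonresidential crime")

def most_severe_level(post_ratings):
    """Return the post's highest assigned threat level, one illustrative
    way a post's ratings might be summarized."""
    return max(post_ratings.values(), key=THREAT_LEVELS.get)

post = {"political violence": "high", "terrorism": "critical",
        "residential crime": "low", "nonresidential crime": "medium"}
print(most_severe_level(post))
```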
Within State’s physical security standards, we identified six categories of key security requirements to protect overseas work facilities against physical attacks and other dangers: (1) a 100-foot setback from the perimeter wall, (2) anti-climb perimeter walls and “clear zone,” (3) anti-ram protection, (4) hardened building exteriors, (5) controlled access to the compound or facility, and (6) a safe space for taking refuge during an attack (see fig. 4 for an illustration of the six categories at a notional embassy). In addition to the OSPB standards, State independently developed and continues to update the Physical Security Handbook, also published in the FAH, which provides detailed supporting information, such as construction specifications and diagrams, to help officials understand how to implement and meet the OSPB standards. State has supplemented the physical security standards found in the FAH with guidance found in other sources, such as OBO’s construction manuals or guidance sent out to posts. DS and State’s OIG periodically assess facility vulnerabilities to identify security deficiencies. For example, RSOs at every post are to inspect the physical security of (1) each work facility at least once every 3 years to identify vulnerabilities and (2) potential properties prior to acquisition. To support these security assessments, DS developed a physical security survey template to guide RSOs in conducting the facility inspections. According to DS officials, during these inspections the RSOs are expected to identify all instances in which a facility does not meet OSPB standards. However, while visiting posts, we learned that not all RSOs know how to determine whether a facility meets certain security requirements.
For example, one RSO did not know how to determine the level of protection provided by a forced-entry and ballistic-resistant door. Furthermore, based on our review of physical security surveys for 50 facilities at 14 posts, we identified four facilities with out-of-date surveys and 14 facilities for which DS could not provide us with a survey. DS is currently redesigning the physical security survey templates and automating the survey process, which may address the problems we identified. According to DS officials, RSOs in the field had already evaluated 44 embassies and consulates and a smaller number of other work facility types with the new survey templates as of November 2013. The OIG is also supposed to inspect each overseas post once every 5 years; however, due to resource constraints, the OIG Office of Inspections has not done so. The OIG Office of Inspections has conducted inspections in an average of 24 countries per year (including all constituent posts within each country) in fiscal years 2010 through 2013. Given their limited resources, according to OIG officials, they have prioritized higher-risk posts. OIG’s post inspections cover all aspects of post management, including consular affairs, public diplomacy, and security, among other things. Each inspection team, according to OIG officials, includes one or two security inspectors who evaluate all aspects of a post’s security, including compliance with OSPB standards. Following the inspection, the OIG provides a report with all recommendations to the post’s management, DS, OBO, and other relevant bureaus, which are required to respond to the OIG’s recommendations. DS combines facility, threat, and vulnerability data to rank the level of risk faced by overseas facilities. This risk matrix forms the basis for OBO’s new embassy and consulate construction plans. According to DS officials, to develop the list of facilities ranked in the risk matrix, DS obtains a list of work facilities from OBO.
To rank embassy and consulate compounds and off-compound facilities according to the risks they face, DS draws data from its threat and vulnerability assessments—including the threat levels for political violence and terrorism, host-country willingness and capability ratings, facility setback distance, and a facility rating for compliance with security standards. DS’s risk matrix also draws staffing data, such as numbers of desk positions and percentages of off-compound desk positions, from OBO’s annual colocation study, which enables OBO to collect updated staffing information from posts. The DS risk matrix was developed for OBO to identify facility replacement priorities in accordance with SECCA, which mandated that State submit a report annually from 2000 to 2004 that identified diplomatic facilities that were a priority for replacement or for any major security enhancement because of vulnerability to a terrorist attack. DS has continued this practice and now typically updates the risk matrix annually. OBO uses the risk matrix to develop its Capital Security Construction Program schedule, which identifies the highest priority posts for contract awards over the next 5 years. This schedule takes into consideration the availability of land at each post, the feasibility of obtaining construction permits, and other factors. DS has, on occasion, modified the factors scored in the risk matrix to address the changing risk environment, according to a DS official. For example, in 2010, OBO requested that DS include the percentage of off-compound desk positions as one of the factors used in ranking facilities in the risk matrix. For the most current version, DS split the host-government capability and willingness score into two separate scores to reflect the increased emphasis on these factors following the September 2012 attacks. Since the September 2012 attacks against U.S.
facilities overseas, State has taken several actions to better manage risks to work facilities overseas, including (1) conducting interagency facility security assessments, (2) creating the High Threat Programs Directorate in DS, and (3) taking steps to address recommendations from the Benghazi ARB report. In response to the September 2012 attacks against overseas facilities—including facilities in Libya, Sudan, Tunisia, Yemen, and Egypt, among others—State formed several Interagency Security Assessment Teams to assess security vulnerabilities at 19 posts that DS considered to be high-threat and high-risk. Each team was led by a senior DS agent and included a DS physical security expert and two U.S. military officials. Rather than assess the facilities at the 19 posts against the OSPB standards, the teams assessed all facilities at the 19 posts for any type of security vulnerability—physical or procedural. The Interagency Security Assessment Team process resulted in a report that included a list of recommendations for State, and more specifically, recommendations for DS and OBO to install additional physical security upgrades. For example, the teams recommended that many posts install concertina wire to increase the height of their perimeter walls and further improve anti-climb measures, a security enhancement that exceeds the OSPB standards. According to State officials, State immediately began upgrading the security at 5 of the 19 posts assessed by the Interagency Security Assessment Teams and is using fiscal year 2013 and 2014 funds to upgrade security at the other posts. According to State officials, these upgrades resulted in the deferral of planned security projects at other posts. In addition, State created the new High Threat Programs Directorate within DS to ensure that those posts facing the greatest risk receive additional security-related attention.
To determine which posts should fall under the new Directorate, DS developed a high-threat post risk list to rank posts, using many of the same criteria and data points used to rank facilities in the risk matrix that DS provides to OBO. Currently, the High Threat Programs Directorate is responsible for 27 posts in 20 countries and for 2 posts where operations are currently suspended. State plans to conduct annual and as-needed reviews of posts on the high-threat posts risk list, which could change the composition of the list. Moreover, the Secretary of State convened an ARB following the attacks in Benghazi, and State plans to take action on all of the ARB recommendations. In addition, that ARB made two recommendations that led to the formation of other panels that reported on various aspects of State’s security operations. State is also taking action to address most of the recommendations from those two panels’ reports.

Action taken to address the Benghazi ARB recommendations: State agreed with all 29 of the ARB recommendations and as of April 2014, according to State officials, has implemented 15 of the recommendations. For example, State developed a method—the Vital Presence Validation Process—by which it can systematically review the “proper balance between acceptable risk and expected outcomes in high-threat, high-risk areas” when beginning, restarting, continuing, modifying, or discontinuing operations at individual posts. According to State officials, this transparent and repeatable process will help State determine the appropriate presence overseas through a documented, systematic, risk-based analysis. To address another recommendation, State developed a new process involving multibureau support cells and checklists to provide an action plan for opening a new post or reopening a post that had closed due to security concerns. State published checklists to support this process in the FAM and, according to State officials, has already applied the process on at least two occasions.

Action taken to address the DS Organization and Management Panel’s recommendations: Based on a Benghazi ARB recommendation, State established a panel to evaluate the organization and management of DS. This panel provided State with its report in May 2013, and State accepted 30 of the 35 recommendations in the report. According to State officials, State has begun taking action to address these recommendations. For example, the panel recommended several organizational changes that State has already implemented, including raising three DS Assistant Director positions to Deputy Assistant Secretary positions. However, State does not plan to implement a recommendation to restructure responsibilities for the new High Threat Programs Directorate. State also does not plan to implement a recommendation concerning the creation of a DS chief of staff. Decisions on the other two recommendations concerning activities by the Bureau of Intelligence and Research are pending until the bureau’s vacant assistant secretary position is filled.

Action taken to address the Independent Panel on Best Practices’ recommendations: In response to the Benghazi ARB, State also established a panel of outside, independent experts with experience in high-threat, high-risk areas to help DS identify best practices for operating in these environments. The Independent Panel on Best Practices published its report in August 2013, and State plans to implement 38 of its 40 recommendations. State has begun taking action to address these recommendations. For example, State is developing (1) an accountability framework to document institutional and individual accountability and responsibility for security throughout the department and (2) a department-wide risk management policy.
However, according to State officials, State has decided not to implement the panel’s recommendation that waivers to established security standards only be provided subsequent to the implementation of mitigating measures, and State has not decided whether to implement the recommendation to elevate DS out of the Bureau of Management and create a new under secretary position for DS. Although State conducts a range of ongoing activities to manage risk to facilities overseas, we identified facility categorization and data reliability problems that may impact these activities:

- DS and OBO have not defined the conditions that would determine when a warehouse with desk positions should be categorized as an office facility and meet appropriate office physical security standards.
- State uses different facility categories in its physical security standards and property databases.
- OBO’s property database and DS’s risk matrix have data reliability problems, including missing and inaccurate data.

DS and OBO have not agreed on a common definition for desk positions for the purpose of categorizing office and warehouse facilities, a decision which may have security and resource implications. According to best practices identified by GAO concerning the implementation of the Government Performance and Results Act of 1993 (GPRA) Modernization Act of 2010, agencies should have a shared understanding of definitions. Desk positions are those that require the use of designated office space, while positions that do not need office space, such as guards, garden staff, and custodial staff, are considered non-desk positions. These designations help OBO determine how much space is needed when planning construction of an embassy or consulate. However, during interviews with both DS and OBO officials in headquarters, we learned that DS and OBO do not agree on when a warehouse with desk positions must meet office standards.
According to State officials, as of May 2014, DS and OBO began working together to establish a policy to determine when a warehouse with desk positions should be categorized as an office facility. According to DS officials, posts are allowed to have some part-time desk positions in warehouses, such as those for warehouse supervisors who need a computer to manage warehouse activities. Such part-time desk positions, occupied for less than 4 hours per day, are permitted in a warehouse without the warehouse having to meet office security standards. However, the DS officials also noted that if a warehouse reached an undefined threshold of part-time desk positions, the warehouse would then have to meet office standards. OBO officials indicated that they did not agree with DS’s decision to allow some part-time desk positions in warehouses without those facilities meeting office standards. In addition, when we reviewed an OBO-developed list of office and warehouse facilities located outside of embassy compounds, we identified a number of warehouses being used as offices. OBO officials stated that these facilities should meet OSPB standards for offices instead of those for warehouses, whether they are occupied part time or full time. During our site visits, we identified several warehouses with office space. In a January 2013 memorandum to State’s Under Secretary for Management, State OIG noted it had identified examples of warehouses being used as office space as well. We followed up on the OIG findings during facility reviews at the posts we visited, through document reviews, and during interviews with officials at posts and in headquarters. During our facility reviews, we identified one warehouse compound that included office facilities with desk positions. The RSO who toured the warehouse compound with us stated that the compound should be required to meet the office security standards, which impose more rigorous requirements.
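The disputed categorization rule can be framed as a predicate whose deciding parameter is exactly what remains undefined. In the sketch below, the 4-hour part-time cutoff follows the DS officials’ description above; the count threshold is purely hypothetical, and OBO’s stricter position would amount to requiring office standards whenever any desk positions are present.

```python
def requires_office_standards(hours_per_desk_position, part_time_threshold=3):
    """Decide whether a warehouse must meet office security standards.

    hours_per_desk_position: daily occupancy hours for each desk position
    in the warehouse. Positions occupied 4+ hours per day count as full
    time (per DS officials); part_time_threshold is a hypothetical
    stand-in for the count DS has left undefined.
    """
    if any(hours >= 4 for hours in hours_per_desk_position):
        return True  # any full-time desk position triggers office standards
    return len(hours_per_desk_position) > part_time_threshold

print(requires_office_standards([2, 3]))     # two part-time positions only
print(requires_office_standards([2, 3, 6]))  # includes one full-time position
```

Until the threshold is defined, two posts with identical warehouses could reach opposite conclusions, which is the security implication the report describes.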
In addition, we visited two warehouses at other posts that contained a number of desk positions with computers. DS and OBO’s lack of agreement about when a warehouse with desk positions must meet office standards may hamper the implementation of appropriate physical security upgrades at these facilities (GAO-12-1022). The physical security standards identify several categories of work facilities located off compound: sole occupant facilities or compounds, tenants of commercial office spaces, public office facilities, and Voice of America relay stations. State officials told us that OBO does not use these facility categories in its property database because OBO’s property database was designed to meet Federal Real Property Profile reporting requirements. OBO designates work facilities as an office, a warehouse, or a specific type of work facility, such as a library, workshop, medical office, dispatch office, or other facility to meet these reporting requirements. M/PRI officials stated that as they sought to implement Benghazi ARB recommendations, they became increasingly aware that definitional issues across different State bureaus were a challenge and noted that State is working to correct the issue. OBO started working with M/PRI in April 2014 to create a new management tool in which data from OBO’s property database will be combined with other data, such as host-government facilities and staffing data. According to State officials, all bureaus will use this management tool to access property information, which they believe will help support the use of more consistent facility categories. DS and OBO are also developing a pilot program to automatically download property data directly from OBO’s property database into a DS system to provide ready access to up-to-date property data, rather than relying on intermittent information sharing. Because DS and OBO use different terminology for facility categories, the process DS follows when developing the risk matrix has weaknesses.
To develop the list of facilities ranked in the risk matrix, DS obtains a list of work facilities from OBO that includes over 1,600 facilities. DS consolidates the list, resulting in approximately 400 compounds and off-compound office facilities. The mismatch between DS’s and OBO’s facility categories has led DS to develop an ad hoc process for creating the list of facilities ranked in the risk matrix. This process has led to inconsistencies and has caused DS to exclude facilities from the risk matrix or rank duplicative facilities. For example, the DS official responsible for this process stated that DS has listed each tenant office space in the same facility as separate off-compound facilities for some posts but combined them as one off-compound facility for other posts. State consolidates all of the facilities in a compound into one entry in its risk matrix and also ranks the individual off-compound facilities. Moreover, while reviewing a portion of the risk matrix, we identified several tenant office spaces that DS mistakenly omitted. We identified problems with the data reliability of OBO’s property inventory database and DS’s risk matrix. Our previous work has found that results-oriented organizations make sure that the data they collect are sufficiently complete, accurate, and consistent to support decision making. Although OBO has undertaken a number of efforts to validate the information in OBO’s property database, we identified 9 data entry errors in 65 facility data records at eight of the posts we visited. For instance, records for one post included eight off-compound facilities; however, when visiting the post we learned that three of the eight facilities were located at different posts in the same country and that three of the other facilities were actually residential garages. Without accurate data on overseas facilities, OBO and other bureaus relying on data from OBO’s property database may not be in a position to make fully informed risk-related decisions.
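Because a facility’s overall score in the risk matrix aggregates several factor scores, even one missing or wrong value can shift its rank. The sketch below uses factor names drawn from the report’s description of the matrix; the equal weighting, 0-5 scale, and simple summation are assumptions, not DS’s actual scoring method.

```python
# Factors described for DS's risk matrix; the scoring details are hypothetical.
FACTORS = ("political_violence", "terrorism", "host_capability",
           "host_willingness", "setback", "standards_compliance",
           "desk_positions", "pct_off_compound_desks")

def total_risk_score(facility):
    """Sum the factor scores for one facility. A factor missing from the
    record silently contributes zero, which is how missing data can skew
    a facility's overall score and its position in the ranking."""
    return sum(facility.get(factor, 0) for factor in FACTORS)

complete = {factor: 5 for factor in FACTORS}
incomplete = {f: s for f, s in complete.items() if f != "setback"}
print(total_risk_score(complete), total_risk_score(incomplete))  # 40 35
```

Under these assumptions, a facility whose setback distance was never entered scores lower than an otherwise identical facility, so it would appear less at risk than it actually is.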
According to State officials, State recently created a standard list of posts, and OBO has committed to using that list to correct errors in the property database by August 2014, which may address some of these data reliability issues. In addition, DS is missing data for some of the factors evaluated in the risk matrix, and some of the data are incorrect. For example, DS officials did not enter certain data into the risk matrix, including (1) the number of desk positions for some facilities, (2) the threat-level scores for some facilities, and (3) the setback distance for some facilities. Without these data points, the total score for each of the facilities affected by the missing data could be skewed or incorrect. Furthermore, DS officials entered incorrect information for some of the data points in the risk matrix. For example, we identified several examples of embassy and consulate compounds with incorrect percentages of desk positions located off compound, some of which were overstated. We also identified information for two posts’ embassy compounds that was out of date and did not reflect the posts’ move into new embassy compounds. Because the overall score for each facility in the risk matrix is based on the data for eight factors, missing or incorrect data for even one factor may skew a facility’s overall score in the risk matrix, which could affect the information that OBO uses in prioritizing embassy and consulate compound construction plans. Moreover, because many of these data are also used in determining which posts fall under DS’s new High Threat Programs Directorate, DS may not have accurate information when determining which posts fall under the new directorate.

State has developed security standards for most types of facilities but lacks standards for several of them, and we identified problems with some of the existing standards.
Lacking standards for several types of facilities, officials are unable to systematically evaluate the security of all facilities. In addition, State’s process for updating physical security standards is not timely. In some instances, State and OSPB have taken over 8 years to update standards, which may leave some facilities more vulnerable in the interim. We also identified inconsistencies within the standards that may lead to confusion and the inconsistent application of security standards at posts. Furthermore, although OSPB is required to review the OSPB standards periodically, State does not systematically re-evaluate the existing security standards against evolving threats and risks. State has developed security standards for a variety of facilities—such as offices and warehouses—but it has not developed OSPB standards for several other types of facilities. For security reasons, we are not naming the types of facilities in this report. For some of these other types of facilities, State issued guidance on physical security requirements in a May 2011 memorandum. However, these security requirements have not been incorporated into the OSPB standards. As a result, some officials at posts we visited were not aware of the physical security requirements found in the memorandum, and the physical security measures in place for such facilities at several of the posts we visited did not meet the outlined security requirements. Federal internal control standards call for reasonable assurance that assets are safeguarded, in part through identifying and assessing risk. Because State lacks OSPB standards for some facilities, officials are unable to systematically conduct risk assessments for these facilities, and consequently, appropriate security measures may not be taken, and the personnel working in or using those facilities may be at a greater risk if their facility should come under attack.
Updating the FAM or the FAH, which includes the OSPB standards and the Physical Security Handbook, is supposed to take about 60 to 90 days, according to DS officials. However, we identified several examples in which the process for updating security standards in the FAM or the FAH took more than 3 years and some that took significantly longer than that. Federal internal control standards dictate that agencies must have timely communication and information sharing to achieve objectives; therefore, it is essential that agencies update their policies in a timely fashion, particularly when the security of lives, property, and information is at stake. DS manages the process by which the OSPB standards and the Physical Security Handbook are updated. DS officials said that it is supposed to take about 60 days to update the Physical Security Handbook, which requires clearance only within State, and 90 days to update the OSPB standards, which requires approval from other OSPB members. Specifically, it is supposed to take 30 days to draft and obtain approval within DS for an update to the security standards and handbook in the FAH and another 30 days to obtain approval for the draft changes by other relevant stakeholders within State, such as OBO and the Office of the Legal Adviser. If the draft changes include changes to OSPB standards, then it is supposed to take an additional 30 days to obtain approval from OSPB members. After all of the required approvals are obtained for changes to either the FAM or the FAH, DS sends the update to the Bureau of Administration for publishing. We identified two examples of updates to the OSPB standards and the Physical Security Handbook that took 5 to 8 years to complete the process and 11 updates that are still pending after 3 to 8 years in process. Four of these updates resulted from recommendations by previous ARBs. For example, the 2005 ARB resulting from the attacks on the U.S. 
Consulate in Jeddah, Saudi Arabia, recommended that State and OSPB develop residential security standards to address terrorism threats. The OSPB working group completed a final draft of the standards in April 2009, but OSPB has not yet reviewed the draft standards. DS officials said the draft standards have not been sent to OSPB because other relevant State stakeholders have not yet approved them. Officials further noted that because these draft standards have been stalled in the approval process for 3 years, they may need to be modified to address threats that have been identified in the meantime before going to the OSPB for approval, thus restarting the review process at the beginning. DS officials said they face two key challenges in managing updates to the OSPB standards and the Physical Security Handbook that at times cause major delays in the update process—a cumbersome review process and subchapter update requirements.

Cumbersome review process. If a stakeholder suggests a change to the draft standard at any time during the review process, the proposed draft must go through the entire review process again (see fig. 5). This requirement becomes more time-consuming when someone in the later stages of the review process suggests an edit. In addition, DS officials told us that some stakeholders within State or OSPB member agencies may request additional time for reviewing proposed changes, which further delays the process.

FAM and FAH subchapter update requirement. The FAH requires officials to review and update the entire subchapter when making changes to an existing FAM or FAH subchapter, and there is no specific exception for life safety updates. As a result, when DS needs to make changes to the OSPB standards or the Physical Security Handbook, it must review and update the entire subchapter in which the update is located.
As an example of how this requirement delays the process, DS officials told us that draft OSPB standards for compound emergency sanctuaries were begun in 2005 and not completed until 2013. DS finished drafting the standards in 2011, but the standards spent over 2 years in the approval and clearance process because DS and the relevant OSPB working group had to update the entire subchapter, which covers several sensitive security topics. Officials told us that non-State OSPB member agencies did not have any concerns with the compound emergency sanctuary standards, but some members had concerns with the other updates to the subchapter that resulted in additional delays. As noted above, if a stakeholder suggests a change to the draft at any time during the review and approval process, the draft must go through the entire review and approval process again. Because the additional edits occurred during the final stage of the process, each recommended change resulted in a full review of the draft at every level. According to the FAH, in rare circumstances State’s Bureau of Administration will publish specific changes to a section in the FAH or the FAM without requiring a review of the full subchapter. However, the FAH does not explicitly state when or how such exceptions occur. DS officials said that they were aware of only a few instances in which the Bureau of Administration had granted such an exception. For example, following the Edward Snowden leaks of National Security Agency documents, the Bureau of Administration published changes to technical security requirements without requiring a review of the full subchapter. Officials told us they could not recall any exceptions to the subchapter update requirement on account of critical life safety updates to physical security standards. Although it may take years for State to update some security standards, we found that State at times took steps to address identified threats in advance of approving updates to the security standards.
For example, according to DS officials, DS sometimes works with OBO to quantify the cost of installing certain upgrades for new construction projects to meet draft security standards for which eventual approval is anticipated. If funding is available, OBO will incorporate the upgrades into facilities currently under construction or being planned for construction so that the facilities will meet the draft security standards. For example, OBO included mantraps in new embassy and consulate construction projects following the 2005 attack on the U.S. Consulate in Jeddah, Saudi Arabia, even though OSPB did not approve standards for mantraps until 2010. Furthermore, OBO included the construction of compound emergency sanctuaries in some construction plans before OSPB finalized the relevant standards. Nevertheless, because updates to the OSPB standards and the Physical Security Handbook are not always completed in a timely manner, posts may not have security measures in place to address identified threats. We identified a number of inconsistencies among various security-related guidance documents. Our previous work has found that leading organizations strive to ensure that their core processes efficiently and effectively support mission-related outcomes. To do so, policy standards should be clear and consistent in order to support good decision making. However, we identified about 20 inconsistencies pertaining to physical security standards within State’s various security-related guidance documents; such inconsistencies may lead to confusion and the inconsistent application of some security standards. The types of inconsistencies we identified fall into three categories:

Inconsistencies between the FAM and the FAH. For example, in 2010 DS changed the threat categories in the Security Environment Threat List, which impacted security standards, but the corresponding updates to the FAM and the FAH are inconsistent.
For example, the FAM states that security standards against the terrorism threat are part of the physical security standards. However, there are currently no security standards for the terrorism threat in the OSPB standards within the FAH. Inconsistencies within the FAH between OSPB standards and the Physical Security Handbook. In some cases, State made an update to either the Physical Security Handbook or the OSPB standards, but not to the corresponding standard in the other part of the FAH. For example, the OSPB standards include requirements for anti-ram perimeter walls at medium- and higher-threat posts, but the Physical Security Handbook used to include this requirement only for higher-threat posts. In addition, DS published physical security specifications for consular agencies in the Physical Security Handbook, but State and OSPB have not approved and incorporated the corresponding standards in the OSPB standards. In other cases, the Physical Security Handbook was outdated. For example, when we visited posts, the Physical Security Handbook contained a physical security standards matrix that did not accurately reflect all the OSPB standards. According to DS officials at headquarters, the matrix has since been updated to address some inconsistencies and is pending final approval. Inconsistencies between the OSPB standards and other policy guidance. OBO and other State bureaus maintain security-related standards that are not incorporated into the OSPB standards. For example, the Bureau of Consular Affairs requires a consular pass-back booth in the consular section of a controlled access compound facility, but this requirement was never incorporated into the physical security standards. According to DS officials, the draft standard to incorporate this requirement in the Physical Security Handbook is currently pending final approval. 
Furthermore, the State OIG identified several instances in which the OBO Building and Zoning codes covered additional security requirements not captured in the OSPB standards, and we confirmed those findings. Inconsistencies among various guidance documents may lead to the inconsistent application of security standards if officials rely on one policy guide over another or are not aware of updated standards. Three different RSOs told us that the physical security standards matrix within the Physical Security Handbook (a list of standards for every type of facility at each threat level) is the primary source they use to evaluate facilities' compliance with physical security standards, because it serves as an easy guide to facilities' physical security requirements. However, as we previously noted, the physical security standards matrix was not up-to-date and did not accurately reflect all the OSPB standards when we visited posts. Hence, these officials may not have applied the appropriate security standards to the facilities at their posts. Some RSOs told us that State updates the OSPB standards and the Physical Security Handbook infrequently and that they learn about updates through DS cables or DS's internal website. However, two other RSOs we interviewed did not know about updates made to the security standards in the past few years and therefore had not requested funding for relevant upgrades. For example, they were not aware that State had published standards for compound emergency sanctuaries in the Physical Security Handbook. The inconsistencies in the different security-related guidance documents may lead to confusion and inconsistent application of security standards, leaving some facilities at greater risk because they have not implemented all required security measures. 
Although OSPB is required to review its security standards on a regular basis, State does not have a systematic process for evaluating the existing security standards against evolving threats and risks. The 1999 Nairobi and Dar es Salaam ARB report recommended that the U.S. government undertake a long-term strategy for protecting American officials overseas, including the assessment of security requirements to ensure that they meet the new range of global terrorist threats. Furthermore, the FAH requires OSPB to review all the OSPB standards periodically, at least once every 5 years; however, the process by which security standards are updated is triggered by an event (a change within the organization of State, an annual review to identify out-of-date information, or an attack or other event affecting safety) rather than by a periodic and systematic evaluation of the relevance and adequacy of all the standards. Although State has updated the threat categories over the years, many of the physical security standards were developed before U.S. diplomatic missions were sustained in or near war zones, where the risks to U.S. personnel and facilities multiply and intensify. Furthermore, State rates numerous posts all over the world as high or critical threat for either political violence or terrorism, but their risk varies greatly due to several factors, including local infrastructure and the host-country government's willingness and capability to provide security for U.S. facilities. Nevertheless, posts with the same threat rating are required to meet the same standards regardless of the risk each post faces. We identified several instances in which State and other officials deemed existing standards inadequate to meet the perceived threats and risks. 
Interagency Security Assessment Teams recommended security upgrades above current standards: Following the attacks of September 2012, the teams traveled to a judgmental sample of high-threat, high-risk posts and made recommendations at each post, many of which exceeded the security standards required at the post. Several facilities have security features that exceed requirements: While reviewing facilities at posts overseas, we identified several examples of facilities that had implemented security measures that exceeded security requirements. For example, at one post rated as having a medium threat for political violence, we found that an agency leasing two floors in an off-compound tenant office facility installed several security measures that exceeded security requirements, such as doors and a guard booth providing 15-minute forced-entry and ballistic-resistant protection, measures only required for tenant office facilities at posts with a critical threat rating for political violence. According to agency officials, they took this action because they did not believe that the OSPB standards addressed the current threats and risks that they faced in country. The RSO and other post officials approved the increased security measures. In addition, we found that posts in certain conflict zones took numerous measures that exceeded critical security standards, such as the construction of overhead cover, higher walls, and bunkers. Posts may take additional steps on their own or through DS- and OBO-funded upgrades to implement security measures that exceed OSPB standards, because post or headquarters officials believe the standards are inadequate to mitigate the risks faced by some high-threat, high-risk posts. This leaves the establishment of facility-specific security measures up to the professional judgment of post RSOs, an ad hoc process that does not draw on the collective subject-matter expertise of DS and the interagency OSPB. 
This current approach to addressing threats and risks not covered by the OSPB standards may leave some high or critical threat posts more vulnerable. In addition, in the absence of standards that address a post’s current threats, it may be difficult for post officials to justify funding requests for security measures that go beyond the OSPB standards. State takes steps to mitigate vulnerabilities for older, acquired, and temporary work facilities that do not meet security standards, primarily through a waivers and exceptions process to document vulnerabilities and corresponding mitigation measures; however, the waivers and exceptions process has several weaknesses. All facilities at a post are expected to meet physical security standards, but when facilities do not or cannot meet certain security standards, State mitigates identified vulnerabilities through various construction programs and its waivers and exceptions process. For example, we found that none of the 43 facilities we reviewed at higher-threat, higher-risk posts met all applicable security standards and therefore required waivers, exceptions, or both. However, we identified several weaknesses with the waivers and exceptions process. Specifically, DS does not systematically track waivers and exceptions or re-evaluate them when threats or risks change. In addition, post officials do not always request waivers and exceptions when required, and requests are not always timely or correct. Moreover, in some instances, the mitigating measures a post has agreed to undertake as a condition of a waiver or exception are not fully implemented. State addresses identified security vulnerabilities through a number of construction programs, including the Capital Security Construction Program, the Compound Security Program, and the Major Rehabilitation Program. 
OBO has a threat- and vulnerability-based planning process for its construction projects that includes input from DS's analysis of threats, vulnerabilities, and risk. The risk matrix provided by DS (a ranked list of facilities based on an assessment of the physical security conditions and threat levels at each post) guides OBO's prioritization of new construction projects and compound security projects. According to OBO documentation, OBO has moved over 30,000 people into safer facilities since 2000 through its various construction programs. The following OBO-managed construction programs address security vulnerabilities: Capital Security Construction Program. Following the 1998 Africa embassy bombings, State determined that 80 percent of its overseas facilities did not meet security standards and should be replaced. Afterwards, State began a multiyear, multibillion-dollar program to replace insecure and aging diplomatic facilities worldwide (see table 2). In 2005, State established the Capital Security Construction Program, through which each agency with an overseas presence contributes funds for construction based on its overseas staffing levels. OBO has constructed 109 new facilities since 1998. Compound Security Program. The Compound Security Program complements the Capital Security Construction Program by providing interim physical security protection to vulnerable facilities until they are replaced, as well as enhancing physical security protection at facilities that will not be replaced by a new embassy or consulate compound. This program funds, among other things, projects to replace forced-entry and ballistic-resistant doors and windows, install emergency exits, and enhance environmental security by safeguarding against chemical, biological, and radiological attacks. 
According to OBO officials, major security upgrades at posts cost on average $6 to $10 million but may cost up to $20 million, and since 2005, OBO has completed 53 major security upgrade projects funded by the Compound Security Program. OBO has allotted about $560 million to the program since fiscal year 2009 (see table 3). Major Rehabilitation Program. This program provides for renovations, rehabilitations, expansions, or upgrades to systems and space for residential or work facilities that can no longer be physically or economically maintained by routine, preventive, and unscheduled repair activities. In addition, these projects are undertaken when new construction is not scheduled under the Capital Security Construction Program. Although the program is not focused on security upgrades, according to OBO officials, OBO strives to bring facilities up to current security requirements during major rehabilitation projects. State allotted approximately $243 million from fiscal years 2009 to 2014 for major rehabilitation. DS also provides funding for some physical security upgrades to facilities abroad. Congress appropriates funds for DS through the Worldwide Security Protection account. DS uses this funding for emergency upgrades to address emerging vulnerabilities or for upgrades to facilities that will not be addressed by OBO. This is primarily the case in conflict zones such as Afghanistan, Iraq, and Pakistan, but also applies to some other high-threat, high-risk locations. For example, DS used some funding for physical security upgrades to install higher walls at one post and barricades at another. In addition, officials said that DS funded projects in several countries under the new High Threat Programs Directorate in fiscal year 2013 that cost approximately $2 million and included upgrades for drop-arm barriers to protect against vehicle intrusions and other physical security measures. 
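The role of the DS risk matrix in prioritizing these construction programs can be illustrated with a minimal sketch. The weights, field names, and data below are hypothetical assumptions for illustration, not DS's actual scoring methodology:

```python
# Illustrative sketch of a DS-style "risk matrix": rank facilities by
# combining each post's threat level with a facility-compliance score.
# All weights, field names, and data are hypothetical assumptions.

THREAT_WEIGHT = {"low": 1, "medium": 2, "high": 3, "critical": 4}

facilities = [
    # compliance: 0.0 = substantially noncompliant, 1.0 = fully compliant
    {"post": "Post A", "threat": "critical", "compliance": 0.9},
    {"post": "Post B", "threat": "medium",   "compliance": 0.3},
    {"post": "Post C", "threat": "high",     "compliance": 0.5},
]

def risk_score(facility):
    # Higher threat and lower compliance both raise the score.
    return THREAT_WEIGHT[facility["threat"]] * (1.0 - facility["compliance"])

# Ranked list, highest risk first, to guide prioritization of new
# construction and compound security projects.
ranked = sorted(facilities, key=risk_score, reverse=True)
```

Note that under this kind of combined score, a substantially noncompliant facility at a medium-threat post can outrank a largely compliant facility at a critical-threat post, which is consistent with the report's point that threat ratings alone do not capture risk.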
Diplomatic work facilities are required to meet two sets of physical security standards, SECCA requirements and OSPB standards; however, when facilities do not or cannot meet all of the standards, post officials are required to request waivers to SECCA requirements, exceptions to OSPB standards, or both. SECCA requires that any site selected for a new U.S. embassy or consulate constructed after November 1999 accommodate the colocation of all U.S. government personnel (except those under the command of an area military commander) and that any new U.S. diplomatic facility be located at least 100 feet from the perimeter wall. If State or other agencies acquire additional office space off compound, they are required to obtain a colocation waiver prior to occupancy of that facility. Furthermore, if a new facility does not meet the 100-foot setback requirement, the post must apply for a setback waiver. The Secretary of State may waive the SECCA requirements if the Secretary determines that security considerations permit and it is in the national interest of the United States. The FAM notes that the flexibility for State to grant waivers was provided by Congress with the expectation that waivers would be infrequent. According to the FAH, a post must request an exception for a new office construction project or a facility acquired after June 1991 if it does not fully comply with an applicable OSPB standard. Similarly, a post must request an exception for an existing office building if the building does not or cannot fully comply with physical security standards following an upgrade project. A waiver or exception request typically includes a description of mitigation steps planned or taken by the post to address identified vulnerabilities. For example, if an acquired facility does not meet blast-resistance construction standards for walls, doors, and windows, the post may install additional anti-ram structures outside the perimeter wall to provide additional setback. 
The development of waiver and exception requests involves a collaborative drafting and multilevel review process. According to DS officials, the requests are drafted through a collaborative process, with DS officials in headquarters helping the RSO or the tenant agency write the request to ensure that it complies with department policy and appropriately articulates mitigation steps planned or taken by the post to address vulnerabilities. In addition, each waiver or exception request must pass through several layers of review at the post and within State. The Assistant Secretary of DS serves as the final reviewer for all OSPB exception requests and any SECCA waiver requests for facilities other than an embassy or consulate building. The Secretary of State must approve all SECCA waiver requests for embassy and consulate buildings. DS officials said the types of mitigation steps taken in each situation depend on the collective knowledge of the RSO and other DS staff working to mitigate a risk. In addition, the types of mitigation steps possible at individual posts depend on the availability of materials in country, shipping constraints, and host-government policies. According to 12 FAM 315.1, the Secretary delegated the authority for waiver approvals for embassies or consulates that do not substantially occupy a building (such as an office located in a large commercial office building) to the Assistant Secretary of DS. Many older, acquired, and temporary work facilities at U.S. posts overseas do not meet SECCA requirements and OSPB standards for newly constructed or newly acquired facilities. Since newer facilities are expected to meet more rigorous physical security standards and most existing facilities are not new, most facilities may not meet these standards. Some embassy and consulate compounds are newly constructed and expected to meet most physical security standards. 
However, a significant number were constructed or acquired prior to June 1991 and are only required to meet many of the OSPB standards to the maximum extent feasible or practicable. We found that a substantial portion of the approximately 1,245 office facilities overseas do not meet SECCA requirements and OSPB standards and have requested waivers, exceptions, or both. According to DS documentation, State has processed over 400 setback waivers for various types of office facilities and about 300 colocation waivers for off-compound office facilities, in accordance with SECCA. In addition, DS has processed about 280 OSPB exceptions packages for work facilities, and each exceptions package may include requests for multiple OSPB exceptions for one facility. DS rated about a quarter of office facilities as substantially noncompliant with security standards. When DS completed the most recent version of its risk matrix, it evaluated compliance with OSPB standards for approximately 400 office compounds and facilities, including both facilities on embassy and consulate compounds and off-compound office facilities. According to DS documentation, about a quarter of its facilities worldwide received a low facility-compliance score, indicating that they did not substantially meet current OSPB standards for new facilities. However, as noted above, we found that some of the data DS used to establish its ratings for the facilities we visited were missing or inaccurate, and we therefore determined that DS's scores provide a broad indication of facility vulnerability rather than a precise estimate. As noted above, post officials are required to request waivers to SECCA requirements and exceptions to OSPB standards when facilities do not or cannot meet security standards. Federal internal control standards require the maintenance of complete and accurate documentation and effective use of information technology. 
However, we identified several weaknesses with the waivers and exceptions process, including tracking problems and missing waivers and exceptions. First, DS is not adequately tracking waivers and exceptions to the security standards. In January 2013, the State OIG reported that DS does not adequately track waivers and exceptions. DS does not maintain a database with waiver and exception documentation, but rather maintains a list of waivers and exceptions in a spreadsheet. When we reviewed DS’s tracking spreadsheet, we identified several problems. For example, we found nine instances in which a line item in the tracking spreadsheet contained inaccurate information about the type of approved waivers or exceptions for a facility. In addition, headquarters and post officials we met with could not always find waiver and exception documentation or were unaware of previously approved waivers and exceptions, even though DS officials stated that copies of approved waivers and exceptions are kept both at DS headquarters and at posts. For example, an official at one post told us that the post had two waivers on file that DS officials in headquarters were unable to locate. In addition, officials at three posts were either unaware of certain previously approved waivers and exceptions or could not find the documentation for them. We also found that DS did not re-evaluate previously granted waivers and exceptions to security standards for individual facilities when the level of threat or risk changed. For example, the political violence threat rating was low for one post when it obtained approval for exceptions for one facility, but the threat rating has since increased to high. Nevertheless, post and DS officials have not re-evaluated these exceptions. In addition, an agency at one post recently requested and obtained a colocation waiver and exceptions to utilize hotel rooms as office space. There are already two other agencies using the same hotel as office space. 
The addition of more people at one facility represents an increased risk to this facility, because it becomes a more visible and attractive target. Nonetheless, post and DS officials did not re-evaluate the previously granted waivers and exceptions based on the increased risk. Many of the facilities we reviewed at higher-threat, higher-risk posts did not meet applicable security standards and did not have required waivers and exceptions. We reviewed 43 facilities at 10 higher-threat, higher-risk posts for compliance with applicable security standards, including existing, newly acquired, and a few newly constructed work facilities. While the level of noncompliance varied, all of the facilities are required to have approved waivers, exceptions, or both. Furthermore, we found that posts we visited did not always request waivers and exceptions when required. Based on our review of 43 facilities, we identified 3 facilities for which the post did not request a required SECCA waiver and 18 facilities missing approved OSPB exceptions (see table 4). For example, DS did not have appropriate waivers on file for 2 of the 8 tenant commercial office spaces we reviewed that did not meet SECCA's colocation or setback requirements. In addition, DS did not have appropriate OSPB exceptions on file for 3 of the 20 embassy or consulate compound facilities we reviewed that did not meet the requirements for hardened building exteriors. Similarly, State OIG identified 4 out of 27 posts that did not submit appropriate waivers or exceptions. To address this problem, State OIG recommended that DS institute an annual certification process in which the Chief of Mission at each post would be required to certify either that the post meets all security standards or that all appropriate waivers and exceptions have been obtained. 
However, DS officials stated that DS has begun piloting an alternative solution to the recommendation: including a similar certification requirement for waivers and exceptions as part of its new online physical security survey process. When this solution is fully implemented, posts will be required to verify that all relevant waivers and exceptions have been obtained when RSOs at posts fill out the physical security survey once every 3 years. We identified additional weaknesses with the 32 waivers and exceptions packages we reviewed, including (1) requests for waivers and exceptions that were filed after the facility was already occupied, (2) incorrect waivers or exceptions on file, and (3) conditions outlined in the approved waiver or exception request that were not always implemented (see table 5). Federal internal control standards require the proper execution of management directives. We obtained these 32 approved waiver and exception packages for 12 of the 14 posts we either visited or for which we interviewed officials by video teleconference. In our review of these documents, we identified the following problems: Facilities occupied prior to receiving waivers or exceptions. We identified eight instances in which post officials occupied a facility prior to submitting a required waiver or exception request. For example, officials at one post occupied a temporary facility for over a year and a half before the post obtained a setback waiver. Incorrect waivers or exceptions on file. We identified four instances in which the waivers or exceptions on file did not cover the facility currently in use or in which the post obtained incomplete or inappropriate waivers or exceptions. For example, one post obtained approval for a colocation waiver and OSPB exceptions for a temporary medical facility that was located on a residential compound. The proposed medical facility was a one-story safe haven container that provided 60-minute forced-entry, ballistic-resistant protection. 
When visiting the post, however, we learned that the medical facility was no longer located in the safe haven container; rather, the medical facility was now located in a residential building that did not meet any forced-entry, ballistic-resistant standards for office space. The post had not applied for an updated waiver or exceptions for this facility. Conditions outlined in approved waiver or exception not implemented. We identified three instances in which posts did not implement mitigating steps that were required conditions for their approved waivers and exceptions. For example, one post obtained a setback waiver and exceptions in July 2010 on the condition that it implement several upgrades. Although some of the upgrades have since been implemented, we identified three upgrades that had not been implemented when we visited the facility, including (1) improving perimeter walls to ensure they measured 9 feet all the way around the compound, (2) reinforcing the perimeter to ensure all walls were anti-ram, and (3) installing shatter-resistant window film on all the windows. Officials stated that it is the responsibility of the post officials or tenant agency officials applying for the waiver or exception to ensure that upgrades agreed to as conditions of a waiver or exception are appropriately implemented, and that DS does not currently monitor posts' implementation of conditions agreed to in granted waivers or exceptions. Because waivers and exceptions are not always requested, timely, accurate, and fully implemented, State cannot be assured that it has all the information it needs and is taking all practical steps to ensure the security of work facilities. State follows some risk management principles; however, it lacks an adequate risk management policy for the physical security of its work facilities. 
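The re-evaluation gap described above (waivers and exceptions approved under one threat rating and never revisited when the rating rises) is the kind of condition that even a minimal structured record could flag automatically. The sketch below is speculative; all fields, ratings, and records are hypothetical, and this is not how DS's tracking spreadsheet is organized:

```python
# Illustrative sketch: a structured waiver/exception record that flags
# entries for re-evaluation when a post's threat rating has risen since
# approval. All fields and data are hypothetical assumptions.

RATING_ORDER = ["low", "medium", "high", "critical"]

waivers = [
    {"facility": "Annex 1",   "type": "setback waiver",
     "rating_when_approved": "low"},
    {"facility": "Warehouse", "type": "OSPB exception",
     "rating_when_approved": "high"},
]

def needs_reevaluation(record, current_rating):
    # Flag any waiver or exception approved under a lower threat
    # rating than the post's current rating.
    return (RATING_ORDER.index(current_rating) >
            RATING_ORDER.index(record["rating_when_approved"]))

current_post_rating = "high"
flagged = [w for w in waivers if needs_reevaluation(w, current_post_rating)]
# "Annex 1" is flagged: it was approved when the post was rated low.
```

A free-form spreadsheet cannot support this kind of check reliably, which is consistent with the tracking problems described in this section.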
Risk management is a strategy that helps policymakers more efficiently and effectively assess risk, allocate resources, and take actions under conditions of uncertainty. While DS outlined some principles for a risk management policy, it did not fully develop and implement the policy. We found that many of the activities DS takes to manage risk are in line with its risk management principles; however, we found that State's risk management activities do not operate as a continuous process and do not continually incorporate new information. For example, we found that State does not use all available information when establishing threat levels. We also found that State's current facility vulnerability assessments are not fully utilized because, among other things, the information reported by posts through a survey template is not always readily available or timely and is not in a form that facilitates automated processing and data analysis. However, State is taking steps to automate and enhance these surveys. In addition, we found examples in which the data informing DS's risk assessments of facilities had changed, but DS lacked processes to re-evaluate the risk to those facilities. We also found that State lacked a process to re-evaluate interim and temporary facilities that have been in use longer than anticipated. Furthermore, in examining State's feedback mechanisms, we found that State did not adequately verify that it had followed through on some risk-management-related recommendations. Past GAO work has shown that risk management is a strategy that helps policymakers more efficiently and effectively assess risk, allocate resources, and take actions under conditions of uncertainty. As we stated previously, an effective risk management policy establishes a structured process for making informed choices and trade-offs about how best to use available resources and for monitoring the effects of those choices. 
Risk management requires a continuous process that includes the assessment of threats, vulnerabilities, and potential consequences, with actions taken to reduce or eliminate one or more of these elements of risk. Risk management should include a feedback loop that continually incorporates new information, such as changing threats or the effect of actions taken to reduce or eliminate identified threats, vulnerabilities, or consequences. Because policymakers have imperfect information for assessing risks, there is a degree of uncertainty in the information used for risk assessments: what the threats are and how likely they are to be realized. As a result, it is inevitable that assumptions and policy judgments, as well as hard data, influence decisions in risk analysis and management. It is important, therefore, that key decision makers understand the underlying assumptions and policy judgments that have been made and how these affect the results of the risk analysis and the resource decisions based on that analysis. An effective risk management policy, by providing a structured, continuous process with a feedback loop that incorporates new information and adjusts to changing conditions, can provide policymakers with better information with which to make risk decisions in an uncertain environment. To provide a basis for examining efforts for carrying out risk management, in prior work we developed a framework for risk management based on best practices and other criteria. Our risk management framework is divided into five phases that form a feedback loop: (1) setting strategic goals and objectives and determining constraints; (2) assessing the risks; (3) evaluating alternatives for addressing these risks; (4) selecting the appropriate alternatives; and (5) implementing the alternatives and monitoring the progress made and the results achieved (see fig. 6). The results generated by monitoring in phase 5 feed back into the ongoing process. 
In addition, because a framework includes integrated and continually updated information flows, internal controls are crucial. These include the policies, procedures, techniques, and mechanisms that enforce management's directives and are used to help ensure that actions are taken to address risk. We used this framework, as well as the Standards for Internal Control in the Federal Government, to assess State's risk management principles and activities. While DS created a risk management policy statement in 1997, DS has not fully developed and implemented the policy. The one-page policy statement describes six principles: asset identification, threat assessments, vulnerability assessments, risk assessments, risk decisions, and feedback. DS officials noted that a year after the statement was published, its planned implementation was largely overtaken by State's response to the 1998 U.S. embassy bombings in Africa, and that the policy was not fully developed or implemented. For example, DS's risk management statement lacks clear roles and responsibilities for all stakeholders and detailed guidance on how to carry out its elements, particularly with regard to implementation and monitoring. Officials further noted that, contrary to what is stated in the policy, there is no formal steering group handling the risk management process. Nevertheless, we also found that many of the activities that DS takes to manage risk align with the DS risk management policy principles and also with our risk management framework, including determining risk by combining the results of asset identification, threat assessments, and vulnerability assessments. Both the Benghazi ARB report and the resulting Report of the Independent Panel on Best Practices recommended that State develop a risk management policy. State has undertaken several efforts to develop a more comprehensive risk management policy. 
For example, according to State officials, State is applying its recently completed Vital Presence Validation Process to better manage risk when beginning, restarting, continuing, modifying, or discontinuing operations at posts, particularly high-threat, high-risk posts. However, as of February 2014, some of these efforts, including a fully developed risk management policy, remain incomplete. While many of State’s activities align with the DS risk management policy statement, in this report we have identified a number of problems with these activities. Moreover, we found that State’s ongoing activities do not operate as a continuous process that incorporates all relevant data and lack a feedback loop that continually incorporates new information (see fig. 7). We found several examples that demonstrate that State’s ongoing risk management activities are not fully linked in a continuous process that incorporates all relevant data. For example, we found that DS does not use all available information when establishing threat levels at posts. Specifically, some posts may implement security measures that go above standards, but this type of information is not effectively captured in the triennial facility inspection process to document posts’ compliance with OSPB standards, and according to DS officials, does not inform the Security Environment Threat List threat-level decisions. Furthermore, DS officials noted that the information from the triennial facility inspection process, which is used to identify facility vulnerabilities, is not currently used in a meaningful way because several issues impede DS’s ability to collect and adequately use this information. For example, because the survey forms are individual documents housed on an intranet site, DS officials in headquarters cannot easily search through the data from the surveys or conduct comparative analyses of posts’ data. 
In addition, we found that the surveys did not always include all facilities at posts and that headquarters could not always find the most current surveys. DS officials indicated that the new online survey process they are developing will feed certain data into a database, thus improving their ability to analyze and use the survey data. While we have not independently evaluated the online survey forms, DS officials noted that the forms include a checklist for all the current OSPB standards for each of the threat ratings, and RSOs will be required to complete the new survey templates online. In addition, according to officials, DS plans to develop a project management solution that will allow DS officials in headquarters to track and report on the data collected by the completed physical security surveys through an automated system. Another instance of current information not being fully utilized involves OBO documentation of facility compliance with the physical security standards. According to State’s policies, when OBO initiates a major rehabilitation project on a facility constructed prior to June 1991, it must request OSPB exceptions if the planned rehabilitation will not bring the facility into full compliance with current security standards. Furthermore, according to DS officials, when OBO completes a major rehabilitation project on a facility constructed prior to June 1991, the bureau is required to document with a memorandum what aspects of the facility will still not meet standards after the rehabilitation. However, DS officials told us that no one in DS tracks the OBO memoranda. Without adequately tracking this information, RSOs may not have accurate information to plan mitigation efforts and properly request exceptions to security standards. In addition, as noted above, we also found that DS officials in headquarters do not verify that physical security upgrades included as part of a waiver and exception request are completed. 
We also identified several instances where State’s risk management activities did not continually incorporate new information through a feedback loop. For example, as noted above, State does not have a process for evaluating the existing security standards against evolving threats and risks, and we found examples where State officials deemed existing standards inadequate to meet perceived threats and risks. Similarly, DS lacks a process to re-evaluate risk decisions such as the granting of waivers and exceptions when risk factors change. DS quantifies risk to facilities by assessing the number of personnel, threat levels, host-country capability and willingness to support the post, and vulnerabilities. At one post we visited, the consulate initially had a diplomatic presence on one floor of a tenant commercial office space. It received a colocation and setback waiver to occupy that one floor. In subsequent years, as the number of personnel grew, the consulate expanded its office space to include a second floor of the facility. However, according to post officials, there was never a reevaluation of the risk to that facility on the basis of the increased personnel presence, and the post did not request a new waiver until years later when an RSO noticed the discrepancy. Similarly, waiver and exception request packages generally include information about the current Security Environment Threat List levels, but when those levels change—from high to critical, for example—there is, according to DS officials, no process in place to notify post or headquarters to re-evaluate the waivers and exceptions previously granted. Similarly, DS lacks a process to re-evaluate interim and temporary facilities that have been in use longer than anticipated. When State opens an interim or temporary facility and grants waivers or exceptions, it is with the expectation that the facility will be replaced by another or closed within a certain time frame. 
There is an explicit acceptance of risk in that decision. However, officials noted that there were a number of facilities that were designated to be used on an interim or temporary basis, but because State lacked a process to re-evaluate these facilities, years later these facilities were still in use without any review of the facility designation and without revisiting the risk decision. Similarly, the Independent Panel on Best Practices found that State redefined missions, such as Benghazi, as temporary or in other ways that did not require them to meet physical security standards. For example, at some posts, State used containerized housing units and other temporary structures as offices for years, though these trailer-like facilities do not meet OSPB standards and were only intended to be used on an interim or temporary basis. State officials stated that they do not systematically review interim and temporary facilities that have been in use longer than anticipated. However, effective risk management practices require a feedback loop that continually incorporates new information, and federal standards for internal controls call for proper execution of management directives. In addition, we found that State did not adequately verify that it had followed through on the feedback it received through all past risk-management-related recommendations. Federal standards for internal controls call for ensuring that the findings of audits and other reviews are promptly resolved. For example, M/PRI maintains documentation to track the implementation of all ARB recommendations. Although we did not assess the reliability of these data, we identified two examples of recommended updates to the OSPB standards that M/PRI’s documentation indicated were completed but that our evidence showed had not been completed. During the course of our review, in December 2013, one of these recommendations was completed. 
Moreover, when we asked DS officials about the status of their efforts to close recommendations resulting from the State OIG’s review of their waivers and exceptions process, the officials indicated that they had addressed all of the recommendations. However, in our fieldwork and document reviews, we found that DS had not addressed all of the OIG recommendations. Ensuring the safety and security of our personnel and facilities at overseas diplomatic posts has never been more challenging or important than it is today. Between September 2012 and December 2013 alone, there were 53 attacks against U.S. diplomatic personnel and facilities overseas. We found that State has taken a number of measures to enhance the security of and manage the risk to its personnel and facilities. For example, State is prioritizing security-related construction using its evaluation of threat and vulnerability levels at posts. In addition, State established a new directorate to provide additional security attention to high-threat, high-risk posts. Furthermore, we found that many of State’s risk management activities were consistent with best practices. However, we found a number of problems with State’s implementation of some of its activities, rather than with the broader activities themselves. Some of these problems involved the lack of common terminology or the reliability of data that State uses to analyze risk. Others involved the adequacy of its physical security standards. In addition, we found problems with State’s handling of its waivers and exceptions process. While each of these problems is a cause for concern in and of itself, taken together they raise a greater concern that decision makers at State may not have complete and accurate information with which to make risk management decisions. As a result, there is a greater likelihood that security risks to overseas diplomatic facilities will not be adequately addressed—a situation that could have tragic consequences for U.S. 
government personnel working overseas. Furthermore, State lacks a cohesive framework or policy to adequately coordinate and control its multifaceted risk management activities. A good risk management policy includes the use of all relevant information and a feedback loop that ensures that changing conditions are assessed and considered by decision makers. Such a policy helps ensure that despite uncertainty, security personnel have a continuous system in place that identifies weaknesses proactively rather than reactively. The lack of such a policy may lead State to overlook data needed to make effective risk decisions. While State is developing a risk management framework in response to several recommendations resulting from the attacks in Benghazi, the framework remains incomplete. Unless State implements a risk management policy that addresses the problems we identified with State’s current security efforts, State cannot be assured that the most effective security measures are in place at a time when personnel working at U.S. diplomatic facilities are facing ever-increasing threats to their safety and security. To enhance the Department of State’s risk management activities, we are making 13 recommendations, which we have categorized into four groups covering (1) consistency and reliability of data; (2) applicability and effectiveness of physical security standards; (3) identification of risks and mitigation of vulnerabilities; and (4) development of risk management policies. To improve the consistency and data reliability of Department of State risk management data, we recommend that the Secretary of State: 1. Direct M/PRI, DS, and OBO to define the conditions when a warehouse should be categorized as an office facility and meet appropriate office physical security standards. 2. Direct M/PRI, DS, and OBO to harmonize the terminology State uses to categorize facilities in State’s physical security standards and property databases. 3. 
Direct OBO to establish a routine process for validating the accuracy of the data in OBO’s property database. 4. Direct DS to establish a routine process for validating the accuracy of the data in DS’s risk matrix. 5. Direct the Under Secretary for Management to identify and eliminate inconsistencies between and within the FAM, FAH, and other guidance concerning physical security. To strengthen the applicability and effectiveness of the Department of State’s physical security standards, we recommend that the Secretary of State work through DS or, in his capacity as chair, through the OSPB to: 6. Develop physical security standards for facilities not currently covered by existing standards. 7. Clarify existing flexibilities in the FAH to ensure that security and life-safety updates to the OSPB standards and Physical Security Handbook are made through an expedited review process. 8. Develop a process to routinely review all OSPB standards and the Physical Security Handbook to determine if the standards adequately address evolving threats and risks. 9. Develop a policy for the use of interim and temporary facilities that includes definitions for such facilities, time frames for use, and a routine process for reassessing the interim or temporary designation. To strengthen the effectiveness of the Department of State’s ability to identify risks and mitigate vulnerabilities, we recommend that the Secretary of State: 10. Direct DS to automate its documentation of waivers and exceptions, and ensure that DS officials in headquarters and at each post have ready access to post’s waivers and exceptions documentation. 11. Direct DS to routinely ensure that necessary waivers and exceptions are in place for all work facilities at posts overseas. 12. Direct DS to develop a process to ensure that mitigating steps agreed to in granting waivers and exceptions have been implemented. 
To strengthen the effectiveness of the Department of State’s risk management policies, we recommend that the Secretary of State: 13. Develop a risk management policy and procedures for ensuring the physical security of diplomatic facilities, including roles and responsibilities of all stakeholders and a routine feedback process that continually incorporates new information. We provided a draft of this report for review and comment to State and USAID. We received written comments from State, which are reprinted in appendix II. State agreed with 12 of our 13 recommendations and highlighted a number of actions it is taking or plans to take to address the problems that we identified. State noted that it is not in a position to agree or disagree with our recommendation that it develop a policy for the use of interim and temporary facilities because an internal State working group is currently in the process of evaluating this issue. USAID did not provide written comments on the report. We also received technical comments from each agency, which we incorporated throughout our report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of State, and Administrator for USAID. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8980 or courtsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
The objectives of our report were to evaluate (1) how the Department of State (State) manages risks to work facilities under chief-of-mission authority overseas; (2) the adequacy of State’s physical security standards for these work facilities; (3) State’s processes to mitigate vulnerabilities when older, acquired, and temporary work facilities overseas do not meet physical security standards; and (4) how State’s risk management activities align with its risk management policy and risk management best practices. Our scope included older, acquired (purchased or leased), and temporary diplomatic work facilities overseas, such as offices and warehouses built before the June 1991 security-construction standards. For the purposes of travel and review of post-specific documentation, we narrowed our scope to posts determined by State’s Bureau of Diplomatic Security (DS) to be high-threat, high-risk posts and which had older, acquired, and temporary diplomatic work facilities. We selected a judgmental sample of 10 posts from the 50 posts rated as the highest-threat, highest-risk. Our selection included posts placed under the new DS High Threat Programs Directorate, as well as those not placed under the new directorate but excluded posts with new embassy compounds. Our sample included posts in nine countries in three of State’s geographic regions—Africa, the Near East, and South and Central Asia. We are not naming the specific posts we visited for this review due to security concerns. For these posts, we reviewed the asset, threat, vulnerability, and risk documentation related to the post and its nonresidential facilities and conducted a physical security review of their nonresidential facilities. We conducted interviews at each of these posts with post officials, including DS’s Regional Security Officers (RSOs). We also reviewed similar documentation for 4 other high-threat, high-risk posts and interviewed officials about the documentation by video teleconference. 
In addition to the 14 posts, we traveled to two other posts and conducted interviews with post officials but did not review post-specific documentation or review facilities. Our findings from these posts are not generalizable to all posts. Moreover, our judgmental selection of high-threat, high-risk posts cannot be generalized to other high-threat, high-risk posts. To provide context and background and address our objectives, we reviewed classified, sensitive-but-unclassified, and unclassified documents, including U.S. laws; State’s physical security policies and procedures as found in memoranda, guidance, the Foreign Affairs Manual (FAM), and Foreign Affairs Handbooks (FAH)—most notably, the Physical Security Handbook and the Overseas Security Policy Board (OSPB) standards; DS documentation of anti-U.S. attacks, overseas posts’ physical security surveys, threat and risk ratings, and physical security waivers and exceptions; post-specific documents pertaining to physical security; State’s Bureau of Overseas Buildings Operations (OBO) facility, construction, and physical security upgrade documentation; U.S. Agency for International Development (USAID) facility and physical security documentation; classified and unclassified Accountability Review Board (ARB) reports resulting from physical security attacks and State’s documents evaluating their response to ARB recommendations; past GAO, State Office of Inspector General (OIG), and Congressional Research Service reports; and reports by congressional committees and independent panels. We also interviewed several officials in Washington, D.C., about risk management and physical security policies and standards and their implementation; these officials were from DS; OBO; State’s Office of Management Policy, Rightsizing, and Innovation (M/PRI); State’s Bureau of Conflict and Stabilization Operations; State’s Bureau of African Affairs; and OIG; as well as from USAID security officials. 
To provide further context and background, we also analyzed State and USAID data of physical security funding allotments, interviewed officials about the data, and found the data to be sufficiently reliable to report at an aggregated level. However, we found that State runs other programs, such as OBO’s major rehabilitation program and DS’s technical field support efforts, which may include physical security upgrades as part of such projects. We did not include funding from those other State sources in our presentation of the data. To address how State manages risk to work facilities, we evaluated the reliability of OBO’s facility data in its property database, the timeliness and tracking of posts’ triennial physical security surveys, and the reliability of the data DS uses to assess risk. To evaluate the reliability of the data in OBO’s property database, OBO provided us with work facility records pulled from the database between May 2013 and January 2014. We compared these records to information we collected during discussions with post officials to identify excess facilities, missing facilities, and inaccuracies in the data. Although we identified data reliability issues for some facilities in OBO’s property database, as those issues generally involved the classification or description of facilities, we determined that the data were sufficiently reliable to describe the approximate number of U.S. diplomatic work facilities overseas. To review the timeliness of DS’s tracking of posts’ triennial physical security surveys, we requested and obtained most surveys for work facilities at the 14 posts where we reviewed facility documentation. We reviewed the documentation to determine whether each survey had been completed in the past 3 years and reviewed the documents obtained against our list of facilities for each post to determine if DS provided all the relevant surveys. 
If DS did not provide us with a survey we expected, we followed up with DS officials in headquarters to determine whether or not survey documentation for a facility existed and if it was appropriately maintained and tracked in headquarters and at post. To evaluate the reliability of the data DS uses to assess risk to office facilities overseas, we examined a copy of the most recent risk matrix completed by DS and identified (1) inaccurate information based on post-specific information gathered throughout the course of the engagement, (2) missing off-compound facilities and inaccurate information based on an analysis of the facilities included in the risk matrix, and (3) missing data points. Although we identified data reliability issues in the risk matrix, which may affect the risk scores for individual posts, we determined that the data were sufficiently reliable to broadly characterize overall facility vulnerability and risk scores at the aggregate level. To address the adequacy of State’s physical security standards, we evaluated the consistency of select physical security standards across various types of policy guidance and the timeliness of updates to those policies. We conducted a number of activities to evaluate the consistency of physical security standards. We reviewed the physical security standards for work facilities described in the FAM and the FAH— specifically, the FAH sections containing the OSPB standards and the Physical Security Handbook—and identified inconsistencies between the FAM and the FAH and inconsistencies between the two sections of the FAH. We also reviewed policy guidance documented in memoranda and compared that to the physical security standards outlined in the FAH. Finally, we identified inconsistencies between the FAH and the OBO Building and Zoning Codes discussed in an OIG report. 
To evaluate the timeliness of updates to physical security standards in the FAM and the FAH, we interviewed DS officials to understand the process State follows to update the FAM and the FAH and determined that updates should take approximately 60 to 90 days. We then reviewed (1) all ARB reports resulting from attacks on U.S. diplomatic facilities since 1998 or that included recommendations related to physical security and (2) joint DS-OBO memoranda concerning physical security standards in process. We examined this documentation and identified instances in which it has taken State more than a year to update these standards. To address how State mitigates vulnerabilities if facilities do not meet applicable physical security standards, we asked post officials a standard set of questions; identified several ways to measure general compliance with physical security standards; and evaluated State’s waivers and exceptions process. Using the judgmental sample described above, we traveled to 12 posts and conducted work focused on 4 other posts by teleconference. Our sample included nine countries in three of State’s geographic regions—Africa, the Near East, and South and Central Asia. As noted above, we selected these posts due to their relatively high DS-established threat and risk ratings and the presence of facilities that fell within our scope. For security reasons, we are not naming the specific posts we visited for this review. At 16 posts overseas, we asked State, USAID, and other agency officials in person and by videoconference a standard set of questions regarding the implementation of physical security policies and procedures to understand how State identifies and mitigates vulnerabilities. We also identified three ways to measure general compliance with physical security standards based on State documentation. First, we reviewed the list of embassy and consulate compounds. 
We found that a significant number were constructed or acquired prior to 1991; because those facilities are only required to meet many of the physical security standards to the maximum extent feasible or practicable, we determined that many of those facilities may not meet standards. Second, we reviewed DS’s list of approved waivers and exceptions, which DS uses to track this documentation, and counted the number of facilities with colocation and setback waivers and exceptions. We determined that each facility with a waiver or exception does not meet all physical security standards. We interviewed DS officials about the waivers and exceptions spreadsheet. While we found problems with some entries in the spreadsheet, we determined that the data were sufficiently reliable to report a general order of magnitude of the number of waivers and exceptions. Third, we obtained and reviewed the 2013 risk matrix that DS completed in September 2013. We then reviewed the facility compliance scores for each facility ranked in DS’s risk matrix to determine the number of facilities that DS has found do not meet most OSPB standards for new facilities. To make that determination, we identified all facilities with a standards compliance score in the bottom half of the 10-point range. However, because SECCA’s 100-foot setback requirement received its own rating in the matrix and was not considered as part of the facility compliance rating, our analysis of DS’s standards compliance score does not include the extent to which facilities met the 100-foot setback requirement. Due to the limitations with DS’s ratings that we noted earlier, we are only reporting this information to provide a broad indication of concerns with facilities’ compliance with standards and not to provide a precise estimate of the number of facilities with particular ratings. 
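The compliance-score screen described above amounts to a simple threshold filter over the risk matrix. The sketch below illustrates the logic; the facility names and scores are invented, and treating "bottom half" as scores below the 5-point midpoint of the 10-point range is our assumption about how that cutoff would be operationalized.

```python
# Hypothetical facility compliance scores on a 10-point scale
# (names and values are invented for illustration only).
compliance_scores = {
    "Facility A": 8.5,
    "Facility B": 4.0,
    "Facility C": 2.5,
    "Facility D": 6.0,
}

MIDPOINT = 5.0  # assumed cutoff: bottom half of the 10-point range

# Facilities that would be flagged as meeting few of the standards.
flagged = sorted(name for name, score in compliance_scores.items()
                 if score < MIDPOINT)
print(flagged)  # ['Facility B', 'Facility C']
```

Note that, as the report states, a separate setback rating sits outside this score, so a facility could pass the filter while still failing the 100-foot setback requirement.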
Furthermore, at 10 posts we visited, we evaluated the compliance of all work facilities—a combined total of 43 different offices and warehouses— against the existing physical security standards. Prior to reviewing overseas facilities, we reviewed prior recommendations made by OIG or the Interagency Security Assessment Teams. We then developed a physical security checklist for each of the four facility types we reviewed— chanceries or consulates, sole occupant of building or compound, tenant of commercial office space, and unclassified warehouse—on the basis of the current security standards specified in the OSPB standards and the Physical Security Handbook. The physical security requirements in the OSPB standards vary by facility type, date of construction or acquisition, and threat level. Because we identified some inconsistencies between these two policy guides, we always included the higher of the two standards in our physical security checklist in those instances in which we identified an inconsistency. For example, the OSPB did not include the compound emergency sanctuary requirement in the OSPB standards until after our post visits in December 2013. However, because State included the standards for compound emergency sanctuaries in the Physical Security Handbook in October 2012, we assessed facilities against this standard during our facility reviews. We then used these checklists to evaluate the compliance of work facilities at the 10 posts we visited. In general, the facilities in our sample were not comparable to those on recently constructed embassy or consulate compounds, which were constructed to meet current security standards. Our findings from these posts are not generalizable to all posts. 
To evaluate the adequacy of State’s waivers and exceptions process, which is one process by which State mitigates vulnerabilities when facilities do not meet standards, we reviewed DS’s list of waivers and exceptions, post-specific physical security surveys, waivers and exceptions for 14 of the 16 posts at which we conducted work, and our post-specific physical security checklists for the 10 posts to which we traveled. We then analyzed DS’s list of waivers and exceptions against the other documentation we collected and our physical security checklists to identify any issues with DS’s tracking of waivers and exceptions. We also reviewed our physical security checklists and identified all security deficiencies for which a waiver or exception should have been requested; we then compared that information with DS’s list of waivers and exceptions and the post-specific waivers and exceptions to identify missing waivers and exceptions. In addition, we reviewed the post-specific documentation to determine if post officials requested waivers and exceptions in a timely manner and if the documentation was accurate. Finally, we identified mitigation measures outlined in the approved waiver or exception request that the post was expected to implement and evaluated that information against our physical security checklists to determine if all agreed-upon mitigation measures had been implemented. To address how State’s risk management activities align with its policies and best practices, we assessed DS’s risk management policy and, drawing on our other findings, State’s current risk management efforts against best practices identified by GAO as well as federal standards for internal control. In addition, we reviewed M/PRI’s ARB recommendation matrix to assess the extent to which State had addressed and closed past ARB recommendations. 
However, based on the work we conducted when reviewing the timeliness of updates to physical security standards, we identified two instances of recommendations that State closed though it had not completed the actions cited in closing them. We conducted this performance audit from March 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The original version of this report is a restricted report and was issued on June 5, 2014; copies of that report are available for official use only. This public version of the original report does not contain certain information that State regarded as Sensitive but Unclassified and requested that we remove. We provided State a draft copy of this public report for sensitivity review, and State agreed that we had appropriately removed all Sensitive but Unclassified information. Michael J. Courts, (202) 512-8980 or courtsm@gao.gov. In addition to the contact named above, Anthony Moran (Assistant Director), Amanda Bartine, Thomas Costa, David Dayton, Etana Finkler, Farhanaz Kermalli, Ann McDonough-Hughes, Brian Tremblay, and Ozzy Trevino made key contributions to this report. John Bauckman, Martin De Alteriis, Mark Dowling, Brandon Hunt, Mary Moutsos, and Ramon Rodriguez provided technical assistance.
U.S. policy can call for U.S. personnel to be posted to high-threat, high-risk posts overseas. To maintain a presence in these locations, State has often relied on older, acquired (purchased or leased), and temporary work facilities that do not meet the same security standards as more recently constructed permanent facilities. GAO was asked to review how State assures the security of these work facilities. GAO evaluated (1) how State manages risks at work facilities overseas; (2) the adequacy of State's physical security standards for these facilities; (3) State's processes to address vulnerabilities when older, acquired, and temporary overseas facilities do not meet physical security standards; and (4) the extent to which State's activities to manage risks to its overseas work facilities align with State's risk management policy and with risk management best practices. GAO reviewed U.S. laws and State's policies, procedures, and standards for risk management and physical security. GAO reviewed facilities at a judgmental sample of 10 higher-threat, higher-risk, geographically dispersed, overseas posts and interviewed officials from State and other agencies in Washington, D.C., and at 16 overseas posts, including the 10 posts at which GAO reviewed facilities. To manage risks at its overseas work facilities, the Department of State (State) tracks information about each facility, assesses threat levels at posts, develops security standards to meet threats facing different types of facilities overseas, identifies vulnerabilities, and sets risk-based construction priorities. For example, State assesses six types of threats, such as terrorism, and assigns threat levels, which correspond to physical security standards at each overseas post. However, GAO found several inconsistencies in terminology used to categorize properties and within the property inventory database used to track them, raising questions about the reliability of the data. 
For example, GAO identified a facility categorized as a warehouse that included offices and therefore should have been subject to more stringent standards. Gaps in categorization and tracking of facilities could hamper the proper implementation of physical security standards. Although State has established physical security standards for most types of overseas facilities, GAO identified some facility types for which standards were lacking or unclear, instances in which the standards were not updated in a timely manner, and inconsistencies within the standards. The following are examples: It is unclear what standards apply to some types of facilities. In some instances, updating standards took more than 8 years. One set of standards required anti-ram perimeter walls at medium- and higher-threat posts; another required them only at higher-threat posts. Furthermore, GAO found that State lacks a process for reassessing standards against evolving threats and risks. GAO identified several posts that put security measures in place that exceed the standards because the standards did not adequately address emerging threats and risks. Without adequate and up-to-date standards, post officials rely on an ad hoc process to establish security measures rather than systematically drawing upon collective subject-matter expertise. Although State takes steps to mitigate vulnerabilities to older, acquired, and temporary work facilities, its waivers and exceptions process has weaknesses. When posts cannot meet security standards for a given facility, the posts must submit requests for waivers and exceptions, which identify steps the post will take to mitigate vulnerabilities. However, GAO found that neither posts nor headquarters systematically tracks the waivers and exceptions and that State has no process to re-evaluate waivers and exceptions when the threat or risk changes.
Furthermore, posts do not always request required waivers and exceptions and do not always take required mitigation steps. With such deficiencies, State cannot be assured it has all the information needed to mitigate facility vulnerabilities and that mitigation measures have been implemented. GAO found that State has not fully developed and implemented a risk management policy for overseas facilities. Furthermore, State's risk management activities do not operate as a continuous process or continually incorporate new information. State does not use all available information when establishing threat levels at posts, such as when posts find it necessary to implement measures that exceed security standards. State also lacks processes to re-evaluate the risk to interim and temporary facilities that have been in use longer than anticipated. Without a fully developed risk management policy, State may lack the information needed to make the best security decisions concerning personnel and facilities. To manage risk to overseas work facilities, State conducts a range of ongoing activities, including the setting of security standards. However, GAO identified a number of problems with these activities. Moreover, GAO found that State lacked a fully developed risk management policy to coordinate these activities (see figure). This is the public version of a Sensitive but Unclassified report by the same title. GAO is making 13 recommendations for State to address gaps in its security-related activities, standards, and policies. State generally agreed with GAO's recommendations. Specifically, GAO is recommending that the Secretary of State:
1. Define the conditions when a warehouse should be categorized as an office facility and meet appropriate security standards.
2. Harmonize the terminology State uses to categorize facilities in its security standards and property databases.
3. Establish a routine process for validating the accuracy of the data in State's property database.
4. Establish a routine process for validating the accuracy of the data in State's risk matrix.
5. Identify and eliminate inconsistencies between and within State's physical security guidance.
6. Develop physical security standards for facilities not currently covered by existing standards.
7. Clarify existing flexibilities to ensure that security and life-safety updates to the security standards are made through an expedited review process.
8. Develop a process to routinely review all security standards to determine if the standards adequately address evolving threats and risks.
9. Develop a policy for the use of interim and temporary facilities that includes definitions for such facilities, time frames for use, and a routine process for reassessing the interim or temporary designation.
10. Automate waivers and exceptions documentation, and ensure that headquarters and post officials have ready access to the documentation.
11. Routinely ensure that necessary waivers and exceptions are in place for all work facilities at posts overseas.
12. Develop a process to ensure that mitigating steps agreed to in granting waivers and exceptions have been implemented.
13. Develop a risk management policy and procedures for ensuring the physical security of diplomatic facilities, including roles and responsibilities of all stakeholders and a routine feedback process that continually incorporates new information.
While there is no single definition of municipal fiscal crisis, both academic research and state policy documents distinguish between municipalities in distress, in crisis, and, in extreme cases, in bankruptcy. In managing revenue and expenses, local governments occasionally confront deficits and periods when they lack enough cash to cover expenses. Most of the time, they find ways to get through the temporary trouble by, for example, borrowing money over the short term. But when budget gaps widen and a city cannot pay its bills, meet its payroll, balance its budget, or carry out essential services, the local government is viewed as distressed. Municipal officials usually respond with some combination of service cuts, worker layoffs, tax and fee increases, reserve spending, and borrowing. If those measures do not work and the city no longer has the money to meet its obligations, the distress can escalate into a crisis or financial emergency, which may include defaulting on a bond payment or, in rare instances, filing for protection under Chapter 9 of the U.S. Bankruptcy Code (Chapter 9). Chapter 9 provides a municipality with protection from creditors while the municipality develops and negotiates a plan for adjusting its debts. Among other requirements, a municipality may seek such bankruptcy protection in a federal bankruptcy court if it is authorized to do so under state law and if it can prove to the bankruptcy court that it is insolvent. Twenty-seven states authorize municipalities to file for Chapter 9 bankruptcy, but 15 of those states have conditions or limitations on the authorization. Of the remaining 23 states, 21 do not have specific authorizations, and 2 specifically prohibit their municipalities from filing for Chapter 9. Chapter 9 filings are rare for general purpose municipalities (e.g., cities, towns, and counties). From January 1980 to June 2014, 43 of approximately 39,000 general purpose municipalities filed for Chapter 9.
These municipalities tended to be small in population: only 8 of the 43 municipalities had a population over 50,000. Two of the four municipalities in our review have filed for Chapter 9: Detroit, Michigan, and Stockton, California (see figure 1). Congress has provided assistance to municipalities in fiscal crisis by using a variety of approaches on a case-by-case basis. For example, in 1975 New York City faced a serious fiscal crisis. New York City had accumulated $14 billion in debt and was unable to pay for normal operating expenses. That year Congress passed legislation to provide short-term loans to New York City to assist with its fiscal crisis. As a condition of receiving these loans, the city had to agree to develop more stringent financial procedures, including a new accounting system that would allow an auditor to perform an audit and render an opinion on the city's financial statements. In a prior report on New York City's financial plan, we concluded that the federal government's intervention, along with other factors, helped to stabilize the city's fiscal crisis. Congress also took steps to assist the District of Columbia during its fiscal crisis in 1995. In 1994, the District was running a $335 million budget deficit and could no longer pay its bills. In response, Congress passed the District of Columbia Financial Responsibility and Management Assistance Act in April 1995. This act established the District of Columbia Financial Responsibility and Management Assistance Authority—a financial control board—to assist the District in restoring financial solvency and improving management effectiveness during a control period. By 2001, the District had balanced its budget for 4 consecutive fiscal years in accordance with generally accepted accounting principles, obtained access to both short-term and long-term credit markets, and repaid outstanding debt it owed to the U.S. Treasury.
As a result, the control period ended and the District returned to self-governance. In 2001, we testified that Congress' creation of a control board contributed to the improvement in the fiscal health of the District. In prior work, we identified several guidelines for Congress to consider when evaluating the need for a federal response to a large failing firm or municipality. Those guidelines included considering whether the problem was localized or widespread and whether the costs of a municipal collapse would outweigh the costs of providing aid. We also provided guidelines for structuring a federal intervention, such as developing clear goals and objectives and protecting the financial interest of the federal government. In addition to these federal efforts, 19 states have passed laws establishing mechanisms to assist municipalities in fiscal crisis, in part to avoid the need for these entities to file for Chapter 9 protection in federal court. These laws may designate a receiver, emergency fiscal manager, state agency head, or financial control board to administer the intervention. Depending on the state, this entity may take a number of actions, including restructuring debt and labor contracts, raising taxes and fees, offering state-backed loans and grants, providing technical assistance, and even dissolving the local government. Three of the four municipalities in our review were subject to state interventions to assist with the fiscal crisis. Camden, New Jersey, had a state-assigned fiscal monitor that provided oversight. Both Detroit, Michigan, and Flint, Michigan, had state-appointed emergency fiscal managers with broad authority to oversee all operations of government in lieu of elected officials. There was no state intervention in Stockton, California (see figure 1 above).
The 19 states with laws allowing intervention in municipal fiscal crisis are: Connecticut, Florida, Indiana, Illinois, Maine, Massachusetts, Michigan, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Carolina, Ohio, Oregon, Pennsylvania, Rhode Island, Tennessee, and Texas. See Pew Charitable Trusts, The State Role in Local Government Financial Distress (Washington, D.C.: July 2013). Grants represent one form of federal assistance consisting of payments in cash or in kind to a state or local government or a nongovernmental recipient for a specified purpose. Grant programs are typically subject to a wide range of accountability requirements under their authorizing legislation or appropriation and implementing regulations so that funding is spent for its intended purpose. For example, the Department of Housing and Urban Development (HUD) administers Community Development Block Grants (CDBG) to aid states and localities in providing housing, economic development, and other community development activities. Congress mandated that HUD administer these grant programs in a manner that principally benefits low- and moderate-income persons, aids in the prevention or elimination of slums or blight, or meets urgent community development needs. HUD regulations direct grant recipients to prepare planning documents and maintain certain records demonstrating compliance with the legislation's requirements as a condition of receiving funds. In addition, grant programs are also subject to crosscutting requirements applicable to most assistance programs. For example, recipients of grant funds are prohibited from using those funds to lobby members and employees of Congress and executive agency employees. The Office of Management and Budget (OMB) is responsible for developing government-wide policies to ensure that grants are managed properly and that grant funds are spent in accordance with applicable laws and regulations.
Until recently, OMB published guidance in various circulars to aid grant-making agencies with such subjects as audit and record keeping and the allowability of costs. In December 2013, OMB consolidated its grants management circulars into a single document, Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards, to streamline its guidance, promote consistency among grantees, and reduce administrative burden on nonfederal entities. For this review, we selected the grant programs listed below. For a brief description of these programs as well as the award amounts for our selected cities, see appendix II.
Community Development Block Grant Entitlement Program (CDBG), administered by the Department of Housing and Urban Development (HUD).
HOME Investment Partnerships Program (HOME), administered by HUD.
Federal Transit Formula Grant Program, administered by the U.S. Department of Transportation's (DOT) Federal Transit Administration (FTA).
Highway Planning and Construction Grant Program, administered by DOT's Federal Highway Administration (FHWA).
Edward Byrne Memorial Justice Assistance Grant Program (JAG), administered by the Department of Justice (Justice).
Community Oriented Policing Services (COPS) Hiring Program, administered by Justice.
Assistance to Firefighters Grant Program (AFG), administered by the Department of Homeland Security's (DHS) Federal Emergency Management Agency (FEMA).
Staffing for Adequate Fire and Emergency Response (SAFER) grant program, administered by FEMA.
The capacity of grant recipients is a key factor in grants management, which can have a significant impact on a program's success. Capacity involves both the maintenance of appropriate resources and the ability to effectively manage and utilize those resources.
In prior work, we have identified several different types of capacity. Human capital capacity describes the extent to which an organization has sufficient staff with the knowledge and technical skills needed to effectively meet its goals and objectives. Financial capacity is the ability of an organization to meet financial responsibilities related to federal grants, such as matching requirements. Organizational capacity refers to the degree to which an organization is institutionally prepared for grant management and implementation, including its ability to employ technology for grant oversight and reporting. A lack of capacity in any of these three dimensions can adversely impact a recipient's ability to effectively manage and implement federal grants. We found challenges related to each of these three types of capacity at the four municipalities we reviewed. All four municipalities experienced reductions in their human capital capacity due to fiscal crisis, but the effect of those reductions on the management of selected grants varied. From 2009 to 2013, these municipalities experienced workforce declines ranging from 18 to 44 percent (see table 1). In an effort to cut costs, these municipalities laid off city employees, imposed furloughs, and cut wages (which, according to officials in Stockton, in turn led to higher staff attrition rates). In three municipalities—Detroit, Flint, and Stockton—this downsizing directly impacted city staff responsible for the management and oversight of federal grants. For example, Detroit's Planning and Development Department, which administers HUD's CDBG and HOME grants received by the city, lost more than a third of its workforce between 2009 and 2013—falling from 173 to 110 FTEs. According to Detroit officials, it was difficult for the staff that remained to carry out all of the department's grant compliance and oversight responsibilities.
They said the loss ultimately contributed to adverse single audit findings, monitoring findings, and special grant conditions from HUD. For example, in a 2013 monitoring report for the CDBG program, HUD found seven deficiencies, such as incorrect grant charges for staff time and failure to demonstrate adequate controls to prevent charging CDBG for unallowable costs. HUD officials also noted that Detroit had failed to close its single audit findings from fiscal years 2010 through 2012 in part due to a lack of human capital capacity. According to HUD's monitoring report, Detroit did not "have the capacity to improve its capacity." For additional information on the issue of grant administrative costs, see GAO, Grants Management: Programs at HHS and HUD Collect Administrative Cost Information but Differences in Cost Caps and Definitions Create Challenges, GAO-15-118 (Washington, D.C.: Dec. 12, 2014). The number and type of grants received by the city can also influence how severely grant management staff are impacted. In addition to having a sufficient number of staff, municipalities also need to have personnel with the right knowledge, skills, and abilities to manage their grants effectively. Local officials in Detroit, Flint, and Stockton told us that reductions in staff due to fiscal crisis led to grant management skills gaps in their workforce. With overall lower staff numbers, remaining staff were left to cover a larger set of responsibilities, including managing grant programs that they had not been familiar with prior to the staff reductions. Officials representing Detroit and Flint told us that when they lost grant management staff, the resulting skill shortage sometimes contributed to violations of grant agreements or grant funds remaining unspent in city accounts.
For example, Flint’s Department of Community and Economic Development, which administers HUD’s CDBG and HOME grant programs, lost a number of key staff during its fiscal crisis through layoffs and attrition, including an experienced employee who reviewed and approved grant expenditures. Flint officials as well as HUD’s technical assistance providers for the CDBG and HOME programs told us that losing staff with critical grant management knowledge contributed to compliance problems, resulting in a series of critical audits of Flint’s HOME program by HUD’s Inspector General from 2009 to 2013.According to staff from HUD’s Office of the Inspector General, staff turnover in Flint contributed to grant management knowledge gaps and subsequent audit findings. These findings had serious monetary consequences for the city of Flint. Flint officials told us that the city owed HUD approximately $1.1 million in 2014 because Flint could not ensure that its indirect costs had been appropriately calculated and allocated across HUD’s grant programs. In addition to increasing the risk of violations of grant agreements, losing grant management skills made it more difficult for officials in Detroit, Flint, and Stockton to draw down grant funds. For example, staffing levels in Detroit’s Department of Transportation, which administers FTA’s Federal Transit Formula Grant program, fell from 1,514 FTEs in 2009 to 809 in 2013. In addition, over the span of 3 years, the department had 4 directors. According to Detroit officials, this change in management caused a lack of direction and consistency in priorities, which particularly affected the departments’ procurement staff. Federal Transit Formula grantees—including Detroit— use these grant funds to finance the procurement and maintenance of transit equipment and facilities, such as buses and bus terminals. 
A lack of employees with the skills to process procurement requests and administer grants caused some grant funds from FTA to remain unspent in accounts. Officials in two municipalities—Detroit and Stockton—told us that turnover in senior- and mid-level management contributed to federal grant management challenges. According to city officials, this happened for two reasons. First, because some cities in fiscal crisis must furlough employees, lower salaries, or reduce retirement benefits, senior staff members chose to leave their positions while they could still vest their retirement benefits based on their highest salary levels. Second, more experienced staff members had more marketable skills and were able to find other jobs more easily than the junior staff members. Officials in these two municipalities explained that losing senior staff created gaps in institutional knowledge and made it more difficult for remaining staff to meet existing grant requirements. These gaps in institutional knowledge were exacerbated by a lack of robust knowledge transfer practices, which heightened the risk to federal fund management as a city government lost staff because there was no mechanism in place for staff to pass down knowledge to their successors before they left. Knowledge management had been a long-standing challenge for the city of Detroit. Detroit had few written grant policies to help transfer knowledge about grants management. According to a city of Detroit report and Detroit officials, grant management policies and procedures in Detroit varied among grant-recipient departments. Some departments had policies and procedures while others did not. This resulted in ad hoc procedures, passed on from one employee to the next. When an employee who was knowledgeable in one area of grants management retired, his or her knowledge also left. 
Detroit officials said they believed that limitations in the city's ability to effectively manage and preserve existing knowledge and expertise regarding grant management contributed to the city's history of poor audit findings. Detroit had 90 compliance findings on its single audit in 2011 and 98 findings in 2012. The questioned costs totaled $31.6 million in 2011 and $14.8 million in 2012. A cost becomes questioned when the auditors review grant expenditures and cannot find sufficient documentation to prove that the expenditure was eligible under the terms of the related grant program. In some cases, Detroit had to return part or all of these questioned costs to relevant federal departments or had funding withheld. As of February 2015, Detroit officials were working to implement written grant management policies and procedures as a part of the city's response to its fiscal crisis and bankruptcy. A lack of financial capacity at two of the municipalities we reviewed—Flint and Stockton—reduced their ability to apply for federal grants that call for local resource investments or maintenance of effort provisions. Officials in Flint told us that they struggled to generate local resources needed to make the city competitive for some federal grants. A manager with Flint's Department of Transportation told us that the city wanted to apply for a Transportation Investment Generating Economic Recovery (TIGER) grant, which is a competitive grant program administered by DOT that supports road, rail, transit, and port projects. TIGER grant applications are evaluated in part by the level of nonfederal financial commitments that grantees are able to contribute to the proposed project. Because of the city's limited budgetary resources, Flint needed to postpone submitting an application for at least 3 years in order to obtain the local funds to make the application competitive.
Other federal grant programs require grantees to demonstrate that they will maintain the level of nonfederal funding for the program that was in effect prior to receiving the federal grant award. The purpose of this maintenance of effort requirement is to prevent grantees from substituting federal dollars for local dollars. Flint and Stockton did not apply for competitive federal grants with maintenance of effort requirements because their city governments were unable to ensure that they would maintain nonfederal funding at current levels. For example, officials in Stockton told us that the city decided not to reapply for an AFG grant because it could not afford the maintenance of effort requirements. As part of the AFG grant terms, a grantee must agree to maintain local expenditure levels of at least 80 percent of the average expenditures in the 2 fiscal years prior to the grant award. These officials told us that certifying that they would maintain expenditure levels was not always possible for municipalities in fiscal crisis. In two municipalities we reviewed, a chronic lack of investment in organizational capacity—specifically in information technology (IT) systems—challenged the ability of these communities to oversee and report on grants in an accurate and timely way. In Detroit, the IT systems that handled grants management were outdated and fragmented, making it difficult to capture reliable financial information. Senior city officials told us that they did not know the total amount of grant funds Detroit received from the federal government because their various IT systems did not communicate with one another. According to an outside review commissioned by the city to assess its grant management system, grant account information appeared in numerous makeshift spreadsheets that did not necessarily match the city's central accounting system, and Detroit's general ledger did not update automatically with grant payroll or budgeting data.
These IT inconsistencies made it impossible for Detroit to capture reliable financial information. The report also found that basic accounting practices like proper award setup and closeout, cost allocation, and reconciliation were overlooked or omitted, leaving Detroit with mismatched records and grant funds that were subject to expiration. In Detroit's 2011 and 2012 single audit reports, external auditors found IT deficiencies in every federal grant program they reviewed. As a result of these and other single audit findings, Detroit's general fund had to cover disallowed costs and federal grant de-obligations. In other words, these broken IT systems exacerbated the fiscal crisis by contributing to inefficiencies and extra costs for the city's general fund. Although the grants accounting system in Stockton generally produced reliable financial information, senior city officials told us that the system could not generate timely reports to inform local decision making. Stockton's 20-year-old accounting system did not generate the automatic reports that more modern systems are designed to produce. This required city employees to manually process financial data to produce financial reports. Because of the time involved, city employees often chose not to produce the reports, leading to late reporting and outdated numbers. For example, rather than running comparisons of budgeted spending to actual spending on a monthly basis, senior Stockton officials told us that city employees had instead produced these comparisons on a quarterly basis. Members of the Stockton City Council as well as local auditing groups told us that the absence of timely financial data made it more difficult for the city's leadership to make informed financial decisions. Three municipalities—Flint, Stockton, and Detroit—have consolidated their grant management processes in an effort to improve citywide oversight and accountability for federal grant funds.
To address challenges with financial and organizational capacity, Flint and Stockton instituted a new grant application preapproval process for all city departments. As part of the new process, whenever a city department official intends to apply for a federal grant, that department official must notify city finance officials for approval to apply. The city finance officials review this notification to identify any potential costs for the city that the grant may entail. If these officials approve this notification, the department may apply for the federal grant. Officials from both these cities told us that this process was intended to notify appropriate city officials of any matching or maintenance of effort requirements associated with federal grants. Another benefit that these officials identified was that the notification process allowed city leadership to be aware of any effects that the grants may have on the city's legacy costs, such as retiree health care. Detroit has also taken steps to overhaul its grant management system, including establishing a new citywide Office of Grants Management. Grant management problems have plagued Detroit for years. In April 2012, Detroit signed a consent agreement with the state of Michigan that required the city to restructure its grant management system. As a first step to meet this requirement, officials worked with outside consultants to assess the current state of the city's grants management and to identify potential reforms. Then in June 2014, the Emergency Manager directed the Chief Financial Officer to establish a central Office of Grants Management. According to Detroit officials, benefits of a stable, centralized grants management office include better management, compliance, accountability, oversight, and reporting of grant data, as well as better trained staff and clear, up-to-date grant financial and performance data.
A top priority for this office is to ensure the proper management and fiscal integrity of grants. Detroit officials told us that they have begun the process of implementing grant management policies and procedures to standardize processes across the city and to help build a culture of compliance and integrity. These policies and procedures include grant planning, pre-award processes, award acceptance, post-award management, and compliance and monitoring. Three municipalities in our review—Detroit, Flint, and Camden—collaborated with local nonprofits to apply for federal grants. Officials from these cities told us that this collaboration helped them address challenges they faced with human capital capacity. For example, officials in Detroit worked with the Detroit Public Safety Foundation to identify and apply for federal grants to help support the Detroit Police and Fire Departments. This foundation assisted the police department with its 2014 COPS grant application and the fire department with securing its AFG grants in 2011 and 2013 and its SAFER grants in 2011, 2012, and 2013. Detroit Fire Department officials told us that without the help of the Public Safety Foundation, they would have limited capacity to apply for competitive federal grants. Similarly, the city of Flint partnered with the Flint Area Reinvestment Office and the Charles Stewart Mott Foundation to identify and apply for federal grants. The Flint Area Reinvestment Office is a local nonprofit organization with the mission to "inform, organize, and facilitate local partner collaboration on strategic opportunities that attract federal and state resources." The Charles Stewart Mott Foundation—which began in Flint in 1926—supports a variety of projects through its Flint Area Program, such as economic development, job training, and emergency services projects.
A senior Flint official told us that one of the valuable contributions these local nonprofits made was to coordinate grant applications in the area to help ensure that multiple organizations were not applying for the same grant. These two organizations helped the city apply for a COPS grant, which it received in 2013. In addition to taking steps that directly improve federal grant management—such as consolidating grant management processes and working with local nonprofits to apply for federal grants—some municipalities also recognized the need to address systemic financial and organizational problems to set a proper foundation for sound grant management. Two of the municipalities in our review—Flint and Stockton—established committees to recommend changes in city governance necessary to improve long-term fiscal health and stability. These municipalities recognized that their fiscal issues were the result of long-term, systemic policies and structures. Therefore, they created committees to recommend changes to improve their long-term financial capacity. Flint officials said that systemic changes were needed to protect the fiscal future of the city. In response, Flint's Emergency Manager appointed members to a Blue Ribbon Committee on Governance Reform. The committee explored the structures, policies, and practices that contributed to Flint's financial difficulties. It also proposed changes designed to help the city avoid returning to those difficulties in the future. In June 2014, the committee issued a number of recommendations to Flint's Emergency Manager, including that he embrace the use of multi-year budgeting, strategic planning, and long-term financial forecasts. In November 2014, Flint citizens voted to adopt four of six Blue Ribbon Committee recommendations.
Stockton City Council members created a similar group, the Charter Review Advisory Committee, to advise the council on potential changes to the city charter, including administrative issues, election rules, term limits, and civil service reforms. Effective grant oversight procedures help ensure that waste, fraud, and abuse are minimized and that program funds are being spent appropriately. Such procedures include identifying the nature and extent of grant recipients’ risks and managing those risks; having skilled staff to oversee recipients to ensure they are using sound financial practices and meeting program objectives and requirements; and using and sharing information about grant recipients. Our past work has shown that to ensure that grant funds are used for intended purposes, federal agencies need effective processes for: (1) monitoring the financial management of grants; (2) ensuring results through performance monitoring; (3) using audits to provide valuable information about recipients; and (4) monitoring subrecipients as a critical element of grant success. We reviewed implementation of these monitoring procedures for each of the eight selected grant programs in Detroit, Flint, Camden, and Stockton in fiscal years 2009 through 2013. It is important to note that agencies use these monitoring procedures for all grantees, not just those in fiscal crisis. Four of the programs—CDBG, HOME, JAG, and COPS—consistently assessed risk during this period when determining the amount and type of oversight they would provide their grantees. See figure 2 for an overview of the risk assessments and monitoring actions taken for our selected grant programs in Detroit, Flint, Camden, and Stockton in fiscal years 2009 to 2013. These programs considered a variety of risk factors. 
For example, to assess risk for JAG grants in 2013, program officials used a grant assessment tool that included 29 risk indicators, such as the size of the grant award, timeliness of progress reports, and whether there had been an inspector general audit for the grantee in the previous 2 years. If a grantee scored higher than a certain threshold on these indicators, the grantee would likely be considered a high priority for in-depth monitoring activities, such as enhanced desk reviews or site visits. Two of these four programs—CDBG and HOME—considered risk factors that would likely be impacted by a municipality experiencing fiscal crisis, such as measures of employee loss, turnover, or extended vacancies of key staff. The four other grant programs—Federal Transit Formula, Highway Planning and Construction, AFG, and SAFER—have taken steps toward incorporating more risk assessments into grant monitoring processes. In 2014, FTA began formally using a new list of risk factors to determine whether to conduct enhanced oversight of a Federal Transit Formula grantee. This included whether the grantee had a state financial oversight control board or similar mechanism, which some state agencies require as a result of being in fiscal crisis. Of the eight programs we reviewed, the Federal Transit Formula program was the only one to have a risk factor directly linked to municipal fiscal distress. Prior to this change, FTA conducted routine monitoring reviews of its approximately 600 Federal Transit Formula grantees at least once every 3 years—or about 200 grantees per year. According to FTA officials, the new risk factors allow FTA to better target these reviews based on grantee risk and need. In 2014, FHWA also made improvements to its processes for identifying risk for its locally-administered Highway Planning and Construction grant projects. 
By law, state departments of transportation are the direct recipients of Highway Planning and Construction Grant funds and have the primary role to oversee grant funds that are administered by subrecipients, such as municipalities. However, FHWA is responsible for monitoring the state departments of transportation to ensure that states are accountable for implementing federal requirements and conducting adequate oversight of federal funds. In August 2014, FHWA published an order that established a uniform methodology for assessing risk in the stewardship and oversight of locally-administered Highway Planning and Construction grant funds by state departments of transportation. For example, the order provides a guide for FHWA officials to use to assess the extent to which state departments of transportation have acceptable review and oversight plans detailing state oversight activities for locally-administered projects. FHWA developed this order to help provide reasonable assurance that Highway Planning and Construction grant projects comply with key federal requirements. In carrying out its oversight of AFG and SAFER grants, FEMA conducted both financial and programmatic monitoring. Financial monitoring primarily focuses on statutory and regulatory compliance of financial matters, while programmatic monitoring focuses on grant progress, targeting issues that may impact achievement of the grant’s goals. A FEMA official told us that, prior to fiscal year 2013, the reasons that AFG or SAFER grantees were chosen for in-depth programmatic monitoring were unclear, as those choices were often left to the discretion of regional program officials. These officials explained that, in fiscal year 2013, the agency conducted a baseline risk review of all new grantees to help inform the selection of grants for programmatic monitoring. For financial monitoring, prior to 2013 FEMA applied risk factors to a sample of grants to inform in-depth monitoring decisions.
In response to a recommendation from the DHS Inspector General, program officials said that in 2014 they incorporated a set of three financial questions into the programmatic baseline risk review discussed above. When program officials found deficiencies through monitoring, they typically required corrective actions from the grantee; however, selected municipalities did not always take corrective actions to address these deficiencies. This contributed to continued grant management problems, resulting in a potential financial risk. However, the actual impact of these problems on the proper use of federal funds is unclear. Further, the municipalities appeared to face these difficulties even when officials from different federal programs took different enforcement approaches. For example, in administering and monitoring Federal Transit Formula Grants in fiscal year 2012, FTA contractors found that the Detroit Department of Transportation had increased the amount of some of its contracts by more than $100,000 without including proper documentation to support the changes to the contract. In response, FTA required that Detroit provide evidence of adequate documentation to support future change orders to contracts. As stated in its monitoring reports, FTA required numerous corrective actions from Detroit during our review period (fiscal years 2009 through 2013). Between 2009 and 2013, FTA found over 60 deficiencies with Detroit’s Federal Transit Formula Grants. According to FTA officials, Detroit would submit corrective action plans to address such deficiencies, but would not follow through on the plans. To enforce corrective actions, FTA officials told us that they could choose to withhold funds from Detroit. However, these officials said that they were hesitant to withhold funds because, while doing so may lead to changes in the behavior of local officials, it would also deprive the city’s residents of the benefit of services provided by the funds.
Instead, in April 2013 FTA placed Detroit on restricted draw down status. While the city is in this status, FTA officials told us, all of its requests for payments under its Federal Transit Formula Grant are first to be reviewed by FTA officials to ensure that the costs are eligible for reimbursement and that city officials have included the necessary documentation. HUD officials also found chronic monitoring deficiencies in Detroit, but they took a different enforcement approach. Between 2009 and 2013, HUD’s grant monitoring reports identified 29 deficiencies in Detroit’s CDBG and HOME grant programs. In general, deficiencies were found in the following areas, among others: poor procurement practices, inadequate calculation of administrative and indirect costs to the grants, poor financial reporting, and lack of key documentation. In a December 2012 letter to the city of Detroit, HUD designated the city as a “high risk grantee” and imposed special grant conditions requiring Detroit to provide written procedures for how it would maintain compliance with the regulations governing its grant funds. As a result of these conditions, HUD withheld its fiscal year 2012 formula funds—including CDBG and HOME grants—until Detroit had provided the agency with sufficient documentation to satisfy HUD officials that the city could properly manage the funds. HUD officials told us that the agency released these funds gradually in fiscal year 2013 as Detroit demonstrated that it satisfied the requirements set forth in the grant conditions. HUD had a similar experience with continued monitoring deficiencies with the city of Flint. For example, in its 2011 monitoring report on Flint’s HOME grant program, HUD stated that it had not received responses from Flint on how the city planned to address the agency’s 2010 monitoring findings, despite the fact that Flint officials had repeatedly promised to provide them.
As a result, HUD officials told us that they withheld Flint’s fiscal year 2011 HOME funds, and—similar to the experience in Detroit—only released those funds in 2014 after Flint had addressed its monitoring deficiencies. Both the White House Working Group on Detroit and individual federal agencies took steps to improve collaboration with, and assistance to, municipalities experiencing fiscal crisis. The White House Working Group on Detroit was composed of staff from multiple federal agencies, including OMB, Treasury, HUD, and DOT, and was led by a coordinator who acted as a liaison between the Working Group and the city of Detroit. According to federal officials, the idea and structure of the White House Working Group on Detroit drew heavily from one of the White House’s place-based assistance initiatives: Strong Cities, Strong Communities. In July 2011, the White House launched the Strong Cities, Strong Communities (SC2) pilot, which deployed teams of federal employees from a range of different agencies to work alongside mayors and their staffs in cities—including Detroit. As part of this effort, the administration established a White House Council on Strong Cities, Strong Communities. This council is co-chaired by the Secretary of Housing and Urban Development and the Assistant to the President for Domestic Policy. Among the goals of the SC2 initiative are improving relationships between local and federal agencies and improving coordination across the federal programs needed to spark economic growth in distressed areas. Officials told us that the White House Working Group on Detroit was modeled to be an enhanced version of the SC2 initiative. According to the working group’s coordinator, one objective of the White House Working Group on Detroit was to facilitate information sharing between federal agencies and Detroit officials to help the city solve its fiscal crisis.
It sought to accomplish this objective by meeting with senior city leaders to discuss their priorities and then connecting these officials with available resources or expertise needed to respond to city problems. For example, Detroit officials identified the city’s outdated IT systems as one of the top hurdles to its fiscal recovery. In response, the White House Office of Science and Technology Policy and the National Economic Council convened a group of top IT leaders in municipal government. These experts, dubbed the Tech Team, met with Detroit officials, assessed the city’s IT systems, and developed a set of recommendations with the purpose of streamlining government processes, saving money, and improving city services. Detroit city officials told us that they were following through on the Tech Team’s recommendations. For example, the Tech Team recommended that Detroit establish a cabinet-level position within city government charged with leveraging technology and innovation to improve the delivery of government services. In February 2014, Detroit hired a Chief Information Officer to lead IT improvements in the city. As a first step, the Chief Information Officer stated that Detroit is working on another of the Tech Team’s recommendations—evaluating citywide IT infrastructure—by completing a comprehensive analysis of the current IT systems in the city, providing new computers, and issuing requests for proposals for new records management systems for the police and fire departments. Justice’s Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) collaborated with two of our selected municipalities to help leverage limited local, state, and federal public safety resources. ATF collaborated with the Stockton Police Department to reduce firearms and gang crime. Historically, Stockton has experienced high crime rates.
Officials there explained that much of Stockton’s crime was drug- and gang-related, as the city is located at the intersection of two major drug corridors. Stockton Police Department officials told us that, as a result of severe budget cuts and layoffs since 2009, the department has been unable to respond to nonviolent crimes. Instead, officials said that Stockton has focused its limited capacity on its most violent crimes. Despite these efforts, the city experienced a surge in violent crimes in 2012, with a record 71 homicides. Stockton officials told us that they reached out to ATF to provide technical assistance to the city’s gang crime task force in 2012. ATF responded by collaborating with Stockton on a special operation to: (1) target and remove violent criminals who illegally trafficked and possessed firearms; and (2) dismantle criminal organizations working in Stockton. According to ATF and Stockton officials, experienced undercover ATF agents from throughout the country were deployed alongside local ATF agents and Stockton police officers to conduct covert investigations of some of the most violent criminals in Stockton and surrounding areas. An ATF report found that as a result of this partnership, 44 defendants were charged with a variety of federal firearms, narcotics, and robbery offenses, and 11 more were charged with various state offenses. The operation also resulted in the seizure of 84 firearms and nearly 60 pounds of illegal drugs. The White House Working Group and selected agencies provided flexibilities on some grant requirements to assist municipalities in fiscal crisis. Generally, federal grant programs have rules and requirements regarding how grantees may spend funds. These conditions may be outlined in the legislation that established the grant program or through additional requirements established by the grant-making agency. Federal agencies can provide flexibilities on such grant requirements in certain circumstances.
For example, members of the White House Working Group on Detroit from both OMB and Treasury used such flexibilities to allow Detroit to leverage previously allocated grant funds to address urban blight in the city. A 2013 survey of Detroit’s properties found that approximately 85,000 structures and vacant lots either met the definition of blighted property or showed indications of future blight, and roughly 40,000 of those structures needed immediate removal. A senior OMB official told us that one of his tasks as a member of the White House Working Group was to identify all existing federal funds that were already set aside for Detroit. In June 2013, as part of this effort, staff at OMB and Treasury identified unused resources from the Hardest Hit Fund that had been given to Michigan to distribute throughout the state. Although the Hardest Hit Fund was typically used to prevent foreclosures, these officials determined that it was possible within the legal limits of the grant requirements to redirect $100 million of Michigan’s Hardest Hit Funds to Detroit and to other Michigan cities for use in the demolition of blighted properties. In addition, OMB officials identified CDBG and HOME grant funds that had been previously allocated to the city but were in danger of expiring. Working together with staff from HUD and the city of Detroit, these officials told us they were able to quickly formulate plans that met grant requirements, thereby enabling the city to use the grant funds before they expired. In another example of federal agencies providing flexibility, the COPS Office worked closely with city officials in Camden to help legally transfer its grant funds during the city-county police consolidation. In May 2013, Camden dissolved its city police department and created a new Metro Division for the city of Camden within the existing Camden County Police Department.
Camden officials told us that without dissolving the city police department, Camden would have been unable to continue to afford the salary and benefit costs of its police force. When Camden officials started working on the plan to consolidate the city and county police departments, officials reached out to the COPS Office to discuss what would happen with the city’s active COPS grant. Camden officials told us that the COPS Office was very helpful in providing options and flexibilities for Camden to continue to use the COPS grant. For example, the COPS Office provided Camden with several options and worked with the city to find a way to maintain its status as primary grantee but to transfer grant funds to the new county police force. The COPS Office worked with Camden officials in the police and finance departments to ensure that the transfer and transition occurred in a manner that met grant requirements. As a result, COPS officials told us that Camden remained in compliance with grant regulations while maintaining access to grant funds that supported community policing in the newly consolidated force. Federal agencies provided a variety of technical assistance and training to help the municipalities in fiscal crisis included in our review to overcome knowledge gaps and human capital capacity challenges. For example, HUD provided in-depth technical assistance to help Flint and Detroit administer their grant programs. In 2010, HUD changed the way that it structures and delivers technical assistance. This approach, called OneCPD, was a departure from the manner in which technical assistance was previously delivered: specific to a single program and often not coordinated with other technical assistance being offered.
According to HUD, OneCPD was intended to provide nationwide, comprehensive, needs-based and cross-program technical assistance. HUD officials told us that grantees or HUD field offices may request technical assistance from the agency, which will then assign a technical assistance provider to the grantee. HUD’s technical assistance provider developed technical assistance plans for Flint and Detroit to improve their grant management capabilities. As outlined in its technical assistance plan for Flint, this provider conducted an assessment to determine and prioritize Flint’s needs and to address capacity gaps. Subsequently, the technical assistance provider worked with Flint to develop a comprehensive work plan to address both past and future demands; develop more organized and complete policies and procedures; and design processes for self-auditing, monitoring, and compliance. FTA assigned a senior member of its regional office in Chicago to assist Detroit during its fiscal crisis. According to FTA officials, the Regional Counsel has served as an advisor and liaison to Detroit’s Department of Transportation since September 2013. For example, an FTA official told us that the Regional Counsel met in person with officials in Detroit’s Department of Transportation at least once per month and participated in multiple teleconferences throughout the month to assist city officials with administering the Federal Transit Formula Grant program. Both FTA and Detroit officials said that the FTA Regional Counsel assisted the city by providing technical assistance on a variety of grant management issues. For example, the Regional Counsel provided input and advice on Detroit’s draft fleet management plan for its city bus service. The Regional Counsel also worked with Detroit officials to provide needed training. For example, an FTA official told us that in April 2014 the Regional Counsel organized a training course on FTA procurement requirements.
The Regional Counsel has worked to identify other discretionary federal grant programs available for Detroit’s transit system. For example, an FTA official told us that the Regional Counsel connected Detroit officials with federal officials in DHS’ Transportation Security Administration (TSA) to learn about grants at TSA that support security programs for transit agencies. Detroit officials told us that the FTA Regional Counsel has been helpful with providing a direct line of communication between the city and FTA. Similarly, FEMA conducted an onsite technical assistance visit to Detroit in March 2014 to provide expertise and guidance on its SAFER grants after program officials noticed that the city was slow to spend its numerous open SAFER grants totaling approximately $55 million. The city was using SAFER to fund nearly 300 fire department positions. Once in Detroit, FEMA officials discovered that turnover among city staff managing the grants contributed to a lack of knowledge about how to submit payment requests. In addition, because SAFER involves payroll, using these grant funds relies on Detroit’s payroll system, financial accounting system, and grants system, all of which faced challenges. Detroit was working to improve these systems, but a FEMA official explained that these broken systems and staff turnover meant that Detroit had not made a payment request in 6 months. These infrequent payment requests complicated the task of tracking down payroll information for these 300 individuals. According to FEMA officials, during its technical assistance visit, FEMA worked with Detroit officials on how to set staffing maintenance levels (e.g., how many firefighters to maintain on the payroll) to stay in compliance with the grant. Further, FEMA officials told us that they found that Detroit was including too much information in its payment requests, which also contributed to processing delays.
These officials stated that they worked with Detroit on how to provide enough information to be compliant without further overburdening the payment request process. Documentation and sharing of lessons learned from the efforts to assist Detroit have been limited. Senior officials at OMB and HUD told us that they knew of no formal plans to document and share such information, but that they saw value in doing so. In fact, these officials told us that there have been instances of this happening informally, and they believed it would be a good idea to capture lessons learned more formally to help institutionalize improvements to the administration’s broader place-based initiatives as well as any future efforts to help municipalities in fiscal crisis. Local officials were also interested in lessons learned. In both Stockton and Flint, city officials wanted to learn about what was working in Detroit and in other cities dealing with a fiscal crisis. Stockton officials told us that they understood that, given Detroit’s size and the amount of public attention it had received, its situation warranted a level of direct response from the federal government that smaller cities probably could not expect. However, these officials believed that their city and other municipalities could still benefit from some of the approaches and advice offered to Detroit. The informal structure of the White House Working Group may be one reason that lessons learned have not been formally documented and shared. Officials involved with the working group told us that the composition of the group was driven by the needs of the city of Detroit. When Detroit faced difficulties with blight, the working group assembled agency officials from Treasury, HUD, OMB, and the Environmental Protection Agency to advise city officials about how available grant funds could be used for blight remediation.
When the city faced difficulties with street lighting, the working group assembled officials from the Department of Energy to provide technical assistance and advice. After addressing such needs in Detroit, these federal officials typically returned to their usual responsibilities at their respective agencies. In such an environment, and especially in the absence of a clear articulation of the need to identify and preserve promising practices, it is unlikely that staff would take the time to systematically document good practices or lessons learned that could then be shared with other interested agencies or municipalities. Our prior work has shown that collaboration among federal and local grant participants, particularly with regard to information sharing, is important for effective grant management. In the absence of a formal structure to capture lessons learned, OMB—in its leadership role in agency management of programs and resources to achieve administration policy—would be well positioned to direct such an effort. OMB officials told us that the administration plans to continue its commitment to assist Detroit, in part by creating an executive director position within OMB charged with leading the administration’s efforts. (See White House Memorandum to Heads of Executive Departments and Agencies, M-09-28 (Aug. 11, 2009) for more on the White House’s broader place-based initiatives. Place-based initiatives aim to coordinate and leverage federal resources in a specific locality.) The SC2 initiative also maintains a website that shares information about economic turnaround efforts. This site, called the National Resource Network, provides a resource library, technical assistance library, and opportunities for selected municipalities in economic or fiscal distress to request assistance from the network.
A senior official with the SC2 Initiative told us that the National Resource Network is intended to be the platform for federal agencies to share lessons learned and best practices with municipalities in economic and fiscal distress. Given that Detroit is one of the cities that has taken part in the SC2 pilot, the National Resource Network website might be a natural fit to share lessons learned from the efforts of the White House Working Group on Detroit. Officials indicated that they were not aware of plans for a formal evaluation of the efforts of the White House Working Group, including an effort to document and share good practices. Although the informal operation of the White House Working Group helped connect Detroit with resources and expertise it needed to help address its fiscal crisis, if federal officials do not assign formal responsibility for documenting lessons from Detroit’s experience in a timely manner, opportunities to leverage that knowledge may be lost. Moreover, such efforts need not be resource intensive, given that the infrastructure to share the information already exists. Cities facing serious financial crisis or in Chapter 9 bankruptcy provide a special challenge to the federal government and its grant-making agencies. On one hand, the losses of human capital, financial, and organizational capacity that can accompany such serious financial distress present municipalities with significant challenges to their ability to effectively obtain and manage federal grants. In light of this challenge, and the responsibility that federal grant-making agencies have to the American taxpayer to ensure that grant funds are spent efficiently and appropriately, all the agencies we reviewed used—or had recently incorporated—risk assessments when conducting their grant monitoring and oversight activities.
Although not specifically fashioned for cities in fiscal crisis, such risk assessments consider a variety of factors that are likely to be affected when a municipality is in such a situation. On the other hand, cities facing financial crisis are examples of organizations that particularly need the assistance and support the federal government and federal grants can provide. In response to the Detroit bankruptcy, both the White House Working Group and individual agencies have taken actions such as improving collaboration, providing grant flexibilities, and offering direct assistance and training. Detroit’s emergence from the Chapter 9 process and the new and sometimes innovative relationships it has developed with its federal partners are a promising start. However, the federal government has not developed a mechanism for documenting lessons from Detroit’s experience, and if these lessons are not captured in a timely manner, experiences from officials who have first-hand knowledge may be lost. We recommend that the Director of the Office of Management and Budget direct, as appropriate, federal agencies involved with the White House Working Group on Detroit to collect good practices and lessons learned from their efforts to assist Detroit during its fiscal crisis and share them with other federal agencies and local governments. Toward this end, OMB may want to consider making use of existing knowledge and capacity associated with the Strong Cities, Strong Communities Initiative and its National Resource Network. We provided a draft of this report to the Assistant Attorney General for Administration; the Secretaries of the Departments of Homeland Security, Housing and Urban Development, Transportation, and the Treasury; and the Director of OMB. Both the Department of Housing and Urban Development and the Office of Management and Budget generally agreed with the report; however, OMB staff neither agreed nor disagreed with our recommendation.
The Departments of Housing and Urban Development and Justice provided technical comments, which we incorporated as appropriate. The Departments of Homeland Security, Transportation, and Treasury did not have any comments on the draft report. We also provided drafts of the examples included in this report to cognizant officials from the cities of Detroit, Flint, Camden, and Stockton to verify their accuracy and completeness, and incorporated changes as appropriate. We are sending copies of this report to the heads of the Departments of Homeland Security, Housing and Urban Development, Justice, Transportation, Treasury, and OMB as well as interested congressional committees and other interested parties, including the state and local officials we contacted for this review. This report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact J. Christopher Mihm at (202) 512-6806 or mihmj@gao.gov or Robert Cramer at (202) 512-7227 or cramerr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix III. This report (1) identifies challenges that selected municipalities in fiscal crisis have experienced when managing federal grants and steps those municipalities took to address the challenges; (2) reviews the internal controls, monitoring, and oversight processes that federal agencies used to oversee selected grant programs to several municipalities in fiscal crisis; and (3) examines actions the White House Working Group on Detroit and selected federal agencies took to assist selected municipalities in fiscal crisis. To conduct this work, we focused on four municipalities in fiscal crisis as case examples: Detroit, Michigan; Flint, Michigan; Camden, New Jersey; and Stockton, California. We selected these municipalities based on a number of factors. 
First, we applied two threshold fiscal crisis criteria: the municipality had either filed for Chapter 9 municipal bankruptcy or been declared in fiscal crisis by its state government. Once these criteria were met, we selected the municipalities with relatively high levels of federal investment in terms of population and federal grant obligations. We considered those municipalities with populations over 50,000, using 2010 Census data to estimate the population figures. We also narrowed the pool of municipalities to those with federal grant obligation amounts of at least $5 million between fiscal years 2011 and 2013. To obtain this obligation data, we used grant obligation figures from USASpending.gov. Once we applied these criteria, we then selected a group of municipalities that would provide variety in terms of the state intervention type and geographic location. States use different types of interventions to assist municipalities in fiscal crisis. Some states intervene with an emergency fiscal manager, a state oversight board, or a state agency, while other states provide no interventions. Our selection provided two municipalities with emergency fiscal managers (Detroit and Flint), one municipality with oversight from a state agency (Camden), and one municipality with no state intervention (Stockton). Finally, we considered geographic diversity when selecting the municipalities, and our final selection includes municipalities on the East Coast, the West Coast, and in the Midwest. Based on the grants that our four municipalities received, we selected eight grant programs for our review. Grant selection was also based on the following criteria: (1) dollar amount; (2) grant type (e.g., direct or pass-through); and (3) incidence across multiple municipalities. Findings from these cases are not generalizable to all municipalities in fiscal crisis. 
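The selection procedure described above amounts to a sequence of filters applied in order. As a purely illustrative sketch (the municipality records below are placeholders, not GAO's actual dataset), the threshold criteria and screens could be expressed as:

```python
# Illustrative sketch of the report's municipality-selection filters.
# The municipality records below are placeholders, not GAO data.

MIN_POPULATION = 50_000            # screen based on 2010 Census figures
MIN_GRANT_OBLIGATIONS = 5_000_000  # FY2011-2013 obligations, per USASpending.gov

municipalities = [
    {"name": "Detroit", "chapter9": True, "state_declared": True,
     "population": 713_777, "grants": 200_000_000},
    {"name": "Stockton", "chapter9": True, "state_declared": False,
     "population": 291_707, "grants": 30_000_000},
    {"name": "Smallville", "chapter9": False, "state_declared": True,
     "population": 12_000, "grants": 8_000_000},  # fails the population screen
]

def meets_criteria(m):
    # Threshold criteria: Chapter 9 filing OR a state-declared fiscal
    # crisis, followed by population and grant-obligation screens.
    in_crisis = m["chapter9"] or m["state_declared"]
    return (in_crisis
            and m["population"] > MIN_POPULATION
            and m["grants"] >= MIN_GRANT_OBLIGATIONS)

selected = [m["name"] for m in municipalities if meets_criteria(m)]
print(selected)  # ['Detroit', 'Stockton']
```

The final step in the report's methodology, choosing among the qualifying pool for variety in state intervention type and geography, is a judgment call rather than a mechanical filter, so it is not modeled here.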
See appendix II for a list of the selected grants in our review and the grant award amounts for our selected cities between 2009 and 2013. We chose the period of fiscal years 2009 through 2013 because it included the latest 5 years with available monitoring data at the time of our review. To identify challenges that selected municipalities in fiscal crisis have experienced when managing federal grants and the steps those municipalities took to address those challenges, we primarily relied on interviews with local, state, and federal officials. We conducted site visits to the four selected municipalities and interviewed elected leadership and departmental staff in charge of managing the selected grants. In the case of the one pass-through grant included in our sample, we interviewed state officials responsible for overseeing the distribution of that grant to our selected cities. We also interviewed federal headquarters staff and, where applicable, regional staff who oversee the selected grants, as well as researchers and professional organizations knowledgeable about municipal fiscal crises and the challenges that municipalities faced. In these interviews, we asked local, state, federal, and nongovernmental officials to describe the challenges that the selected municipalities in fiscal crisis faced regarding federal grants management. We reviewed and analyzed our interviews with federal, state, and local officials to identify grant management challenges. To illustrate the reduced capacity of the selected cities, we used full-time equivalent (FTE) data from published Comprehensive Annual Financial Reports for Detroit, Flint, and Stockton as well as state municipal aid applications for Camden. To determine that these data were sufficiently reliable for the purposes of this report, we checked for consistency across published financial reports for the selected cities. We also verified these numbers with cognizant city officials. 
To review the internal controls, monitoring, and oversight processes that federal agencies used to oversee the selected grants made to our four case example municipalities, we examined grant laws, regulations, and oversight policies for fiscal years 2009 to 2013 for our eight selected grant programs. We compared the monitoring policies for the eight grant programs with the implementation documentation for those policies in the four selected agencies. For example, if an agency policy stated that grants would receive risk scores that would help determine the appropriate level of monitoring, we checked for documentation of the risk scores and subsequent monitoring actions such as site visits or desk reviews. Examples of oversight implementation documentation that we reviewed for our selected grant programs included grant risk assessment worksheets, monitoring reports, sanction letters, and monitoring follow-up documents. We also reviewed monitoring findings of single audits and office of inspector general audit reports. We interviewed cognizant local, state, and federal officials about these monitoring policies and actions. To examine the actions the White House Working Group on Detroit and selected federal agencies took to assist selected municipalities in fiscal crisis, we interviewed local, state, and federal officials involved with grant management for the four selected municipalities and eight selected grant programs. We conducted site visits to the four selected municipalities and interviewed elected leadership and departmental staff in charge of managing the selected grants. In the case of the one pass-through grant included in our sample, we interviewed state officials responsible for overseeing the distribution of that grant to our selected cities. In these interviews, we asked officials to describe the actions that the White House Working Group and selected federal agencies took to assist them during their fiscal crisis. 
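Risk-based monitoring policies of the kind described above, in which a grantee's risk score determines whether it receives a site visit, a desk review, or only routine oversight, can be thought of as a scoring-and-threshold rule. The following sketch is purely illustrative; the risk factors, weights, and thresholds are hypothetical and do not reflect any agency's actual policy:

```python
# Illustrative sketch of a risk-based grant-monitoring decision: a
# grantee's risk score determines the level of monitoring. All factors,
# weights, and thresholds here are hypothetical.

RISK_WEIGHTS = {
    "single_audit_findings": 3,  # open findings from single audits
    "late_reports": 2,           # overdue financial/progress reports
    "staff_turnover": 1,         # key grant-management staff lost
}

def risk_score(grantee):
    # Weighted sum over whichever risk factors the grantee exhibits.
    return sum(weight * grantee.get(factor, 0)
               for factor, weight in RISK_WEIGHTS.items())

def monitoring_action(grantee):
    score = risk_score(grantee)
    if score >= 10:
        return "site visit"      # in-depth, on-site monitoring
    elif score >= 5:
        return "desk review"     # remote review of documentation
    return "routine reporting"   # standard oversight only

grantee = {"single_audit_findings": 3, "late_reports": 1, "staff_turnover": 2}
print(monitoring_action(grantee))  # site visit (score = 3*3 + 2*1 + 1*2 = 13)
```

The point of the sketch is the structure, not the numbers: documented risk scores feed a graduated set of monitoring actions, which is what we checked for in the agencies' implementation documentation.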
We asked officials to describe the actions that were helpful and the actions that could be improved. We also interviewed federal headquarters and regional staff who oversee the selected grants to obtain their perspectives about the actions they took to assist these selected municipalities. To obtain a government-wide perspective, we interviewed members of the White House Working Group on Detroit, described by agency officials as an interagency collaborative effort to help coordinate the federal response to Detroit’s fiscal crisis, as well as officials at the Office of Management and Budget and at the Department of the Treasury’s Office of State and Local Finance. We reviewed our interviews with federal, state, and local officials to identify actions taken by federal agencies that assisted municipalities in fiscal crisis. We used criteria from our prior work to assess the usefulness of these actions, including our work on effective federal collaboration, implementing interagency collaborative mechanisms, and state and local grant management. We conducted this performance audit from February 2014 to March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
Appendix II: Overview of Selected Grant Programs

Agency and administering component:
Department of Housing and Urban Development, Community Planning and Development (CPD)
Department of Housing and Urban Development, CPD
Department of Transportation, Federal Transit Administration (FTA)
Department of Transportation, Federal Highway Administration (FHWA)
Department of Justice, Office of Justice Programs (OJP)
Department of Justice, COPS Office
Department of Homeland Security, Federal Emergency Management Agency (FEMA)

GAO contacts: J. Christopher Mihm at (202) 512-6806 or mihmj@gao.gov; Robert J. Cramer at (202) 512-7227 or cramerr@gao.gov.

In addition to the contacts named above, Peter Del Toro (Assistant Director); Rebecca Rose O’Connor (Analyst-in-Charge); and Benjamin L. Sponholtz made major contributions to this report. Additionally, Joy Booth; Amy Bowser; Shane Close; Steve Cohen; Cathy Colwell; Beryl H. Davis; Kim McGatlin; and Rebecca Shea made key contributions to this report.
Similar to the federal and state sectors, local governments are facing long-term fiscal pressures. In cases of fiscal crisis, municipalities may be required to make significant cuts to personnel that may impact their oversight of federal grants. GAO was asked to review the oversight of federal grants received by municipalities in fiscal crisis. This report (1) identifies challenges that selected municipalities in fiscal crisis experienced when managing federal grants and steps taken by those municipalities; (2) reviews the monitoring processes that federal agencies used to oversee selected grants to selected municipalities; and (3) examines actions the White House Working Group on Detroit and selected federal agencies took to assist municipalities in fiscal crisis. For this review, GAO conducted site visits to four municipalities in fiscal crisis: Detroit, Michigan; Flint, Michigan; Camden, New Jersey; and Stockton, California. GAO focused on eight grant programs administered by DHS, HUD, Justice, and DOT. The basis for selecting these grant programs included dollar amount and grant type. GAO reviewed grant oversight policies and actions for fiscal years 2009-2013 and interviewed local, state, and federal officials, including those at Treasury and OMB. Grant management challenges experienced by municipalities in fiscal crisis. The diminished capacity of selected municipalities in fiscal crisis hindered their ability to manage federal grants in several ways. First, reductions in human capital capacity through the loss of staff greatly reduced the ability of some cities to carry out grant compliance and oversight responsibilities. Second, the loss of human capital capacity also led to grant management skills gaps. For example, in Detroit, Michigan, loss and turnover of staff with the skills to properly draw down funds caused some grant funds to remain unspent. Third, decreased financial capacity reduced some municipalities' ability to obtain federal grants. 
For example, neither Flint, Michigan, nor Stockton, California, applied for competitive federal grants with maintenance-of-effort requirements because their city governments were unable to ensure that they would maintain non-federal funding at current levels. Fourth, outdated information technology (IT) systems hampered municipalities' ability to oversee and report on federal grants. For example, Detroit's 2011 and 2012 single audits identified IT deficiencies in every federal grant program reviewed, which led to the city having to pay back some federal grant funds. In response to these challenges, the four municipalities GAO reviewed have taken a number of actions to improve their management of federal grants, including centralizing their grant management processes and partnering with local nonprofits to apply for grants. Federal grant monitoring and oversight processes. The eight grant programs GAO reviewed used, or had recently implemented, a risk-based approach to grant monitoring and oversight. These approaches applied to all grantees, not just those in fiscal crisis. The grant programs administered by the Department of Housing and Urban Development (HUD) and the Department of Justice (Justice) consistently assessed grantees against a variety of risk factors to help program officials determine the need for more in-depth monitoring actions such as onsite monitoring visits. When program officials at HUD, Justice, the Department of Transportation (DOT), and the Department of Homeland Security (DHS) found deficiencies through monitoring actions, they required corrective actions from their grantees. However, in some cases, local grantees did not implement these corrective actions, resulting in continued grant management problems. In such cases, federal program officials took actions such as increasing the level of financial oversight or withholding grant funds until the grantee improved its grant management processes. 
Actions taken to assist municipalities in fiscal crisis. The White House Working Group on Detroit—an interagency group assembled by the White House to assist Detroit—as well as selected agencies took a variety of actions to aid municipalities in fiscal crisis. These actions included improving collaboration between selected municipalities and federal agencies, providing flexibilities to help grantees meet grant requirements, and offering direct technical assistance. However, neither individual agencies nor the Office of Management and Budget (OMB), which was involved in the working group and has an interagency leadership role in achieving administration policy, has formal plans to document and share lessons learned from the efforts to assist Detroit with other federal agencies and local governments. GAO recommends that OMB direct federal agencies involved in the White House Working Group on Detroit to document and share lessons learned from federal efforts to assist Detroit. OMB neither agreed nor disagreed with this recommendation.
Under the Federal Meat Inspection Act, the Poultry Products Inspection Act, and the Egg Products Inspection Act, USDA, through FSIS, is responsible for ensuring the safety of meat, poultry, and certain egg products. Under the Federal Food, Drug, and Cosmetic Act and the Public Health Service Act, FDA is responsible for all other foods, including fruits and vegetables; dairy products; seafood; and certain canned, frozen, and packaged foods. The food-processing sector is generally described as the middle segment of the farm-to-table continuum—it extends from the time livestock and crops leave the farm for slaughter and processing into food until the food reaches retail establishments. FDA and FSIS work to ensure the safety of food products processed in the United States through a regulatory system of preventive controls that identifies hazards early in the production process to minimize the risk of contamination. Known as the Hazard Analysis and Critical Control Point (HACCP) system, it makes food-processing facilities responsible for developing a plan that identifies harmful microbiological, chemical, and physical hazards that are reasonably likely to occur and establishes critical control points to prevent or reduce contamination. Through their inspection programs, FDA and FSIS verify that food processors are implementing their HACCP plans. FDA inspects over 57,000 food facilities every 5 years on average, and USDA inspects over 6,000 meat and poultry slaughter and processing facilities daily. Individual states also conduct yearly inspections of about 300,000 food-processing facilities, including small firms with fewer than 10 employees and large corporations with thousands of employees and multiple processing plants located in many states. Both FDA and FSIS have the authority to take enforcement actions as necessary to ensure that facilities meet the agencies’ safety and sanitation regulatory requirements. 
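Under HACCP as described above, each critical control point pairs a critical limit with monitoring and a corrective action taken when the limit is not met. As a purely illustrative sketch (the control point, limit value, and corrective action below are hypothetical, not drawn from any actual HACCP plan):

```python
# Illustrative sketch of a HACCP-style critical control point (CCP)
# check. The hazard, critical limit, and corrective action here are
# hypothetical examples, not values from an actual HACCP plan.

ccp_cooking = {
    "hazard": "microbiological (e.g., Salmonella)",
    "measure": "internal temperature (F)",
    "critical_limit": 165.0,  # minimum acceptable monitored reading
    "corrective_action": "re-cook or discard the batch",
}

def monitor_ccp(ccp, reading):
    # Compare the monitored value against the CCP's critical limit;
    # when the limit is not met, return the plan's corrective action.
    if reading >= ccp["critical_limit"]:
        return "within limit"
    return ccp["corrective_action"]

print(monitor_ccp(ccp_cooking, 170.0))  # within limit
print(monitor_ccp(ccp_cooking, 150.0))  # re-cook or discard the batch
```

The inspection role described in the text corresponds to verifying that the facility actually performs these checks and documents the corrective actions, not to performing the checks itself.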
As we reported in 2001, in fiscal year 1999, the latest year for which such information was available, FDA, FSIS, and the states spent a total of about $1.3 billion on food safety activities. Following the events of September 11, 2001, the federal government intensified its efforts to address the potential for deliberate contamination of agriculture and food products. On October 8, 2001, the President issued an executive order establishing the Office of Homeland Security, which added the agriculture and food industries to the list of critical infrastructure systems needing protection from terrorist attack. In addition, the Congress provided FDA and USDA with emergency funding to prevent, prepare for, and respond to potential bioterrorist attacks through the Department of Defense Appropriation Act of 2002: $97 million for FDA and $15 million for FSIS. For the most part, FDA has used the emergency funds to enhance the security of imported food by hiring new inspectors and increasing inspections at U.S. ports of entry. FSIS has used its emergency funds to support its food security activities, which include, among other things, providing educational and specialized training. FDA’s fiscal year 2003 budget builds upon funding received from the fiscal year 2002 appropriation plus the fiscal year 2002 emergency supplemental funding of $97 million to counter terrorism. FDA plans to seek additional funding in the future for food safety activities and security activities related to terrorism. FSIS is asking for an additional $28 million. The Congress also enacted the Public Health Security and Bioterrorism Preparedness and Response Act of 2002, which contains numerous provisions designed to enhance the safety and security of the food, drug, and water industries. In addition, both FDA and USDA have taken many actions to better protect the food supply against deliberate contamination. 
For example, FDA has hired 655 new food safety investigators and laboratory personnel in the field. In addition, it has participated in several exercises at the federal and state levels to enhance emergency response procedures. Furthermore, FDA is working with CDC to initiate and implement a nationwide Laboratory Response Network for foods to identify laboratory capacity for testing agents that could be used to deliberately contaminate food. It has also provided additional laboratory training for food safety personnel and sought stakeholders’ input to develop regulations that are required by the new bioterrorism legislation. Moreover, FDA worked with the Office of the Surgeon General, U.S. Air Force, to adapt a version of the Operational Risk Management approach to examine the relative risks of intentional contamination during various stages of food production and distribution. Within the Department of Health and Human Services, both FDA and CDC have worked closely with federal, state, and local agencies to enhance their surveillance of diseases caused by foodborne pathogens. FDA’s efforts to reduce food security risks also include working with other federal agencies, trade associations, and the Alliance for Food Security. USDA has formed a Homeland Security Council to develop a Department- wide plan to coordinate efforts between all USDA agencies and offices. The Department has also established the FSIS Office of Food Security and Emergency Preparedness to centralize the Department’s work on security matters. USDA has also coordinated with other government agencies, such as the Office of Homeland Security, the Federal Bureau of Investigation (FBI), and FDA, to develop prevention, detection, and response procedures to better protect the nation’s food supply. USDA will be increasing the number of import inspectors by 20. These inspectors will place special emphasis on food security in addition to their traditional food safety role. 
In addition, USDA has participated in several exercises at the federal and state levels to enhance response procedures and has conducted risk assessments for domestic and imported food. Since this review began, USDA has conducted three simulation exercises at the Department and agency level to test the Department’s response to a terrorist attack and is planning three additional simulations for the spring of 2003. USDA has also conducted preparedness-training sessions for veterinarians and circuit supervisors. (Circuit supervisors supervise the work of in-plant inspection personnel and discuss the security guidelines with them.) Experts from government and academia generally agree that terrorists could use food products as a vehicle for introducing harmful agents into the food supply. Just recently, the National Academies reported that terrorists could use toxic chemicals or infectious agents to contaminate food production facilities and that, although much attention has been paid to ensuring safety and purity throughout the various stages of processing and distribution, protecting the food supply from intentional contamination has not been a major focus of federal agencies. Among other things, the report says that FDA should act promptly to extend its HACCP methodology so that it could be used to deal effectively with the deliberate contamination of the food supply. In February 2002, CDC reported that although the food and water systems in the United States are among the safest in the world, the nationwide outbreaks due to unintentional food or water contamination demonstrate the ongoing need for vigilance in protecting food and water supplies. All of the bioterrorism experts whom we consulted from academia agreed that the food supply is at risk. The food safety statutes do not specifically authorize FDA or USDA to require food processors to implement any type of security measures designed to prevent the intentional contamination of the foods they produce. 
While these agencies’ food safety statutes can be interpreted to provide authority to impose certain security requirements, as opposed to food safety requirements, neither agency believes it has the authority to regulate all aspects of security. Counsel in the Department of Health and Human Services’ Office of the Assistant Secretary for Legislation advised that FDA’s authorities under the Federal Food, Drug, and Cosmetic Act and the Public Health Service Act provide FDA with tools to adopt measures to control insanitary preparation, packing, and holding conditions that could lead to unsafe food; detect contamination of food; and control contaminated food. However, Counsel also advised that FDA’s food safety authorities do not extend to the regulation of physical facility security measures. FDA’s counsel provided a similar assessment, telling us that, to the extent that food safety and security overlap, FDA might be able to require the industry to take precautionary steps to improve security but observed that there is little overlap between safety and security. One area where safety and security do overlap is in the handling of hazardous materials. FDA’s existing safety regulations specify that hazardous chemicals should be stored so that they cannot contaminate food products. This requirement overlaps with FDA’s food security guidelines advising that hazardous chemicals be stored in a secure area and that access to them be limited. USDA, on the other hand, has a somewhat more expansive view of the extent to which its statutory authority allows it to require food processors to adopt certain security measures. USDA’s general counsel concluded that to the extent that security precautions pertain to activities closely related to sanitary conditions in the food preparation process, FSIS has the authority to require food processors to implement certain security measures. 
The general counsel concluded that FSIS could require facilities to develop and maintain a food security management plan concerning their response to an actual threat involving product tampering, since this is directly related to food adulteration. Such a plan could be added to a current HACCP plan or it could be entirely separate. USDA also believes that FSIS has authority to mandate its “inside security” guidelines, such as controlling or restricting access to certain areas, monitoring the operation of equipment to prevent tampering, and keeping accurate inventories of restricted ingredients and hazardous chemicals. Similarly, USDA believes that many of its security measures that address shipping and receiving food products or protecting water and ice used in processing products also could be made mandatory. These measures include putting tamper-proof seals on incoming and outgoing shipments and controlling access to water lines and ice storage. On the other hand, USDA believes that the “outside security” measures included in its guidelines, such as securing plant boundaries and providing guards, alarms, and outside lighting, have little to do with sanitation in the facility or the immediate food-processing environment and, therefore, could not be made mandatory under existing authorities. With respect to the guidelines’ personnel security measures, USDA noted that FSIS has limited authority over personnel matters at food-processing facilities and could not require facilities to perform personnel background checks before hiring. In response to the nation’s growing concerns regarding the potential for deliberate contamination of the food supply, FDA and USDA issued guidelines to the food-processing industry suggesting measures to enhance security at their facilities. 
Among other things, the guidelines suggest conducting a risk assessment, developing a plan to address security risks at plants, and adopting a wide range of security measures inside and outside the premises. Food-processing facilities are not required to adopt any of the security measures but are encouraged to adopt those that they feel are best suited for their operations. Although both agencies have alerted their field inspection personnel to be vigilant about security issues, they have also told the inspectors that they are not authorized to enforce these measures and have instructed them not to document their observations regarding security because of the possible release of this information under the Freedom of Information Act and the potential for the misuse of this information. As a result, FDA and USDA currently do not know the extent to which food security measures are being implemented at food-processing facilities. In contrast, the Congress directed medium-size and large-size community water systems, which are privately or publicly owned, to assess their vulnerability to terrorist attacks and to develop an emergency response plan to prepare for such an event. The act also authorized funding to be used for basic security enhancements, such as the installation of fencing, gating, lighting, or security cameras. This approach enables the Environmental Protection Agency (EPA) to monitor the water industry’s security efforts and could be a possible model for the food safety agencies. In 2002, FDA and FSIS each issued voluntary security guidelines to the food-processing industry to help federal- and state-inspected plants identify ways to enhance their security. The agencies encouraged food processors, among others, to review their current operations and adopt those security measures suggested in the guidelines that they believed would be best suited for their facilities. 
Officials from both FDA and FSIS told us that there was little or no coordination between the two agencies in developing these guidelines. The FDA guidance contains over 100 recommended security measures covering seven areas of plant operation, such as managing food security, physical (outside) security, and computer security. FSIS’s guidelines contain 68 security measures and cover seven areas of plant operation. Figure 1 summarizes key aspects of both agencies’ voluntary security guidelines for industry. FDA and FSIS have made the guidelines available on the Internet. These guidelines are very similar; one difference is that FSIS’s contain security measures for slaughter facilities. Some state governments have also acted to protect food products from deliberate contamination. We learned from 11 state auditing offices that food safety regulatory officials from most of these states are providing industry or state inspectors with guidelines, either in the form of the FDA and FSIS guidelines or guidelines developed by the state officials themselves. In addition, three states have enacted new legislation or regulations addressing the security of food products. Although FDA and FSIS do not assess the extent to which food processors are implementing security measures, the agencies have asked their field inspection personnel to be on heightened alert and to discuss, but not interpret, the security guidance with facility officials during their routine food safety inspections. However, both FDA and USDA have instructed their field inspection personnel to refrain from enforcing any aspects of the security guidelines because the agencies generally believe that they lack such authority. They have also instructed their field personnel not to document plants’ security measures because they are concerned that such information would be subject to Freedom of Information Act requests. 
More specifically, FDA’s instructions to its field personnel specify that they should neither perform a comprehensive food security audit of the establishment nor conduct extensive interviews to determine the extent to which preventive measures suggested in the guidelines have been adopted. The goals, according to FDA, are to heighten industry’s awareness of food security practices, facilitate an exchange of information between FDA and industry on the subject of food security, and encourage plant management to voluntarily implement those preventive measures that they believe are most appropriate for their operation. In short, FDA inspectors are encouraged to discuss food security concerns with plant management and to provide them with copies of the guidelines. Although the exact details of such discussions are not to be recorded, inspectors are required to document in their inspection reports that such discussions took place and that they gave a copy of the guidelines to facility management. Similarly, FSIS has informed its field inspectors that they have no regulatory duties regarding the enforcement of the guidelines. Initially, the agency instructed its inspectors to refer any questions from facility managers to USDA’s Technical Service Center in Omaha, Nebraska. Recently the agency modified its position regarding direct discussions of food security and now allows inspectors to discuss, but not interpret, security with facility management. Inspectors are still instructed not to document these conversations or enforce the adoption of any security measure. Officials from both agencies expressed concerns about gathering security information from facilities because it could be subject to public disclosure through Freedom of Information Act requests. If terrorists gained access to this information, it could give them a road map to target the most vulnerable areas in a food-processing plant. 
Recent congressional efforts to better protect the nation’s drinking water from terrorist acts may offer a model for FDA and USDA to help monitor security measures adopted at food-processing facilities as well as to identify any security gaps that may exist at these facilities. Although there are differences in how the government regulates drinking water and food, both are essential items of daily consumption, and both are regulated to ensure their safety. In June 2002, the Congress enacted the Public Health Security and Bioterrorism Preparedness and Response Act of 2002, which, among other things, amended the Safe Drinking Water Act. The Bioterrorism Act requires medium-size and large-size community water systems (those serving over 3,300 people), which are privately and publicly owned, to certify to EPA that they have assessed their vulnerability to a terrorist attack and developed emergency plans to prepare for and respond to such an attack. These water systems serve 91 percent of the United States’ population. Each community’s water system is required to conduct a vulnerability assessment and submit a copy of the assessment to EPA. The act specifies that the vulnerability assessment is exempt from disclosure under the Freedom of Information Act, except for the identity of the community water system and the date on which it certifies compliance. Community water systems are also required to prepare an emergency response plan that incorporates the results of their vulnerability assessments. In addition, the act authorizes funding for financial assistance to community water systems to support the purchasing of security equipment, such as fencing, gating, lighting, or security cameras. FDA and FSIS lack comprehensive information on the extent to which food-processing companies are adopting security measures. 
However, officials from the majority of the food trade associations that we contacted believe that their members are implementing a range of measures to enhance security at their facilities. We found that the five food-processing facilities we visited in various geographic regions around the country are also implementing an array of security measures that range from developing risk assessment plans to hiring security contractors. Furthermore, our survey of FDA and FSIS inspectors indicates that, generally, food-processing facilities are implementing a range of security measures. The survey responses indicate, however, that the inspectors were more aware of those security measures that were the most visible to them during the course of their regular food safety inspections. According to trade association officials, food processors are voluntarily taking steps to prevent the deliberate contamination of their products, including adopting many of the measures suggested by FDA and FSIS, such as installing fences, requiring that employees wear identification, and restricting access to certain plant areas. Association officials told us that most large food-processing facilities already have comprehensive security plans that include many of the recommendations made by FDA and FSIS. One trade association recently conducted a survey of its members and asked for their opinions about FSIS's guidelines. Most of the respondents indicated that they were aware of the guidelines; they believed the guidelines were for the most part practical and workable; and they used them in their security plans. However, these officials were unable to provide data on the extent to which the food-processing industry is implementing security measures to prevent or mitigate the potential deliberate contamination of food products. 
Trade association officials also said that they provided FDA and FSIS with comments on the voluntary guidelines and, in some cases, have also issued their own food security guidelines to their members. Although the officials generally believe that the agencies’ guidelines are reasonable, they do not want the government to regulate food security. They also feel that some companies, especially small facilities with limited resources, are unable to implement all the measures in the guidelines. Therefore, these officials believe it is important for the guidelines to remain voluntary. The industry is involved in improving food security in other ways as well. For example, the food industry associations formed the Alliance for Food Security to facilitate the exchange of information about food security issues. The Alliance is composed of trade associations representing the food chain, from commodity production through processing, packaging, distribution, and retail sale, as well as government agencies responsible for food and water safety, public health, and law enforcement. Similarly, led by the Food Marketing Institute, the food industry and FBI established the Information Sharing and Analysis Center (ISAC), which serves as a contact point for gathering, analyzing, and disseminating information among companies and the multiagency National Infrastructure Protection Center based at FBI headquarters. Through ISAC, FBI officials have notified food manufacturers of warnings and threats that the Center deems to be credible. ISAC also provides a voluntary mechanism for reporting suspicious activity in a confidential manner and for developing solutions. We visited five food-processing facilities, including a slaughter plant and facilities that produce beverages and ready-to-eat products. 
Although these facilities are not in any way representative of all food-processing plants nationwide, they provide some information about the types of security measures that some facilities are implementing. All five facilities had conducted risk analyses and, on the basis of the results, had implemented a number of security measures similar to those suggested in the FDA and FSIS guidelines. For example, all five facilities limited access to the facility through such means as requiring visitors to enter through a guard shack and to provide identification. In addition, employees at three of the facilities could enter the facility only by using magnetic cards. However, managers at the five facilities offered differing opinions about personnel security. Although all of the facilities we visited performed background checks on their employees that included verification of social security numbers, only some verified prior work experience, criminal history, and level of education. One company also required that its contractors, such as construction companies working in the facility, perform employment, education, and criminal checks of their own employees. The facilities also used different protocols for employee access to different areas within the plant. For example, at four of the facilities, employees were limited to those areas of the plant in which they worked. While the managers at these facilities generally complimented FDA’s and USDA’s security guidelines, they said that they do not want the agencies to regulate security. Rather, they believe that the agencies should develop a nonprescriptive framework or strategy for industry and then leave them to decide how to meet their individual requirements. One manager believes that food security responsibilities should be moved to the Department of Homeland Security. 
Finally, our discussions with trade association officials and food-processing industry officials revealed that the industry is very concerned about sharing security information with federal agencies because of the possibility that it could provide a road map for terrorist groups if it were released under the Freedom of Information Act. Although the act exempts from public release certain national security, trade secret, and commercial or financial information, industry officials are generally skeptical about the government's ability to prevent the release of sensitive security information at food-processing facilities. FBI officials told us that they have cited these exemptions when assuring ISAC members that security information shared with them will be protected from public release. These officials explained that the courts have generally ruled that the commercial information exemption protects those who voluntarily provide the government with information if the information is of a kind that the provider would not ordinarily release to the public. However, the FBI officials we interviewed believe that the government should find some way of assuring industry that sensitive security information is protected from public release. FDA and FSIS survey respondents observed a range of security measures being implemented at food-processing facilities, although both FDA and FSIS respondents were able to provide more information about those security measures that were most visible during the course of their normal inspection duties. Figure 2 shows selected categories of security measures recommended in the FDA and FSIS security guidelines that were most visible to inspectors. The majority of the FDA survey respondents said they were able to observe security measures, such as fencing around the plants' perimeter, limiting access to restricted areas, securing hazardous materials, and providing adequate interior and exterior lighting. 
Likewise, most of FSIS's circuit supervisors were able to observe outside security measures, including alarmed emergency exits, plant perimeter protection, positive employee identification, and the inspection of incoming and outgoing vehicles. Survey respondents provided fewer observations regarding other types of security measures included in the FDA and FSIS guidelines—in some instances because these measures were less visible to them. For example, FDA respondents were less able to comment on whether they noticed or knew of the presence of security measures designed to account for missing stock or for other finished product irregularities. (See fig. 3.) Similarly, FSIS respondents were less able to comment on the extent to which facilities were performing background checks on new employees or implementing proper mail-handling practices. More than half of FSIS's survey respondents stated that large plants—those with at least 500 employees—had implemented a range of security measures in the areas of outside security, storage, slaughter and processing, and personnel security. Fewer of these respondents observed these security measures at smaller plants. Some FDA and FSIS respondents also commented that very small firms typically lack the financial resources to implement many of the security measures suggested in the government guidelines. Similarly, some respondents commented that many of the security measures might not be necessary at smaller establishments. Additionally, most of the FDA respondents reported that they had not received training on food security, while nearly all of the FSIS respondents reported that they had recently received such training. Some of the FSIS respondents further stated that although they had received food security training, further training was greatly needed in the field. 
Such training would be beneficial because field personnel are encouraged to discuss security measures with managers at the facilities they inspect. Finally, responses to our survey showed that FDA and FSIS respondents have different levels of "satisfaction" with or "confidence" in the efforts of the processing facilities they inspect to ensure the protection of food from acts of deliberate contamination. While nearly half of the FSIS respondents said they were somewhat or very confident of the efforts made by the food processors they inspect, slightly over one-fourth of the FDA respondents were satisfied or very satisfied with those efforts. Thirty-seven food regulatory officials interviewed by state auditors in 11 states provided opinions on their overall level of satisfaction with federal, state, and industry efforts to protect food from intentional contamination. Table 1 shows that nearly half of the state regulatory officials interviewed expressed satisfaction with the efforts made by federal and state governments and industry to safeguard food products—though these results cannot be generalized to all state regulatory officials. Finally, most of the state officials interviewed by state auditors believed it was either "important" or "very important" for states to monitor whether companies have adopted security measures to prevent acts of deliberate contamination; 3 of the 11 states already require their inspectors to do so. The vulnerability of the food supply to potential acts of deliberate contamination is a national concern. The President addressed this concern in the October 8, 2001, executive order establishing the Office of Homeland Security and adding the agriculture and food industries to the list of critical infrastructure systems needing protection from terrorist attack. 
The National Academies have also concluded in a recently released report that infectious agents and toxic chemicals could be used by terrorists to contaminate food-processing facilities. Among other things, the report says that FDA should act promptly to extend its Hazard Analysis and Critical Control Point methodology so it might be used to deal effectively with deliberate contamination of the food supply. The Centers for Disease Control and Prevention also reported recently on the need to better protect our nation's food and water supplies. These assessments underscore the need to enhance security at food-processing facilities. Although FDA and FSIS recognize that need and have taken action to encourage food processors to voluntarily adopt security measures, these actions may be insufficient. Because the agencies believe that they generally lack authority to mandate security measures and are concerned that such information would be subject to Freedom of Information Act requests, they do not collect information on industry's voluntary implementation of security measures. The agencies are, therefore, unable to determine the extent to which food processors have voluntarily implemented such measures. Both FDA and USDA have completed risk assessments. However, without the ability to require food-processing facilities to provide information on their security measures, these federal agencies cannot fully assess industry's efforts to prevent or reduce the vulnerability of the nation's food supply to deliberate contamination. Similarly, they cannot advise processors on needed security enhancements. Furthermore, lacking baseline information on the facilities' security condition, the agencies would be unprepared to advise food-processing facilities on any additional actions needed if the federal government were to go to a higher threat alert. 
Finally, FDA food inspectors have received no training on the voluntary security guidelines issued for food processors, and only a limited number of FSIS inspectors have received such training; this hampers the inspectors' ability to conduct informed discussions regarding security measures with facility personnel, as they are currently instructed to do. In order to reduce the risk of deliberate contamination of food products, we are recommending that the Secretary of Health and Human Services and the Secretary of Agriculture study their agencies' existing statutes and identify what additional authorities they may need relating to security measures at food-processing facilities. On the basis of the results of these studies, the agencies should seek additional authority from the Congress, as needed. To increase field inspectors' knowledge and understanding of food security issues and facilitate their discussions about the voluntary security guidelines with plant personnel, we are also recommending that the Secretary of Health and Human Services and the Secretary of Agriculture provide training for their agencies' field staff on the security measures discussed in the voluntary guidelines. We provided FDA and USDA with a draft of this report for their review and comment. We received written and clarifying oral comments from each agency. The agencies also provided technical comments, which we incorporated into the report as appropriate. FDA agreed with our recommendation that it provide all food inspection personnel with training on security measures. Subsequently, FDA officials told us that the agency did not have an opinion on our recommendation that it study what additional authorities it may need relating to security measures at food-processing facilities. In its written comments, FDA stated that the report is factual and describes accurately the events and actions that FDA has taken on food security. 
FDA also commented that one of the goals of its voluntary guidance to industry is to heighten awareness of food security practices and that the role of its investigators is first and foremost food safety. FDA also said that it does not have sufficient security expertise to provide industry with consultation in this area. FDA further commented that although HACCP and other preventive controls are appropriate measures to enhance food safety, HACCP does not afford similar advantages for addressing deliberate contamination, tampering, and/or terrorist actions related to the food supply. Our report underscores that the role of FDA’s investigators is primarily one of food safety. Nevertheless, we believe that it is also crucial for cognizant agencies to have information about industry’s security efforts so that they can assess the extent to which the risk of deliberate contamination is being mitigated. We also believe that possessing such information is important if it becomes necessary to advise food processors on needed security enhancements. With regard to HACCP, our report does not take a position on the feasibility of using HACCP as a means to control deliberate contamination; instead, we report on the opinion of the National Academies. FDA’s comments are presented in appendix V. In its written comments, USDA agreed with the contents of our report. Subsequently, USDA’s food safety officials confirmed that the agency also agrees with the report’s recommendations. In its letter, USDA commented that it has already conducted a comprehensive risk assessment of the food supply without plant security information and that knowing whether a plant employed one or several security measures was not needed to assess the risk. 
Our report acknowledges that USDA has conducted a comprehensive risk assessment, but we believe that it is crucial for cognizant agencies to have information about industry’s security efforts so that they can assess the extent to which the risk of deliberate contamination is being mitigated. USDA’s comments are presented in appendix IV. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Agriculture and Health and Human Services; the Director of the Federal Bureau of Investigation; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact Maria Cristina Gobin or me at (202) 512-3841. Key contributors to this report are listed in appendix VI. To determine the extent to which the current federal food safety statutes can be effectively used to regulate security at food-processing facilities, we analyzed the Food and Drug Administration’s (FDA) and the U.S. Department of Agriculture’s (USDA) existing statutory authorities. We discussed these authorities with FDA and USDA counsel and requested a legal opinion to determine the extent to which each agency believes its existing authorities allow it to regulate food security. We then independently reviewed these authorities to draw our own conclusions. To describe the actions that FDA and USDA have taken to help food processors prevent or reduce the risk of deliberate food contamination, we met with staff from FDA and FSIS to review the voluntary guidelines issued by each agency. 
To better understand the provisions of the guidelines, we met with agency program staff responsible for issuing the guidelines and for receiving industry comments on them. To learn how the guidelines would be implemented, we met with FDA and USDA's Food Safety and Inspection Service (FSIS) officials responsible for field operations and with staff from field offices in Atlanta, Georgia, and Beltsville, Maryland. Finally, to gather additional information about the vulnerability of the food supply to acts of deliberate contamination, we contacted nine experts from academia, including experts in food safety and in bioterrorism. To describe how the government is determining the extent to which food-processing companies are implementing security procedures, we asked FDA and FSIS program officials about the nature of the information they are collecting about industry security measures. We also conducted surveys of agency field personnel to obtain their observations about and knowledge of food security measures taken at facilities they regularly inspect for food safety. Our FDA survey, which was Web-based, was administered to all 150 field investigators who recorded 465 or more hours for domestic food inspection from June 1, 2001, to May 31, 2002. Our survey of FSIS staff was a telephone survey of a randomly selected stratified sample of 50 circuit supervisors. Our response rates for these surveys were higher than 85 percent for FDA and 90 percent for FSIS, and respondents included participants from all the agencies' geographic regions. Before administering the surveys, we discussed them with FDA and FSIS program officials and obtained their input. We also pretested the surveys at field locations to ensure that our questions were valid, clear, and precise and that responding to the survey did not place an undue burden on the respondents. 
In addition, we contacted state audit offices in all 50 states to collect information about state government actions designed to prevent the deliberate contamination of food products. Of the 50 state audit offices we contacted, only 11 agreed to help us collect this information: Arizona, Florida, Maryland, Michigan, New York, North Carolina, Oklahoma, Oregon, Pennsylvania, Tennessee, and Texas. To determine the extent to which the food-processing industry is implementing security measures to better protect its products against deliberate contamination, we contacted officials from 13 trade associations representing, among others, the meat and poultry, dairy, egg, fruit and vegetable, and food-processing industries. We discussed the guidelines that their organizations have issued, and they described what actions their constituents are taking to protect their products. We also visited five food-processing facilities in various geographic regions to ask corporate and plant officials about the actions they have taken to protect their products and facilities against intentional contamination. These facilities included a slaughter plant as well as facilities that produce beverages and ready-to-eat products. We recognize that the efforts of these five facilities are not necessarily representative of the whole food-processing industry. To identify the concerns that the industry has about sharing sensitive information with federal agencies, we spoke with industry representatives as well as officials from the Federal Bureau of Investigation's National Infrastructure Protection Center. We conducted our review from February through December 2002 in accordance with generally accepted government auditing standards. In addition to those named above, John Johnson, John Nicholson, Jr., Stuart Ryba, and Margaret Skiba made key contributions to this report. Nancy Crothers, Doreen S. Feldman, Oliver Easterwood, Evan Gilman, and Ronald La Due Lake also made important contributions.
The events of September 11, 2001, have placed added emphasis on ensuring the security of the nation's food supply. GAO examined (1) whether FDA and USDA have sufficient authority under current statutes to require that food processors adopt security measures, (2) what security guidelines FDA and USDA have provided to industry, and (3) what security measures food processors have adopted. Federal food safety statutes give the Food and Drug Administration (FDA) and the U.S. Department of Agriculture (USDA) broad authority to regulate the safety of the U.S. food supply but do not specifically authorize them to impose security requirements at food-processing facilities. However, these agencies' food safety statutes can be interpreted to provide authority to impose certain security measures. FDA believes that its statutes authorize it to regulate food security to the extent that food security and safety overlap but observes that there is little overlap between security and safety. USDA believes that it could require food processors to adopt certain security measures that are closely related to sanitary conditions inside the facility. However, USDA believes that the statutes cannot be interpreted to authorize the regulation of security measures that are not associated with the immediate food-processing environment, such as requiring fences, alarms, and outside lighting. Neither agency believes that it has the authority to regulate all aspects of security at food-processing facilities. Both FDA and USDA issued voluntary security guidelines to help food processors identify measures to prevent or mitigate the risk of deliberate contamination. Because these guidelines are voluntary, neither agency enforces, monitors, or documents their implementation. 
Both FDA and USDA have asked their inspectors to be vigilant and to discuss security with managers at food-processing facilities, but the agencies have stressed that inspectors should not enforce the implementation of security measures or document any observations because of the possible release of this information under the Freedom of Information Act and the potential for the misuse of this information. Since FDA and USDA do not monitor and document food processors' implementation of security guidelines, the extent of the industry's adoption of security measures is unknown. According to officials of trade associations and the five facilities we visited, however, food processors are implementing a range of security measures. In addition, the FDA and USDA field inspectors we surveyed indicated that most facilities have implemented some security measures, such as installing fences. However, the inspectors were less able to comment on security measures that were not as obvious, such as accounting for missing stock and implementing proper mail-handling practices. The inspectors also noted that while USDA has provided some of its field supervisory personnel with security training on the voluntary security guidelines it issued, it has not provided most of its inspectors with such training. FDA has not provided its staff with any training on the security guidelines. Without training on the security guidelines, inspectors are limited in their ability to conduct informed discussions regarding security with managers at food-processing facilities.
In fiscal year 2011, VA provided about $4.3 billion in pension benefits for about 517,000 recipients. These benefits are available to low-income wartime veterans who are age 65 and older, or who are under age 65 but are permanently and totally disabled as a result of conditions unrelated to their military service. Surviving spouses and dependent children may also be eligible for these benefits. At the end of fiscal year 2011, about 314,000 pension recipients were veterans and about 203,000 were survivors. Also, about 329,000 recipients were over 65, and the average age was 71 for veterans and 79 for survivors. Average annual payments in fiscal year 2011 were $9,669 for veterans and $6,209 for survivors. VA provides pension benefits through its Veterans Benefits Administration (VBA), and accredits representatives of veterans' service organizations, attorneys, and claims agents to assist claimants with the preparation and submission of VA claims at no charge. To become accredited, an individual must meet certain requirements set forth in federal law. Claims processors assess claims at VBA's three Pension Management Centers (PMC) in Philadelphia, Penn.; Milwaukee, Wis.; and Saint Paul, Minn. As part of the pension program, VA provides enhanced pension benefit amounts to veterans and surviving family members who demonstrate the need for aid and attendance, or who are considered permanently housebound. For pension beneficiaries who are deemed unable to manage their affairs due to mental impairments, VA appoints a fiduciary to manage the beneficiary's finances. To qualify for pension benefits, claimants' countable income must not exceed annual pension limits that are set by statute. These income limits are also the maximum annual pension payment that a beneficiary may receive. Such limits may vary based on whether claimants are veterans or survivors and their family composition, as well as whether claimants need aid and attendance or are considered housebound. 
For example, to qualify for pension benefits in 2012, a veteran with no dependents who is in need of aid and attendance benefits cannot have income that exceeds $20,447, while a surviving spouse in similar circumstances cannot have income that exceeds $13,138. In determining if a claimant's income is below program thresholds, VA includes recurring sources of income such as the Social Security Administration's (SSA) retirement and disability benefits, but not income from public assistance programs such as Supplemental Security Income (SSI). VA also allows some expenses, such as certain unreimbursed medical expenses that exceed 5 percent of the maximum pension amount the claimant is eligible for, to be deducted from a claimant's countable income. The annual amount pension beneficiaries receive is the difference between the maximum pension amount they are eligible for and their countable income (see table 1). VA's policy manual specifically states that the pension program is not intended to protect substantial assets or preserve an estate for a beneficiary's heirs. In assessing financial eligibility for pension benefits, VA also considers net worth, or the total value of claimants' assets, such as bank accounts, stocks, bonds, mutual funds, and any property other than the claimant's dwelling, a reasonable lot area, a vehicle, and personal belongings. No thresholds on the value of a claimant's assets are defined in statute. However, according to VA's procedures manual, claims processors are generally required to formally determine whether claimants with assets worth over $80,000 have financial resources that will last a reasonable period of time to pay for their basic expenses. In making this determination, claims processors consider net worth, income, expenses, age, and life expectancy to determine if claimants' financial resources are sufficient to pay for their expenses without assistance from VA. 
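The payment arithmetic described above (countable income offsetting the maximum annual pension amount, after deducting unreimbursed medical expenses that exceed the 5 percent threshold) can be sketched as follows. This is a simplified illustration of the computation as summarized in this report, not VA's actual procedure; the function name is ours, and VA's rounding, effective-date, and expense-category rules are omitted.

```python
def annual_pension(mapr, gross_income, unreimbursed_medical=0.0):
    """Simplified sketch of the annual pension computation.

    mapr is the maximum annual pension amount for the claimant's
    category (e.g., $20,447 in 2012 for a veteran with no dependents
    who needs aid and attendance). Only unreimbursed medical expenses
    exceeding 5 percent of that maximum are deductible from countable
    income. This sketch ignores VA's rounding and effective-date rules.
    """
    # Deduct only the portion of medical expenses above the 5% threshold.
    deductible = max(0.0, unreimbursed_medical - 0.05 * mapr)
    countable = max(0.0, gross_income - deductible)
    if countable >= mapr:
        return 0.0  # countable income at or above the limit: no payment
    # The payment is the maximum amount minus countable income.
    return mapr - countable

# A claimant with $12,000 in income before deductions and $3,000 in
# unreimbursed medical expenses, against the 2012 veteran maximum:
payment = annual_pension(20447, 12000, 3000)
```

Under these assumptions, this hypothetical claimant would receive roughly $10,425 per year: $3,000 in medical expenses less the $1,022.35 threshold leaves $1,977.65 deductible, countable income falls to $10,022.35, and the payment is the $20,447 maximum minus that amount.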
VA also assesses ongoing eligibility for pension recipients who previously reported any income other than, or in addition to, Social Security income. These recipients must complete an annual Eligibility Verification Report (EVR), which requests information on income and assets that is used to determine if recipients continue to be financially eligible for the pension program. Potential VA pension recipients may also be eligible for other means-tested programs. For example, they may be eligible for Medicaid, a joint federal-state health care financing program that provides coverage for long-term care services for certain individuals whose income and resources do not exceed specific thresholds. Each state administers its Medicaid program and establishes specific income and resource eligibility requirements that must fall within federal standards, but we reported in 2007 that in most states, an individual must have $2,000 or less in countable financial resources to be eligible. Similarly, the SSI program provides cash benefits to individuals who are age 65 or older, blind, or disabled, who have limited income, and whose financial resources are $2,000 or less ($3,000 if the individual lives with a spouse). We found several potential vulnerabilities in the VA pension program's design, as well as in VA's policies and procedures, that hinder the department's ability to ensure that only those in financial need receive benefits. More specifically, the program allows claimants to transfer assets prior to applying for benefits, and VA lacks complete information on claimants' finances, relies on self-reported information, and does not utilize all opportunities for coordination within the agency. Additionally, guidance that claims processors use may be unclear. Despite being means-tested, the program currently permits VA pension claimants to transfer assets and reduce their net worth prior to applying for these benefits. 
Federal regulations state that, when evaluating financial eligibility for pension benefits, assets gifted to someone who does not reside in the claimant's household will reduce the claimant's net worth if all rights of ownership and control of the assets have been relinquished. As a result, prior to applying for benefits, claimants can transfer excess assets to someone outside their household to meet the financial eligibility criteria for VA pension benefits and be approved, as long as they no longer retain ownership or control of the assets. For example, we identified a case involving a pension recipient who transferred over a million dollars in assets into an irrevocable trust less than 3 months prior to applying for these benefits. VA was aware of the asset transfer when this pension claim was approved and did not count the trust as part of the claimant's net worth. Although these types of transfers are generally permitted under law for the pension program, this practice is not consistent with other federal means-tested programs and weakens the pension program's goal of supporting those with financial need. In contrast, for Medicaid—another means-tested program—federal law explicitly restricts eligibility for long-term care coverage for certain individuals who transfer assets for less than fair market value prior to applying. As a result, when an individual applies for Medicaid coverage for long-term care, states conduct a look-back—a review to determine if the applicant transferred assets for less than fair market value prior to applying. Individuals who transfer assets for less than fair market value during the 60 months prior to applying may be denied eligibility for long-term care coverage for a period of time, known as the penalty period. For example, gifting assets would generally be considered a transfer of assets at less than fair market value and would result in a penalty period. 
Also, under the SSI program, claimants who transfer assets for less than fair market value prior to applying may become ineligible for these benefits for up to 36 months. A transfer for less than fair market value occurs when the claimant gifts or sells a resource and receives in return less than the resource’s value on the open market at the time of the transfer. VA lacks complete information on claimants’ finances because the forms used to assess financial eligibility do not prompt applicants to report certain types of income and asset information. While the instructions on the pension application forms ask claimants to report all income sources and assets they own, the forms do not provide spaces for claimants to report some types of income and assets. For example, even though elderly pension claimants may receive private monthly retirement income, such as income from a company’s retirement plan, the application forms do not specifically provide space for claimants to report such income. According to SSA, in 2009, 9 percent of the aggregate income of those age 65 and older consisted of private pension income. The application forms do provide a space to report other income sources not specifically itemized on the forms. However, some claims processors we spoke with said claimants who report an amount in that space do not usually specify the source of this income, or whether the amount represents a single income source or a combination of sources. As a result, claims processors have to follow up with the claimant to obtain this information, which delays the processing of these claims. Similarly, although the application forms specifically ask claimants to report assets such as bank accounts, stocks, and real property, the forms do not ask about other common assets such as annuities and trusts, which need to be considered when VA assesses claimants’ financial eligibility. 
(See figure 1 to view the section of the application form pertaining to income and assets.) We found cases where claimants did not report assets that they were not specifically asked to report. For example, in one case a claimant did not report a trust with assets valued at about $575,000. In another case, a claimant did not report a trust worth about $612,000. In contrast, we reviewed several state application forms for Medicaid long-term care benefits that specifically asked individuals to report information about annuities and trusts they may own, as well as retirement income. VA’s application forms also do not provide a specific space for claimants to report asset transfers, even though the instructions on the veterans’ application form ask claimants to disclose this information. Asset transfers to someone outside the claimant’s household are allowed under the pension program, as long as the claimant relinquishes ownership and control of the asset. However, VA still needs to know about any asset transfers when assessing a claimant’s financial eligibility because, consistent with VA’s regulations, the department must determine whether the claimant retains ownership and control of the transferred asset and whether it should be counted as part of the claimant’s net worth. Without a designated space to report this type of information, claimants may not report asset transfers on the application forms. For example, we saw one case where a veteran transferred assets worth about $500,000 into an irrevocable trust 2 weeks prior to applying and did not report this on the application. VA learned of this asset transfer only because the claims processor inquired about how the claimant’s medical expenses were being paid. Had the claims processor not identified these assets and determined that, because the claimant had not relinquished all ownership and control, they should be counted in the claimant’s net worth, the claim could have been approved. 
Application forms that do not specifically request information about certain income sources and assets, as well as asset transfers, may prevent VA from obtaining complete information about claimants’ financial situation to properly assess their eligibility for pension benefits. When assessing pension claimants’ eligibility, VA relies primarily on self-reported financial information that, unlike other means-tested programs, is not independently verified. VA does not require claimants to submit documents that corroborate self-reported financial information with their application, such as bank statements and tax returns. VA also does not require receipts to verify some types of claimed deductible expenses, even though these expenses may be a factor that enables some pension claimants to qualify for benefits. Without independent verification of self-reported financial information, VA will have difficulty detecting fraudulent claims. We identified cases where VA found that individuals had been advised by third parties to claim expenses they did not incur related to assistance with everyday living activities. For example, we saw one claim, prepared by a financial planner, that claimed $1,700 in monthly caregiver payments to a daughter. The claimant subsequently stated to VA that he did not pay his daughter any caregiver fees. In another case, a pension recipient claimed an attorney advised him to claim he was paying his son $1,000 per month for services that were not being provided in order to be eligible for a higher pension rate. The recipient subsequently withdrew this claimed medical expense. Most claims processors we spoke with said they accept self-reported financial information unless questions arise, and in those cases, supporting documentation may be requested. In contrast, some state Medicaid programs and the SSI program require applicants to submit documents that support some reported financial information, such as bank statements and tax returns. 
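The incentive to inflate caregiver expenses follows from how the pension amount is roughly computed: the maximum annual pension rate (MAPR) minus countable income, where unreimbursed medical expenses above a deductible (about 5 percent of MAPR) reduce countable income. The sketch below uses assumed figures and a simplified formula; it is not VA's exact computation.

```python
def annual_pension(mapr: float, annual_income: float,
                   annual_medical_expenses: float) -> float:
    """Rough sketch of a VA-pension-style benefit computation: MAPR minus
    countable income, where unreimbursed medical expenses above a
    5%-of-MAPR deductible reduce countable income. Simplified; the MAPR
    and deductible rules here are illustrative assumptions."""
    deductible = 0.05 * mapr
    countable = annual_income - max(0, annual_medical_expenses - deductible)
    return max(0, mapr - max(0, countable))

# With an assumed $12,000 MAPR and $13,000 in income, the claimant is over
# the income limit and gets nothing; claiming $20,400 in caregiver fees
# ($1,700/month) offsets all countable income and yields the full rate.
print(annual_pension(12_000, 13_000, 0))       # -> 0
print(annual_pension(12_000, 13_000, 20_400))  # -> 12000
```

This is why unverified expense claims matter: a single self-reported monthly expense can swing a claimant from ineligible to the maximum benefit.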
VA also does not make use of existing opportunities to verify self-reported financial information during the initial eligibility determination. For example, VA conducts computer matches to verify reported income from SSA benefits during the initial claims assessment process but is not using this type of technology to verify the accuracy of other self-reported financial information. Additional automated systems may be available that would enable VA to independently verify financial information during the initial eligibility assessment. For example, while VA performs a data match with Internal Revenue Service and SSA data to assess ongoing eligibility, it does not perform this match at the time of the initial claims assessment. In addition, for the SSI program, SSA recently implemented the Access to Financial Institutions system that allows the program to electronically request and receive records from financial institutions and verify an applicant’s or recipient’s financial information. Similarly, Medicaid requires states to implement an asset verification system for assessing applicants’ and recipients’ financial eligibility. VA’s efforts to verify ongoing eligibility for pension benefits also have some shortcomings. Pension recipients who have previously reported income in addition to, or other than, Social Security income must annually complete an EVR. However, like the application forms, the EVRs do not provide spaces for claimants to report private retirement income, annuities, trusts, or asset transfers, and self-reported financial information is not independently verified unless the claims processor has questions. In addition, because not all pension recipients complete an EVR, VA may not be able to identify potential changes in the financial situation of recipients that may affect their ongoing eligibility for these benefits. Other efforts to verify ongoing eligibility may not be effective in identifying ineligible pension recipients. 
VA’s Income Verification Match (IVM) program uses a computer match to compare income reported to VA by pension recipients for a given year with SSA earned income data and IRS unearned income data for that year, to determine if these recipients have any unreported income. However, there is about a 15-month lag between when a pension recipient reports income and when the IVM can be conducted, and the delay may be even longer. For example, in 2011, VA was completing IVMs for income information that was reported in 2007. As a result, improper payments may be made to ineligible pension recipients for at least a year, and possibly several years, before the error is detected. In one case we reviewed, a beneficiary who was approved for benefits in 2004 and reported $900 in net worth when he applied had stocks worth over $162,000 at that time, which was only identified through the IVM process in 2007. This created an overpayment of over $18,000 that VA eventually waived. In addition to the IVM not being conducted in a timely manner, the match does not identify assets that do not generate income, such as deferred annuities for which payments have not begun. Therefore, the IVM would not be effective in identifying these types of assets. Ultimately, delays in the IVM process prevent VA from promptly detecting improper pension payments and increase the magnitude of these payments. Opportunities for coordination between VA’s pension and fiduciary programs to identify ineligible pension recipients are not always maximized. According to VA officials, over half of VA beneficiaries in the fiduciary program are pension recipients. Field examiners in this program visit beneficiaries and fiduciaries, and prepare reports that may contain financial information of some pension recipients. Claims processors had access to these reports, but VA issued guidance in July 2011 that restricts pension claims processors from accessing them in VA’s electronic case file system. 
VA determined that claims processors did not need to review fiduciary program reports as part of their daily work. This guidance was issued due to concerns about the privacy of fiduciaries’ personal information, as well as concerns that pension recipients in the fiduciary program were being put under greater scrutiny. However, fiduciary field exam reports may contain information on beneficiaries’ finances that could be useful for claims processors in assessing eligibility for pension benefits. While safeguarding fiduciaries’ personal information is important, access to these reports allows claims processors to obtain a more accurate picture of a beneficiary’s financial situation. Without that access, critical information for identifying potentially ineligible individuals may not be received, which may result in improper payments. Fiduciary program staff must notify the pertinent pension management center (PMC) when they identify information that may affect the ongoing eligibility of a pension recipient for these benefits, such as changes in a recipient’s income and assets. Claims processors generally rely on notification from fiduciary program staff about possible financial ineligibility of pension beneficiaries, since these claims processors no longer have direct access to those documents. A VA official from one of the PMCs told us that when claims processors had access to field exam reports prior to the issuance of the new guidance, cases of asset transfers or unreported assets were identified from reviews of these reports, even when there was no prior notification from fiduciary program staff. In addition, as part of our case file review, we identified cases of asset transfers or unreported assets that were identified in fiduciary field exam reports. Without access to field exam reports from the fiduciary program, claims processors may not have all available information to assess an individual’s financial eligibility. 
VA’s guidance to claims processors on assessing financial eligibility for VA pension benefits is unclear about when certain assets should be counted as part of an applicant’s net worth. As a result, claims processors may make inconsistent eligibility decisions. For example, VA’s procedures manual states that the value of any property owned by pension claimants must be considered when assessing financial eligibility for benefits, but the manual does not specifically discuss when or under what circumstances annuities or trusts should count as part of net worth. According to VA officials, and consistent with VA regulations, the decision as to whether an asset should be counted in a claimant’s net worth depends on whether the claimant has ownership and control of the asset. However, VA has not adequately defined the concept of ownership and control of assets in either its regulations or its internal guidance and policy documents. As a result, VA cannot ensure that claims processors are making fully informed eligibility decisions that are consistent with VA policy. Several claims processors we spoke with confirmed that guidance on assessing net worth is unclear and that it is difficult to determine when to count certain assets. For example, one claims processor expressed uncertainty about whether to count trusts established for children residing outside of a claimant’s household when the funds are being used to pay for the claimant’s expenses, since VA’s regulations do not directly address these types of cases. A VA official acknowledged that guidance on what constitutes ownership and control of an asset could be improved. We were provided local training material from one PMC on when to count assets in a trust and found that it appeared inconsistent with VA’s regulations regarding when to count assets. For example, the PMC training material stated that a claim involving assets transferred into a trust the claimant cannot access would likely be denied due to excess net worth. 
However, as we noted earlier, VA regulations indicate that assets gifted to someone outside a claimant’s household should not be counted as part of net worth if ownership and control of the assets have been relinquished. Also, according to VA officials we spoke with, claims processors do not have access to VA attorneys who could assist them in examining trust agreements and other documents to determine if a claimant has ownership and control of an asset. Unclear or disparate guidance about counting assets as part of net worth may also lead to different decisions in similar cases. For example, we saw two separate cases in which, just prior to applying, claimants transferred excess assets into trusts to which they did not have access. One of the claims was approved, but the other was denied. For the approved claim, VA determined the claimant did not have ownership and control of the trust and therefore did not count it in the veteran’s net worth. For the denied claim, VA also determined that the claimant did not have access to the trust, but the claim was denied because the claims processor believed the applicant was attempting to manipulate assets to qualify for benefits. The denial letter to the claimant explained that VA’s income programs are not intended to protect substantial assets or build up the beneficiary’s estate for heirs. Further, we found that VA also lacks specific guidance on how to determine whether a claimant’s financial resources are sufficient to meet their basic needs without the pension benefit. VA’s procedures manual states that pension claims should be denied if a claimant’s financial resources are sufficient to pay for their living expenses for a “reasonable period of time,” but it does not define this term. As a result, claims processors must use their own discretion to determine what period of time is reasonable for claimants to use their assets before needing the assistance of the VA pension. 
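The judgment a claims processor must make, absent a defined "reasonable period of time," can be framed as comparing how long a claimant's net worth would cover the shortfall between expenses and income against the claimant's life expectancy. The decision rule and all figures below are hypothetical illustrations of one way such a comparison could be standardized, not VA policy.

```python
def years_assets_last(net_worth: float, annual_expenses: float,
                      annual_income: float) -> float:
    """Years a claimant's net worth would cover the gap between annual
    expenses and annual income; infinite if income covers expenses."""
    shortfall = annual_expenses - annual_income
    if shortfall <= 0:
        return float("inf")
    return net_worth / shortfall

def likely_denial(net_worth: float, annual_expenses: float,
                  annual_income: float, life_expectancy_years: float) -> bool:
    """One possible (hypothetical) rule: deny only if assets would cover
    the shortfall for the claimant's remaining life expectancy."""
    depletion = years_assets_last(net_worth, annual_expenses, annual_income)
    return depletion >= life_expectancy_years

# Two claimants whose assets last about 2 years, with life expectancies of
# 4.4 and 3.2 years: under this rule both would be treated the same way,
# unlike the divergent decisions in the cases described in the report.
print(likely_denial(80_000, 60_000, 20_000, 4.4))  # 2.0 years < 4.4
print(likely_denial(80_000, 60_000, 20_000, 3.2))  # 2.0 years < 3.2
```

The point of the sketch is that any explicit rule, whatever its threshold, would yield consistent decisions for claimants in materially similar circumstances.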
Among case files we reviewed, we found inconsistent claims decisions for claimants whose financial resources would last about the same amount of time and who had similar life expectancies. For example, two veterans whose net worth was projected to provide for their needs for 2 years received different decisions on their claims based on this net worth. In this instance, a 90-year-old with a life expectancy of 4.4 years was denied benefits, while a 94-year-old with a life expectancy of 3.2 years was approved. Also, when we presented a hypothetical scenario of a claimant whose financial resources would last a specific amount of time, different processors at the same PMC gave differing opinions about whether the claimant should be approved for benefits. We identified over 200 organizations located throughout the country that market their services to help veterans and surviving spouses qualify for VA pension benefits by transferring or preserving excess assets. These organizations consist primarily of financial planners and attorneys offering products and services, such as annuities and the establishment of trusts, to enable potential VA pension claimants with excess assets to meet financial eligibility criteria for VA pension benefits. For example, one organization marketed on its website that it develops financial plans which include various insurance products, and that its specific area of expertise is helping VA pension claimants with hundreds of thousands of dollars in assets obtain approval for these benefits. Also, a law firm we identified marketed transferring excess assets into special trusts to enable VA pension claimants to qualify for these benefits. The services marketed and provided by these organizations are legally permissible under program rules because current federal law and regulations allow VA pension claimants to transfer assets and reduce their net worth prior to applying for benefits. 
(See figure 2 for excerpts from websites of organizations that offer to transfer assets to help claimants qualify for pension benefits.) During our investigative calls to 19 organizations, all of them correctly pointed out that pension claimants can legally transfer assets prior to applying. These organizations indicated that it is possible to qualify for VA pension benefits despite having excess assets, and almost all provided information on how to transfer these assets. (See figure 3 for transcript excerpts of calls with organizations on services they provide to qualify for VA pension benefits.) A number of different strategies may be used to transfer pension claimants’ excess assets so that they meet financial eligibility thresholds. Among the 19 organizations our investigative staff contacted, about half advised transferring excess assets into an irrevocable trust with a family member as the trustee to direct funds to pay for the veteran’s expenses. About half also advised placing excess assets into some type of annuity. Among these, several advised placing excess assets into an immediate annuity that generates income for the client. In employing this strategy, assets that VA would count when determining financial eligibility for pension benefits are converted into monthly income. This monthly income would fall below program thresholds and enable the claimant to still qualify for the benefits. About one-third of the organizations recommended strategies that included the use of both annuities and trusts. For example, one organization we contacted advised repositioning some excess assets into an irrevocable trust, with the son as the trustee, and placing remaining excess assets into a deferred annuity that would not be completely accessible, since most of the funds could not be withdrawn without a penalty. 
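The annuitization strategy described above works because an immediate annuity replaces a countable asset with a monthly income stream; as long as total countable income stays under the program's income limit, the claimant qualifies. The payout rate and income limit below are assumed figures for illustration only.

```python
def annuitize(excess_assets: float, monthly_payout_per_10k: float) -> float:
    """Convert excess countable assets into monthly annuity income.
    The payout rate per $10,000 of principal is an assumed figure."""
    return excess_assets / 10_000 * monthly_payout_per_10k

def qualifies(monthly_income: float, monthly_income_limit: float) -> bool:
    """Simplified eligibility check: countable income below the limit."""
    return monthly_income < monthly_income_limit

# Assume $150,000 in excess assets at an assumed $55/month per $10,000,
# against a hypothetical $1,750 monthly income limit: the assets alone
# would exceed net worth limits, but the annuity income does not exceed
# the income limit, so the claimant qualifies.
income = annuitize(150_000, 55)   # $825/month
print(qualifies(income, 1_750))   # -> True
```

The same mechanics explain why claimants with substantial wealth can satisfy the financial eligibility tests once the wealth is recharacterized as income.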
In addition, several organization representatives we interviewed also told us they may advise using caretaker agreements to enable a client to qualify for VA pension benefits. Organizations told us this strategy generally involves the pension claimant transferring assets to family members as part of a contract, in exchange for caretaker services to be provided by these family members for the remainder of the claimant’s lifetime. Some organization representatives we interviewed told us that transferring assets to qualify for VA pension benefits is advantageous for elderly pension claimants because it enables them to have more income to pay for care expenses and remain out of a nursing home for a longer period of time. For example, representatives from one organization said the use of immediate income annuities allows pension claimants to increase their monthly income that, combined with the VA pension, could help pay for assisted living or in-home care costs. Other financial planners and attorneys said if claimants do not conduct financial or estate planning to qualify for the VA pension and instead spend down their assets prior to applying, the monthly amount of the pension benefit they eventually receive may be insufficient to pay for their long-term care. They said that, as a result, these claimants may decide to seek Medicaid coverage for nursing home care because of their lack of financial resources, when they could have remained in an assisted living facility or at home with the aid of the VA pension. Some of these organizations told us that nursing home care financed by Medicaid is more costly for the government than if the veteran had received the VA pension benefit and obtained care in a lower-cost assisted living facility. Many organizations we identified also conduct presentations on VA pension benefits at assisted living or retirement communities to identify prospective clients. 
According to attorneys and officials from state attorneys general offices we spoke with, managers of assisted living facilities or retirement communities may have an interest in inviting organization representatives to conduct presentations on VA pension benefits because these benefits allow the facilities to obtain new residents by making the costs more affordable. For example, we obtained documentation indicating that one retirement community paid an organization representative a fee for a new resident he helped the facility obtain. Another community in another state paid organization representatives fees to assist residents in completing the VA pension application. Some products may not be suitable for elderly veterans because purchasers may lose access to funds they may need for future expenses, such as medical care. To help elderly clients become financially eligible for VA pension benefits, some organizations may sell deferred annuities, which would make the client unable to access the funds in the annuity during their expected lifetime without facing high withdrawal fees, according to some attorneys we spoke with. An elderly advocacy organization representative we spoke with also noted that elderly individuals are impoverishing themselves by purchasing these products when they may need the transferred assets to pay for their long-term care expenses. As part of our investigative work, one organization provided a financial plan to qualify for VA pension benefits that included both an immediate annuity and a deferred annuity for an 86-year-old veteran; the deferred annuity would begin generating payments only after the end of the veteran’s life expectancy. Some organizations that assist in transferring assets to qualify people for VA pension benefits may not consider the implications of these transfers on eligibility for Medicaid coverage for long-term care. 
Individuals who transfer assets to qualify for the VA pension may become ineligible for Medicaid coverage for long-term care services they may need in the future. For example, asset transfers that may enable someone to qualify for the VA pension program, such as gifts to someone not residing in a claimant’s household, the purchase of deferred annuities, or the establishment of trusts, may result in a delay in Medicaid eligibility if the assets were transferred for less than fair market value during the 60-month look-back period. According to several attorneys we spoke with, some organization representatives are unaware of, or indifferent to, the adverse effects on Medicaid eligibility of the products and services they market to qualify clients for the VA pension. As a result, potential pension claimants may be unaware that the purchase of these products and services may subsequently delay their eligibility for Medicaid. In addition to the potential adverse impact of transferring assets, we heard concerns that marketing strategies used by some of these companies may be misleading. According to several attorneys we spoke with, some organization representatives market their services in a way that leads potential pension claimants and their family members to believe they are veterans advocates working for a nonprofit organization, or are endorsed by VA. As a result, claimants may fail to realize these representatives are actually interested in selling financial products. For example, some organization representatives may tell attendees during presentations at assisted living facilities that their services consist of providing information on VA pension benefits and assisting with the application, and do not disclose that they are insurance agents selling annuities to help people qualify for these benefits. One elder law attorney we spoke with said that many attendees at these presentations may have Alzheimer’s disease or dementia, and are not in a position to make decisions about their finances. 
Therefore, they are vulnerable to being convinced by these representatives that they must purchase a financial product to qualify for these benefits. Concerns have also been raised that VA’s accreditation of individuals to assist with applying for VA benefits may have unintended consequences. According to attorneys and officials in one state, organization representatives use their VA accreditation to assist in preparing claims as a marketing tool that generates trust and allows them to attract clients. Claimants may not understand that this accreditation means only that the individual is proficient in VA’s policies and procedures for assisting in preparing and submitting VA benefits claims, and does not ensure that the products and services these individuals are selling are in the claimant’s best interest. Finally, some organizations may provide erroneous information to clients, or fail to follow through on assisting them with submitting the pension application, which can adversely affect pension claimants. For example, one veteran said he was told by an organization representative to sell his home prior to applying for the VA pension and that he did not have to report the proceeds from the sale on the application. He followed this advice, but VA identified these assets, which caused him to incur a debt to VA of $40,000 resulting from a benefit overpayment. Organizations may also promise assistance with the application process to any interested pension claimant but, unbeknownst to the claimant, may not follow through in providing this service if the claimant does not want to transfer assets. For example, the daughter of a veteran we spoke with, who sought application assistance from an organization representative, told us the representative never submitted her father’s pension claim to VA as promised. She learned of this about a year after she thought the claim had been submitted and had to reapply through a county veterans service officer. 
Her father was approved 2 months later but passed away less than a month after his approval. She believes her father could have received benefits for a year if the representative had submitted the claim, and believes he did not do so because she did not want to use his services to transfer assets. The costs of services these organizations provide to assist in qualifying for VA pension benefits varied, but organizations may be charging prohibited fees. Among the 19 organizations our investigative staff contacted for this review, about one-third said they did not charge for their services to help claimants qualify for VA pension benefits. For example, financial planners told us that, generally, there are no direct costs associated with transferring assets into an annuity, but that costs would be included in the terms of the annuity, such as the commission earned by the insurance agent. Among organizations that did charge for services, fees ranged from a few hundred dollars for benefits counseling to up to $10,000 for the establishment of a trust. Also, although federal law prohibits charging fees to assist in completing and submitting applications for VA benefits, representatives from veterans advocacy groups and some attorneys we spoke with raised concerns that these organizations may be charging fees related to the application, or finding ways to circumvent this prohibition, such as by claiming they are charging for benefits counseling. For example, one organization our investigative staff contacted charged $850 to have an attorney work on the application process, a $225 analysis fee, and $1,600 for the establishment of a trust. Another organization representative indicated he charged a “long-term planning fee” of $1,200 to be paid prior to services being provided. The organization representative asked that someone other than the veteran pay this fee, claiming that fees may be charged only to disinterested third parties, not to the veteran. 
In addition, concerns have been raised that fees charged may be excessive for the services provided. In July 2011, California enacted a law generally prohibiting unreasonable fees from being charged for these services. The VA pension program provides a critical benefit to veterans, many of whom are elderly, who have only limited financial resources to support themselves. Current federal law allows veterans to transfer significant assets prior to applying for a VA pension and still be approved for benefits, but this arrangement seems to circumvent the intended purpose of the program and wastes taxpayer dollars. Without stronger controls over asset transfers, similar to other means-tested programs like Medicaid’s look-back and penalty period, VA cannot ensure that only those with financial need receive pension benefits. As a result, VA pension claimants who have sufficient assets to pay for their expenses can transfer these assets and qualify for this means-tested benefit. Moreover, because VA’s policies and procedures for assessing the initial financial eligibility of pension claimants do not adequately ensure that only veterans and surviving spouses who meet financial eligibility requirements are granted benefits, the program is vulnerable to abuse. In particular, claims processors’ reliance on unverified self-reported information when assessing eligibility means that VA cannot be assured that it is obtaining all relevant financial information from claimants, including information on asset transfers, trusts, annuities, and other forms of retirement income. Without all this information, claims processors may improperly grant pension benefits to claimants who do not meet financial eligibility requirements. 
In addition, while safeguarding fiduciaries’ personal information is important, the lack of adequate coordination between VA’s pension and fiduciary programs may result in missed opportunities to identify financially ineligible pension claimants, further undermining program integrity. Finally, because VA’s guidance concerning when assets should be counted as part of a claimant’s net worth, and how to evaluate a claimant’s net worth in determining eligibility, lacks sufficient clarity, the program remains vulnerable to inconsistent interpretation and payments to ineligible individuals. Ultimately, in this era of constrained financial resources, VA has a responsibility to manage limited funds wisely and help ensure continued public support for this important program. To ensure that only those in financial need are granted VA pension benefits, Congress should consider establishing a look-back and penalty period for claimants who transfer assets for less than fair market value prior to applying, similar to other means-tested programs. To improve VA’s ability to ensure that only veterans and surviving spouses with financial need receive VA pension benefits, the Secretary of Veterans Affairs should direct the Under Secretary for Benefits to take the following four actions: 1. Modify pension application forms, as well as EVR forms, to include space for claimants or recipients to report asset transfers, and to specify annuities, trusts, or private retirement income. For assets that are reported, such as annuities and trusts, forms should also request related documentation to enable claims processors to determine if claimants or recipients retain ownership and control of these assets. 2. For all claimants, verify financial information during the initial claims assessment process. This may include requesting supporting documentation such as bank statements and tax returns, or using automated databases that can verify financial information. 3. 
Strengthen coordination between pension and fiduciary programs to identify pension claimants or recipients who have transferred or unreported assets, such as by allowing claims processors access to fiduciary field exam reports for these cases. 4. Revise the VA procedures manual to better define the concept of ownership and control to help claims processors determine when specific types of assets, such as annuities and trusts, should be counted as part of net worth, and establish more specific criteria for what is considered a reasonable period of time for pension claimants to use up their financial resources before becoming eligible for pension benefits. We provided a draft of this report to the Secretary of Veterans Affairs for review and comment. In its comments (see app. III), VA generally agreed with our conclusions, concurred with three of our recommendations, and concurred in principle with one other recommendation. The agency concurred with our recommendation to modify pension application and eligibility verification forms to include a space for claimants or recipients to report asset transfers, to specify annuities, trusts, and private retirement income, and to request related supporting documentation. VA concurred in principle with our second recommendation that the department verify financial information during the initial claims process. VA noted, however, that conducting this verification would add additional time to adjudicate pension claims. VA said it expects to complete an analysis by November 1, 2012, of whether financial information can be verified without placing undue burdens on claimants and recipients. We acknowledge that rigorous verification processes can sometimes entail additional time during the initial claims phase, but we continue to believe that such verification is an important part of ensuring that VA adequately balances its stewardship responsibilities with its service activities. We support the analysis VA is undertaking. 
Regarding our recommendation to strengthen coordination between the pension and fiduciary programs, VA concurred and noted that it has established a workgroup that is developing procedures to further enable fiduciary program staff to share income information with pension program staff. VA also concurred with our recommendation that the procedures manual be revised to better define the concept of ownership and control of assets and to establish more specific criteria for what is considered a reasonable period of time for claimants to use their financial resources before becoming eligible for pension benefits. VA stated that it is drafting regulations that would address the effect on eligibility of transferring assets prior to applying for pension benefits. VA noted that these regulations would address and clarify the various factors it uses to determine whether a claimant's net worth precludes eligibility for pension benefits and would provide a more consistent set of rules for adjudicating claims. VA added that, upon completion of the rulemaking proceeding, it will amend its manual provisions consistent with the new regulations and provide the procedures to implement them. VA expects to complete this revision by December 1, 2013. While VA did not directly comment on GAO's Matter for Congressional Consideration related to establishing a statutory look-back and penalty period, VA did note that "unlike Medicaid and SSI, the statutes governing VA's pension program lack provisions addressing the effects of transfers of assets on eligibility for program benefits, e.g., a look-back and penalty period." VA asserted that, after identifying gaps in its regulations on this point, it has begun drafting regulations to address the issue. VA noted in its comments that any regulations it promulgates on this issue will be subject to challenge in the U.S. Court of Appeals for the Federal Circuit. 
While we commend VA's efforts in this area, a clearer statutory basis for this regulatory effort may make the regulations, should they be finalized, more likely to withstand potential legal challenges in the courts. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, and other interested parties. The report is also available at no charge on GAO's website at http://www.gao.gov. If you or your staff members have any questions concerning this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix IV. The objectives of our review were to examine (1) how the design and management of the Department of Veterans Affairs' (VA) pension program ensure that only those with financial need receive pension benefits and (2) what is known about organizations that are marketing financial products and services to veterans and survivors to enable them to qualify for VA pension benefits. To determine how the design and management of VA's pension program ensure that only those with financial need receive pension benefits, we reviewed relevant federal laws and regulations, as well as VA's policies, procedures, and guidance regarding how VA assesses financial eligibility for pension benefits. We examined VA's pension application forms and other documents VA uses to collect financial information from pension claimants or recipients. Also, we visited VA's three PMCs in Philadelphia, Milwaukee, and St. Paul, and interviewed staff and officials from these locations as well as from VA's central office. 
To verify how VA assesses the net worth of pension claimants, we reviewed a nongeneralizable random sample of 85 of the 3,196 fiscal year 2010 pension claim files completed by the PMCs and entered in VA's electronic case file system; these were files in which VA had to formally determine whether the claimant's assets were too great for the claimant to be approved for pension benefits. We also reviewed pension claims files VA provided us that involved asset transfers or unreported income and assets. In addition, we reviewed past GAO reports on VA's pension program, Medicaid coverage for long-term care, and the Supplemental Security Income program, as well as relevant federal laws and regulations, to learn how these other means-tested programs assess financial eligibility of claimants. To determine what is known about organizations that are marketing financial products and services to veterans and survivors to enable them to qualify for VA pension benefits, we conducted an Internet search and interviews with stakeholders to identify organizations that market financial products and services to help veterans and surviving spouses meet the eligibility criteria for VA pension benefits. For our Internet search, we used the following search terms: "Veterans Affairs and Pension Benefits," "Veterans Affairs and Aid and Attendance Benefits," and "Veterans Affairs and Pension and Aid and Attendance Benefits." We applied three criteria when we examined the content of the websites obtained from our results to develop a list of organizations that market these services. To be included in our list, an organization's website had to indicate that the organization provides services to help someone qualify for VA pension benefits or assess eligibility for VA benefits, and either that it provides products such as annuities or trusts to transfer assets or that it provides services to protect or preserve assets. 
In addition to our Internet search, we also included in our list several organizations that met these criteria that we identified through interviews with veterans advocacy groups, state officials, and attorneys. In applying these criteria, we developed a list of over 200 organizations that market these services; under our methodology, two analysts had to agree that an organization met the criteria before it was included. Our investigative staff contacted a judgmental sample of 25 of the organizations on our list, posing as the son of an 86-year-old veteran with over $300,000 in countable assets who is interested in applying for VA pension benefits. The 25 organizations were judgmentally selected to achieve geographic dispersion and to include both financial planners and attorneys. For these calls, we sought to identify the types of products being marketed, their terms and costs, and the effect on veterans' access to their assets. The addresses for the main offices of the selected companies represent 13 different states that together encompass about one-half of the veteran population age 65 and older, including three states that account for one-fourth of that population. Of the 25 companies contacted, our investigative staff was able to speak with a representative of 19. For the other six companies, either we did not receive a response to a phone message or our calls to the organization were not answered. To learn more about the types of products and services that may be provided to enable someone to meet the financial eligibility criteria for VA pension benefits, we also interviewed attorneys and financial planners, as well as representatives from the National Association of Insurance Commissioners. 
To identify the implications of transferring assets to qualify for VA pension benefits, we spoke with attorneys, representatives of veterans and elderly advocacy groups, state and local government officials, and family members of pension claimants, to whom we were referred, who had used the services of such organizations to apply for these benefits. To learn about any investigations involving the practices of some of these companies, we spoke with officials from VA's Office of Inspector General and officials from state attorneys general offices in California, Iowa, Montana, Oregon, Pennsylvania, Texas, and Washington. We conducted this performance audit from July 2011 to May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Department of Veterans Affairs (VA) provides pension benefits to eligible veterans and surviving spouses whose income and assets are below program thresholds. However, current VA regulations permit claimants to transfer excess assets prior to applying. Organizations market financial products and services to help prospective pension claimants transfer excess assets and become financially eligible for these benefits. An investigator from our Forensic Audits and Investigative Service team had phone conversations with representatives from 19 of these organizations to learn if the organization would transfer a claimant's excess assets, the types of services provided, and any fees charged. (See appendix I for more information on our scope and methodology.) 
Because VA’s pension benefits are meant for claimants with financial need, we selected portions of three of these calls that show organizations transfer significant assets to help claimants qualify for the benefits, and the types of services they provide to do so. The full transcripts of these three calls are provided below. Call 1: Caller is a GAO investigator phoning on behalf of his fictitious 86-year-old father who was a veteran, seeking VA pension benefits, who wants to learn about the services provided by the company. The company representative describes how his father can qualify for these benefits, despite having significant assets. (Whereupon, an outgoing call was placed by the GAO investigator to a company representative.) COMPANY REPRESENTATIVE: . GAO INVESTIGATOR: Hello? COMPANY REPRESENTATIVE: Hello, this is . GAO INVESTIGATOR: Hey, , this is . COMPANY REPRESENTATIVE: Hey, , how are you doing? GAO INVESTIGATOR: I’m doing good. I got your messages. I’m sorry, it’s just been a little nuts. COMPANY REPRESENTATIVE: Not a problem. GAO INVESTIGATOR: You still there? COMPANY REPRESENTATIVE: Yeah, I’m here. Yes. GAO INVESTIGATOR: I was calling — it’s your brother or your brother-in-law that I spoke to? COMPANY REPRESENTATIVE: My brother-in-law. GAO INVESTIGATOR: Yeah, I’m trying to make a decision here with my father. We are going to have to, you know, make some decisions on what we’re going to do with him. And I just wanted to see, you know, before we go draining all his resources, what our options are. COMPANY REPRESENTATIVE: Okay. You don’t — he’s not in a community yet or he is? GAO INVESTIGATOR: He’s not, he’s still living at his house. COMPANY REPRESENTATIVE: Okay. GAO INVESTIGATOR: But, you know, he’s got a lot of, you know, physical limitations, he’s got difficulty hearing, and he can’t really move around, so you know – COMPANY REPRESENTATIVE: Did his doctor say he needs assistance from another person on a regular basis? 
GAO INVESTIGATOR: Well, I imagine. I mean, I didn’t ask that question, specifically, but I’m sure he would. I mean, right now, you know, we’re kind of trying to take care of him ourselves, and you know, we’ve got somebody helping, but we’re going to need something more full time. COMPANY REPRESENTATIVE: Yeah. You guys are helping out with cooking, cleaning. Is he still able to drive or no? GAO INVESTIGATOR: No. COMPANY REPRESENTATIVE: Okay. So he needs transportation. You know, these are the things they are looking for. Did you say his vision is an issue? GAO INVESTIGATOR: No, his hearing, is what I said. COMPANY REPRESENTATIVE: (Laughter) I’m sorry. GAO INVESTIGATOR: And his, not yours. COMPANY REPRESENTATIVE: (Laughter) Well, maybe mine, a little bit. Anyhow, yeah, the VA kind of looks at, you know, daily activities — the activities of daily living. And if he can’t do some of those things, he needs assistance, you know, then he can qualify for the benefit. They don’t mean if somebody is completely bedridden or handicapped, they just mean if somebody needs assistance and help with some parts of their life. What we would be able to do – we have people that are still able to drive and live at home, but they can’t do certain — they can’t carry the bags from the car if they go grocery shopping, because they don’t have the dexterity or the strength. GAO INVESTIGATOR: All right. COMPANY REPRESENTATIVE: So, you know, the VA looks at it and says, they can’t even go shopping for themselves, they can’t carry the bags from the car, they can’t lift them, you know, how are they going to get them into the house? GAO INVESTIGATOR: Well, I’m sure that’s not a problem. I mean, he definitely is, you know, he needs help with just going to the bathroom, getting in and out of bed and stuff like that. COMPANY REPRESENTATIVE: Yeah. He needs help getting in and out of bed, getting to the bathroom, those are the things they’re looking at. 
Absolutely, he needs this assistance, and he can qualify for the benefit. And everything else is just about preparing yourself for the benefit, doing the paperwork and so forth. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: And that's a process, in and of itself. What I would suggest is get together, you know. This is — this doesn’t— this isn’t like a one-time sit-down and it’s all done, you know. This can take several weeks, and sometimes even up to six weeks, to get all the paperwork completed. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: So you know, but it’s a matter of getting started. You know, and that’s what I — you know, if your dad needs assistance, and he was a wartime Veteran, we can get him the benefit. All right? GAO INVESTIGATOR: Okay. Well, you know what, my big concern – yeah, you know, which I mentioned to your brother-in-law is, um — you know, he’s got some assets, and I don’t know how that affects things. COMPANY REPRESENTATIVE: You know, the assets come into play, and that’s part of the process. We would explain all that to you – what, what — where you need to go, how — what needs to be done. Ideally, an accredited attorney that we – that we work with, he’ll have that conversation with you. He’ll explain that to you in more detail. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: But anyone — and I will just tell you this. The VA allows you to qualify, regardless of what your assets are. And I’ve had people with over a million dollars qualify for this benefit. GAO INVESTIGATOR: Wow. COMPANY REPRESENTATIVE: So you know, you’ll hear you can only have this much money, you can do this. You’ll even be told you don’t qualify. GAO INVESTIGATOR: And how do you do that, though, I mean, that’s what I don’t understand. COMPANY REPRESENTATIVE: Well, you have to reposition the assets, that’s all. You know, like I said, that’s — that’s part of what the attorney will talk about. 
From a process standpoint, I’ll gather all the information that we need from you, what will go on the VA application. And we will get a letter back from our VA-accredited attorney, and he will outline and tell you you do or you don’t qualify. Some people qualify immediately; other people, like in your situation, if your family has some assets, you may have to jump through some hoops in order to get the benefit. But the VA outlines it and says, this is what you’re allowed to do, in order to qualify. And, you know, we’ll share that with you. We’ll show you exactly what you need to do, how to do it, because it has to be done a certain way in order to qualify. Look at this as kind of something you’re going to do one time, all right? This isn’t like doing your taxes, you know, where you need to remember it to understand it for next year. You are going to do this once, and it’s going to be out of your life. GAO INVESTIGATOR: Okay. Here’s a question that I have. Does he still have control of the assets? COMPANY REPRESENTATIVE: Your family will. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: Yeah, your family will. I mean, all his money, his monthly money, will go right into his checking account, just like it probably does, Social Security, pension, whatever. The VA benefit will go right into his checking account. All that money will keep going right into his account, and he will have access to that. GAO INVESTIGATOR: Okay, all right. Okay, just so I understand it, so you’re just talking about putting it under a different name or are you putting it in a special account? COMPANY REPRESENTATIVE: , here’s the thing. I can’t get into all that with you over the phone. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: It will get so complicated and so confusing. This conversation that I need to have with you will take about an hour, just to get the process started, and then we will get into all that stuff. 
Every person I have tried to help with this benefit, when they try to get to the — like into the high school level questions, before understanding the kindergarten and grade school level questions, they never get the benefit, because they can’t — they can’t under —they get so confused. So it’s almost like, once you’ve seen a dead body you can’t unsee it, and you can’t focus on anything else. And so what I’m trying to share with you, you know, if you just, you know, take a bite at a time, you know, like the old saying, you can’t eat an elephant in one bite, we need just a bite of your time. You will get through this and you’ll get the money. But if we try to jump ahead, you know, I’ll tell you, it’s never been successful. GAO INVESTIGATOR: Okay. What – COMPANY REPRESENTATIVE: And I hope you understand. I’m just giving you my expertise and experience in this. We do over — we submit over four hundred apps a month, and everybody gets the benefit, so we know how to do it, we know how to get it done. And nothing is going to be a surprise to you. Everything is going to be here, this is your option. If you want this, you’ve got to do this. And then it’s up for you to decide. But it’s just a matter of getting you to that point where you have all the facts, so you can make a decision. And so the questions you’re asking are all valid, you know, they are all the questions that we’ll be delving into very deeply. If you need a CPA involved, we have a CPA on our team. We have our attorney on our team that I use, . He’ll be a part of all the conversations if we need, so all throughout. And none of that is costing you any money, because that’s part of my fee. But what I’m saying is, all those questions that you have now, when you are ready for the answers, we’ll have those conversations. But right now, you’re not ready for the answers. It’s difficult to understand this, why this, what that? 
All those answers you are going to get from me right now are going to create more and more questions, and things are going to get so confusing for you. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: This process is already confusing enough, I’ve got to tell you. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: There are some three hundred to four hundred thousand applications a month — I mean, I shouldn’t say a month. The VA has over — between three hundred and four hundred thousand applications backlogged, sitting there, because people didn’t do the process right, and it will take them up to two years to get approved. You know, that — that’s — it is difficult. It has to be done a certain way, and I’ll get you there. I promise, I’ll get you there, but you just have to go through it step-by-step. GAO INVESTIGATOR: Okay. Well, the only other question I have then is the cost. What is the cost involved? COMPANY REPRESENTATIVE: If there is any cost, it would be with the attorney. They charge — they’ll charge a fee for setting up certain documents, and we’ll get to that, as well. The worst case, let’s say your dad, he has a house, and you’re not able to sell the house. See, while he’s living in it, they don’t care that he owns a home, but when he’s out of the house, they consider it an asset. We have to — we’ll have to do something with the house, as well. If you were planning on selling it, fine. If you weren’t planning on selling it, that’s fine, too, but we’ll have to address it. The worst case scenario would be about fourteen hundred bucks. That’s a worst case scenario. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: And when you understand what that all entails, you’ll be like, geez, fourteen hundred bucks, let’s find out tomorrow. That’s another thing, you know, when you understand everything that you get with that. 
And he’ll make sure everything is done the right way so that the VA can never come back at you, seeing that the house is protected, your mother is protected, you know. I’m just saying, there’s a whole lot to it, and to try to answer it over the phone is more than tough. GAO INVESTIGATOR: Okay. All right. So there’s no — that’s just an attorney fee? I mean, there’s no fee for you? COMPANY REPRESENTATIVE: Exactly. GAO INVESTIGATOR: There’s no fee for you, at all? COMPANY REPRESENTATIVE: No, not at all. GAO INVESTIGATOR: Does the VA pay you or something? COMPANY REPRESENTATIVE: Hang on a second. Let me do this. I hate to try to get you — are you busy during the day? GAO INVESTIGATOR: Ummmm. COMPANY REPRESENTATIVE: Is there like an hour of time that you and I can get together and get the process started, so I can show you how — how it all works? GAO INVESTIGATOR: Yeah, I mean, probably, but probably not until, you know, after the holidays. COMPANY REPRESENTATIVE: Okay. Then let’s do this. If you have your schedule, my schedule is tied up until the second week of January, already filled with seminars and things to — so people can come and see me. The second week I have at least two seminars, and I usually have thirty to forty people at each seminar. GAO INVESTIGATOR: Uh-huh. COMPANY REPRESENTATIVE: And then about half of those people sit down with me and want to go to the next step. GAO INVESTIGATOR: Right. COMPANY REPRESENTATIVE: So if I do two or four presentations, I mean, I’ve got thirty to fifty appointments during the second week of January. So if you and I can get together in the first week, I can get you started before all that mess starts. GAO INVESTIGATOR: Okay. Well, I wonder if it wouldn’t be beneficial to go to one of the seminars? COMPANY REPRESENTATIVE: Well, the seminar is in . GAO INVESTIGATOR: In where? COMPANY REPRESENTATIVE: In , but really, what I do there is more of a blanket meeting. 
If you already know you have a situation, you already know you have an interest, I go over that same information that I go over in the seminar. But the seminar, it’s just information, and I will be giving that to you face-to-face, and be able to collect the information and get started on the process. GAO INVESTIGATOR: Okay. Yeah, I mean, I don’t have a lot of questions, you know, I just want to know what types of products you’re talking about that we would — where the assets would go, how — I mean, are we talking about – COMPANY REPRESENTATIVE: It all depends on your dad’s needs. Right now we don’t know – I don’t know anything about your situation. I don’t know what your costs are, I don’t know what his expenses, his needs are. I don’t have any idea what the cash flow management requirements will be – GAO INVESTIGATOR: Yeah. COMPANY REPRESENTATIVE: — this year, next year, five years down the road. You know, as a financial advisor, you know, I come from the banking industry, where I worked in the trust department, and my clients were all multi, multi-millionaires. And all I did for them was identify what their needs were going to be year in and year out, into the future – GAO INVESTIGATOR: Right. COMPANY REPRESENTATIVE: — protecting their assets, so that they knew that money was going to be there (unintelligible). Like your dad, the last thing he wants to do is have his nest egg at risk. GAO INVESTIGATOR: Right. COMPANY REPRESENTATIVE: He’s going to need — he’s going to need income from it to maybe offset some of the cost of his retirement community, perhaps. GAO INVESTIGATOR: Uh-huh. COMPANY REPRESENTATIVE: I don’t know, you know, I don’t have any answers, at this point, because I don’t know what his needs are, what your family needs are, you know, how many kids are there, who all is involved. GAO INVESTIGATOR: I mean, basically, it’s just him. I mean, he’s got his Social Security, and then, if he qualifies for the VA Pension, he would have that. 
So I imagine that would be enough income for him. So it’s just a matter of doing something with the assets, so he doesn’t lose it. So – COMPANY REPRESENTATIVE: Exactly. And that’s something you and I will discuss and work on. Are you — are you handling his affairs now? GAO INVESTIGATOR: Yeah, uh-huh. COMPANY REPRESENTATIVE: So you take care of all of his bills? GAO INVESTIGATOR: Yeah. COMPANY REPRESENTATIVE: So you are the person who understands best, you know, what your parents, you know, what the family, you know, your father, your mother, your parents, what their requirements are. Now your mother has passed; is that correct? GAO INVESTIGATOR: Yes, uh-huh. COMPANY REPRESENTATIVE: Okay, so we’re just talking about your dad here. GAO INVESTIGATOR: Right. COMPANY REPRESENTATIVE: So here’s — the VA just increased the payment to the Veteran, single Veteran, to right around seventeen hundred a month, tax-free, so it’s a pretty substantial benefit; that’s over twenty thousand dollars a year. If you are looking at his — looking at what his Social Security is – GAO INVESTIGATOR: Right. COMPANY REPRESENTATIVE: — you add that, plus his VA, it may cover his long-term care facility. GAO INVESTIGATOR: Uh-huh, yeah, his Social Security is twelve hundred, so you’re talking about somewhere close to almost three thousand dollars a month. COMPANY REPRESENTATIVE: Yeah, exactly, so that’s not bad, that’s not bad. Now it depends on what kind of community he would be looking at, but you know, that’s. . . You know, the hardest part is getting started, and then once you get to a certain point, you’ll be like, yeah, I get it, I get it, now I understand, this is what we do. Let me ask you, is Tuesday the 3rd or is Wednesday the 4th a better day for you? GAO INVESTIGATOR: Well, probably Wednesday will be better for me. COMPANY REPRESENTATIVE: And you’re in ? GAO INVESTIGATOR: Yes. COMPANY REPRESENTATIVE: Okay. How does 10 a.m. work? 
GAO INVESTIGATOR: Are you talking about coming down to me or where? COMPANY REPRESENTATIVE: Yeah, absolutely, I’d come to you. GAO INVESTIGATOR: That sounds good, tentatively. I’ve got to check and make sure that — I have to check a couple things here, but I mean, it sounds good. COMPANY REPRESENTATIVE: What address is the best place to meet you? GAO INVESTIGATOR: Well, you know, I’m guessing that it might be just as good to do it at the office. I’ll tell you what, are you going to be around this afternoon? COMPANY REPRESENTATIVE: Yeah, do you want to give me a call back? GAO INVESTIGATOR: Yeah, let me give you a call back. Let me check the schedule and make sure it’s good. I think it would probably be easier just to do this at the office. COMPANY REPRESENTATIVE: Okay. What city is it? GAO INVESTIGATOR: I’m sorry? COMPANY REPRESENTATIVE: What city is your office in? GAO INVESTIGATOR: In . GAO INVESTIGATOR: Near . COMPANY REPRESENTATIVE: Near , okay, (unintelligible). All right, good. Give me a call back just to confirm if 10 a.m. works. If I don’t answer, just leave a message. I may go out and do some shopping (unintelligible). GAO INVESTIGATOR: All right. I’ll just leave a message on your voicemail. COMPANY REPRESENTATIVE: Yeah, and — good. GAO INVESTIGATOR: Sounds good. COMPANY REPRESENTATIVE: All right, . GAO INVESTIGATOR: Thanks for your time, I appreciate it. COMPANY REPRESENTATIVE: Nice talking to you. GAO INVESTIGATOR: Alright, Bye. Call 2: Caller is a GAO investigator phoning on behalf of his fictitious 86-year-old father who was a veteran, seeking VA pension benefits, who wants to learn about the services provided by the company. The company representative describes how his father can qualify for these benefits, despite having significant assets. (Whereupon, an outgoing call was placed by the GAO investigator to a company representative.) COMPANY REPRESENTATIVE: Hello? Hello? GAO INVESTIGATOR: Hi. COMPANY REPRESENTATIVE: Hi. This is . 
Did somebody call this number? GAO INVESTIGATOR: VA benefits. COMPANY REPRESENTATIVE: Okay. What can I help you with? GAO INVESTIGATOR: Well, I’m just trying to figure out my — this is for my father. COMPANY REPRESENTATIVE: Uh-huh. GAO INVESTIGATOR: And, you know, he’s not currently getting benefits. He gets Social Security. COMPANY REPRESENTATIVE: Right. GAO INVESTIGATOR: But, you know, I was — I’m trying to see if maybe he could qualify for benefits. But the problem is he’s got, you know, some assets, and I’m not sure, you know, if that precludes him from getting benefits or not. So I wanted to talk to somebody – COMPANY REPRESENTATIVE: No. No. Is he — does he need some help around the house or has he got some medical or physical impairments – GAO INVESTIGATOR: Yeah. COMPANY REPRESENTATIVE: — right now? GAO INVESTIGATOR: Yeah. I mean, he’s 86. I mean, mentally he’s fine. But, you know, physically he needs a lot of help in just, you know, walking and getting in and out of bed. COMPANY REPRESENTATIVE: Okay. GAO INVESTIGATOR: And, I mean, yeah, he needs help. COMPANY REPRESENTATIVE: Sure. If you will — if you will do me a favor, I’m going to send you — do you have an e-mail address? GAO INVESTIGATOR: Well, not really. But, I mean, I can probably get something. But what do you need? COMPANY REPRESENTATIVE: Well, I was going to send you a little form, and if you can just spend a few minutes and fill it out, then I can tell you if your father is available for benefits or not. GAO INVESTIGATOR: Okay. I mean, I mean, basically I’m assuming he – COMPANY REPRESENTATIVE: I probably could do this — I could do this over the phone, too. But right now I’m just going and jumping on a conference call. So I can call you back and I can ask you the questions I need to ask you, if you want. Maybe in — I’d say within a couple of hours I can get back with you. GAO INVESTIGATOR: Okay. Yeah, that might work. COMPANY REPRESENTATIVE: Okay. 
It is a means-tested and an asset-tested – uh, benefit, but – um, essentially there are legal work-arounds. And if you know, it’s basically you have to put together a good presentation for the Veterans Administration. And that’s what we do. We help people um — position assets and coordinate the presentation effort to the VA. So there’s really not many kinds — if, in fact, your father has lost some of the activities of daily living, then we really can’t get him qualified. So I’ll just – GAO INVESTIGATOR: Yeah. COMPANY REPRESENTATIVE: — make a short explanation like that. GAO INVESTIGATOR: Yeah. And just to kind of make it short, I mean, his income isn’t the thing, because he’s only getting Social Security. But he’s got assets that are probably — between his house and some savings and stuff, he’s probably, you know, a little bit over $500,000. And I’m wondering if that precludes him from qualifying. COMPANY REPRESENTATIVE: No. No, it doesn’t, especially if he’s got a little bit of uh — flexibility. How much is the house worth? The house is really not an issue at all. GAO INVESTIGATOR: Yeah, that’s probably about 200,000. COMPANY REPRESENTATIVE: Oh, so you have 300 in other stuff? Okay. Yeah, I’ve qualified people with that — beyond $700,000 worth of liquid assets. So that’s not the issue. But sometimes the older folks, you know, your father being 82 – GAO INVESTIGATOR: Eighty-six. COMPANY REPRESENTATIVE: Eighty-six, I’m sorry. Sometimes they — oh, they’re not — what would be the word? They’re sometimes control freaks, meaning sometimes what we have to do is retitle assets. He would still be totally in control of them, but not under his direct purvey. So if he can understand the strategy, he could understand that, you know, he’s entitled to the benefit. It could be — he’s single right now? GAO INVESTIGATOR: Yeah, yeah. His wife is dead. COMPANY REPRESENTATIVE: Okay. So, you know, basically he performed for his country. 
If he was able to get aid and attendance, he would get real close to — actually, this year is $19,736 per year. And if that means something to him, then we can help him out. If it doesn’t, then he’ll just have to go through spend-down and spend it. So we can — we can make it work, but he’s got to be willing to help us. Okay? We can’t force people to do something that they’re not wanting to do. GAO INVESTIGATOR: I got it. COMPANY REPRESENTATIVE: Okay. It’s real simple – GAO INVESTIGATOR: What sorts of things are you talking about? I mean, where do we put it? COMPANY REPRESENTATIVE: Well, for instance, do you have power of attorney for him right now? GAO INVESTIGATOR: Well, I don’t. But, you know, he’s pretty — he’s pretty lucid. I mean, I — I can probably get it. COMPANY REPRESENTATIVE: Well, typically speaking, for people that have had a child or relative assigned a power of attorney, then they’ve kind of realized that, you know, if something happens, they may need some help. Somebody acting in their financial capacity if they get in a situation where they can’t perform or somebody to make some medical decisions for them. So at that point in time, they’ve kind of acquiesced to the fact that, you know, at this point in my life, I need a little bit of help. So I was going to say, if he’d already given you power of attorney, then essentially what he said is, you know, he trusts you. GAO INVESTIGATOR: Uh-huh. COMPANY REPRESENTATIVE: And if that’s the case, then essentially it’s going to be that type of a relationship where things may be put into special types of trusts where he is still — where you would have a fiduciary responsibility to him. So it’s a contractual obligation. It’s — all the money is for the benefit of him, but it’s not under his direct control. Now, that’s not necessarily the only way it can be done. There are also what we call care contracts where essentially he can kind of prepay in a contractual manner for his future care. 
It gets it out of his immediate possession and would help qualify for those types of benefits. So there’s a myriad of strategies. I work with an attorney. We’ll make sure it works for you. But he just has to understand that either he wants to get the benefit or he doesn’t. If he does, we can make it work. If he doesn’t, then that’s okay, too. GAO INVESTIGATOR: Right. Well, I mean – COMPANY REPRESENTATIVE: It’s up to him. GAO INVESTIGATOR: — I don’t think he wants to lose his assets. And, you know, you know, we don’t want him to lose his assets. And that’s — that’s the biggest concern now. COMPANY REPRESENTATIVE: Right. There’s — there’s a — yeah, well, he will if he needs the care. Then the other issue you have coming up, too, of course, is sometimes folks that mostly qualify for the benefit will essentially possibly qualify for Medicaid, too. And what that means is that, yes, I mean, if his expenses go to $6-7-8-9-10,000 a month, then he will lose his assets unless he does something to protect them. So we can help out in that regard, too. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: It’s — you really don’t have — you don’t have an e-mail address really? GAO INVESTIGATOR: Well, I can get one for you, yeah. I’m not real computer — I’m not a computer guy. That’s all. COMPANY REPRESENTATIVE: Okay. Well, I can appreciate that. We probably — I’m thinking here — is this your cell number? GAO INVESTIGATOR: Uh-huh. Uh-huh. COMPANY REPRESENTATIVE: You wouldn’t be able to print it if I gave it to you? GAO INVESTIGATOR: Do you have something on your web site? COMPANY REPRESENTATIVE: I don’t have a — I don’t have the form embedded. Let’s just talk in a couple of hours. I’ll ask you the question. You know, I can tell pretty much right now that I can help you out. It’s just a matter to the extent where you need to ask your father — well, probably before you talk to me again. 
Say, Hey, Dad, you know, I talked to an accredited VA application guy, and he says that, you know, we can get you the benefit but there is some strategy involved. And if you want to hear it, fine. If not, that’s okay, too. You know, that’s just really what you need to do with this. GAO INVESTIGATOR: All right. What do you guys charge for that? COMPANY REPRESENTATIVE: Well, for the — there’s two ways. Let’s just leave it at a thousand — $1,050, okay? GAO INVESTIGATOR: Just a straight fee? COMPANY REPRESENTATIVE: Yeah, $1,050. Of course, what we really want to do is to be able to – we can give you the recommendations and turn you loose, and you go out there on the street and try to implement it. But, you know, you probably really want to kind of go through us and let us help you in the full way. I will send you enough information and the attorney’s information so that you’ll understand that we really are a full-fledged service organization and can really help you through this mix. And then, you know, once we help you out, you can either go down to the local VA office and have them fill out the paperwork. Is your father — is your father in the same town as you or is he — where is your father? GAO INVESTIGATOR: Yeah, he’s not that far away. About, you know, seven, eight miles away. COMPANY REPRESENTATIVE: Okay. Where — where are you located? GAO INVESTIGATOR: I’m actually in north . COMPANY REPRESENTATIVE: Oh, are you? What part? GAO INVESTIGATOR: Well, are you familiar with at all? COMPANY REPRESENTATIVE: Yes. I went to so – GAO INVESTIGATOR: Oh, no kidding. COMPANY REPRESENTATIVE: Yeah. And my sister lives in right now, actually. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: And actually we lived in for a few years when I was real young. Yeah, I’m familiar with . I’m a native of . So, yeah, we can help out. So I wish you had some kind of an e-mail. GAO INVESTIGATOR: Well, let me see if I can do something. 
I mean, you know, my brother might have something that I can — I can use. COMPANY REPRESENTATIVE: Okay. Yeah, because, really, this is a family discussion. You know, all the kids — how many kids are there besides you? GAO INVESTIGATOR: Just me and my brother. COMPANY REPRESENTATIVE: All right. So you guys are really going to need to put your heads together and say, hey, this makes sense for us or it doesn’t. You know, dad is going to have to cooperate or he’s not. And, you know, sometimes, to be honest with you — I deal with older folks, you know — they just don’t give a rat’s fanny. And so you can’t make the horse drink, you know. GAO INVESTIGATOR: I don’t think – COMPANY REPRESENTATIVE: But if he wants to protect his asset – GAO INVESTIGATOR: Yeah, I mean, he’s — you know, mentally he’s fine. I don’t think that’s going to be a problem. I mean, he – COMPANY REPRESENTATIVE: All right. Well, if he wants to protect his assets – GAO INVESTIGATOR: Right. COMPANY REPRESENTATIVE: Most of the time — most of the time they want their kids to wind up with the money. And sometimes, you know, they don’t care as much. But I can’t get in your father’s head, so you need to kind of ask him if that’s the case. If he wants to protect the money, you can have him protect the money. GAO INVESTIGATOR: Okay. All right. COMPANY REPRESENTATIVE: Okay? GAO INVESTIGATOR: Sounds good. All right. I appreciate it. Well, let me see if I can get an e-mail address and give you a buzz back. COMPANY REPRESENTATIVE: All right. Hey, let me do this. Let me give you my cell number, please, so you should be — because I’m in and out so much. It’s – GAO INVESTIGATOR: Uh-huh. COMPANY REPRESENTATIVE: — . GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: . GAO INVESTIGATOR: Okay. All right. Got it. COMPANY REPRESENTATIVE: All right, buddy. Take care. GAO INVESTIGATOR: I’m sorry. What did you say your name — what did you say your name was again? I’m sorry. GAO INVESTIGATOR: . Oh, that’s right,. 
COMPANY REPRESENTATIVE: , . Yeah. All right. GAO INVESTIGATOR: Okay. Thank you. COMPANY REPRESENTATIVE: Okay. Thank you. Bye now. Call 3: Caller is a GAO investigator phoning on behalf of his fictitious 86-year-old father who was a veteran, seeking VA pension benefits, who wants to learn about the services provided by the company. The company representative describes how his father can qualify for these benefits, despite having significant assets. (Whereupon, an outgoing call was placed by the GAO investigator to a company representative.) SPEAKER ONE: , , can I help you? GAO INVESTIGATOR: Yeah, I hope so. I want to talk to somebody about possibly getting VA benefits for my father. COMPANY REPRESENTATIVE: Okay. And your name? GAO INVESTIGATOR: My name is . COMPANY REPRESENTATIVE: Hi, . Can you tell me a little bit about your dad’s situation? GAO INVESTIGATOR: Well, he’s a World War II veteran. COMPANY REPRESENTATIVE: Okay. GAO INVESTIGATOR: He’s 86 years old. Are you there? COMPANY REPRESENTATIVE: What is the nature of his illness? GAO INVESTIGATOR: I’m sorry? COMPANY REPRESENTATIVE: Can you tell me about his illness, please. GAO INVESTIGATOR: Well, you know, aside from getting old? COMPANY REPRESENTATIVE: Yeah. GAO INVESTIGATOR: He’s having a lot of — he can’t walk too well. He’s got a lot of, you know, joint problems and stuff like that. So he can’t — he needs a lot of help getting in and out of bed, taking baths and stuff like that. He’s also got — he doesn’t hear very well. COMPANY REPRESENTATIVE: And how old is your dad? GAO INVESTIGATOR: He’s 86. COMPANY REPRESENTATIVE: God bless him. I guess just wearing out. COMPANY REPRESENTATIVE: Where does he live? Is he living with you or is he in a facility? GAO INVESTIGATOR: No, he’s got a place, he’s got a house. COMPANY REPRESENTATIVE: Ok. Are you planning on leaving him at the house, staying at the house? Is he going to have any in-home health care coming in? 
GAO INVESTIGATOR: Yeah, I mean, in-home, I would think, because I mean, mentally he’s fine. COMPANY REPRESENTATIVE: Have you checked with an in-home health care agency to come to the house? GAO INVESTIGATOR: Well, yeah, he’s got people coming in already. COMPANY REPRESENTATIVE: He does. Okay. GAO INVESTIGATOR: I mean, that’s kind of why I’m – COMPANY REPRESENTATIVE: I see. The reason why I ask those questions is that in order to get VA benefits, called Aid and Attendance, which is a benefit that the government will pay up to nineteen fifty per month, tax-free, and the government usually pays that 9 months out from the time we apply. And you get also a retroactive, so it would be 8 months on top of that. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: It’s that they need to have something in place like in-home health care already being used or about to be used, or he lives in an assisted-living facility or a nursing home. And those are key. One of those three things have to be in place or about to be in place. GAO INVESTIGATOR: He is getting help already at the house. I mean, that’s one of the things. I mean, we’re spending a lot of money. And you know, he’s got — he’s got some assets, but I mean, as far as income, all he’s got is his Social Security. COMPANY REPRESENTATIVE: Tell me about his Social Security. What is coming in per month, as far as income? GAO INVESTIGATOR: He’s got eleven fifty coming in a month. COMPANY REPRESENTATIVE: Okay. Anything else? GAO INVESTIGATOR: Well, no, because he’s got some — you know, he owns his own house. COMPANY REPRESENTATIVE: Right. I’m just asking; I don’t know your situation. But eleven fifty a month in Social Security. No other income is coming in. No savings? GAO INVESTIGATOR: No, he’s got some savings and stuff, but I mean, I’m concerned, again, he’s going to lose all that. COMPANY REPRESENTATIVE: Right. See, how we work — first of all, I’m accredited by the VA. 
And what we do is we plug into the software to see what dad qualifies for. And what we plug into the software is money going in, money going out, money saved, illnesses, what his illness issues are, in other words, what the home health care agency is doing for dad. All of that plays a major role in crunching the numbers to see what dad qualified for. And in most cases, , it’s not a matter of if he qualifies, it’s a matter of how much. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: That’s going to have to be our next conversation. I’m just trying to get a little information to see if I can guide you in the right direction. My question to you regarding the home health care, do you have an idea what they’re charging you per month? GAO INVESTIGATOR: Well, you know, it’s probably around a little over two thousand, maybe twenty-five hundred a month. COMPANY REPRESENTATIVE: Okay. And so here’s what you have, . You have more money going out than coming in, as far as income. GAO INVESTIGATOR: Right. COMPANY REPRESENTATIVE: So you have a shortfall of about fourteen hundred dollars, thirteen fifty, a month going out for care. And that’s a good thing, when it comes to applying for the VA benefits. There’s other factors, I’m just giving you kind of an overview. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: And it could be, you know, dad may qualify for up to the full nineteen fifty a month. I don’t care if twenty-five hundred is coming in, and twenty-five hundred is going out the door, the software, with all of the bells and whistles of what we have to plug into it, it may kick out that dad needs nineteen fifteen a month. GAO INVESTIGATOR: Okay, but here’s the problem. COMPANY REPRESENTATIVE: He may have a shortfall of fourteen hundred. GAO INVESTIGATOR: Yeah, but here’s my concern though, is that he’s got — he owns his own house, and then he’s got like a mutual fund and he’s got some savings. 
And of course, that’s not going to last very long with this negative, you know, income that he’s got going on. COMPANY REPRESENTATIVE: Correct. GAO INVESTIGATOR: But how is he going to qualify for anything with those assets? COMPANY REPRESENTATIVE: Well, the VA has different scenarios. For example, the VA will allow us to do estate planning to reposition the assets so he can qualify. The VA may be able to allow him to keep a certain amount. How much money are we talking about in savings or stocks or bonds or mutual funds total? Just off the top of your head. You don’t have to be exact. GAO INVESTIGATOR: I’m guessing he’s got maybe ninety thousand in savings and about two hundred — about a quarter of a million, probably, in mutual funds. A little over two fifty, two sixty maybe. COMPANY REPRESENTATIVE: All right. So if he’s not opposed, there’s like several scenarios. So let’s just talk about money. Those with assets of which we would call your dad. Is he opposed to repositioning the assets to where — are you the power-of-attorney, ? GAO INVESTIGATOR: Like I say, he’s got his mental facilities, so I’m not. I mean, I could be, but I mean, at this point, he’s still able to function for himself. COMPANY REPRESENTATIVE: Well, your issues here are you have about a quarter of a million dollars plus cash. The government is going to want him to use his money first, if we don’t do estate planning, which we’re allowed to do, according to the VA parameters. GAO INVESTIGATOR: Okay, so what does that mean? Where — what would you do? COMPANY REPRESENTATIVE: What that means is basically is repositioning the assets to where – it may – and I don’t — again, the software tells us what we can and can’t do. But I’m just going to give you a — kind of a hypothetical. Uh — For example, you may be able to reposition, reallocate those funds into a trust that , Jr. — if you’re a Jr. – I’ll just – , you – GAO INVESTIGATOR: . COMPANY REPRESENTATIVE: — would be the trustee of. 
And we’re allowed to apply for VA benefits the day after, by reallocating those funds, so that dad can qualify. And he may get nineteen fifty a month, tax-free, plus retroactive, for the 8 months waiting. So he may get a full check of about almost twenty something thousand dollars, and the funds thereafter come each month to you tax-free. Does he want that? I don’t know. Those are some of the scenarios that the software will kick out, and let us know what we can and can’t do. But the bottom line is, if you went to the VA directly and told them — because you would have to be forthright, and tell them that you had this money — they would reject you immediately, until you spend down to your last fifteen hundred dollars. Or there are options that you could do. And that’s where an accredited VA claims agent comes in, myself, because we work with attorneys that do estate planning that are able to do these type of things. So those are the questions you want to talk to your dad about, even though he may have his faculties, and he may be able to make decisions. Is he willing to pull the trigger and let you make the decisions? Because that’s what he may have to do. GAO INVESTIGATOR: Well, I’m just – COMPANY REPRESENTATIVE: I’m just giving you one of the scenarios. But our niche is that we deal with people with assets, if they are willing to let the power-of-attorney make those decisions, then we can apply for VA benefits without a hiccup. GAO INVESTIGATOR: Okay, all right. COMPANY REPRESENTATIVE: So those are the questions that I probably would talk to my dad about. GAO INVESTIGATOR: He’s pretty reasonable. COMPANY REPRESENTATIVE: Because quite frankly, there is a gap. And dad, who knows, can get worse, and then you may have to put him into a nursing — an assisted-living facility, which is twice what you’re paying now. Estate, or do you want to spend it down? And those are the questions — those are hard questions to ask. COMPANY REPRESENTATIVE: The economy. 
GAO INVESTIGATOR: Yeah, and we’re putting out more than, you know, he’s only got a little bit coming in with the social security. That’s not covering it. COMPANY REPRESENTATIVE: No, that’s right. And what these types of estate planning devices, that’s allowed, according to the VA, it’s real simple. I mean, they make it very clear that the — by the way, are you the power-of-attorney? GAO INVESTIGATOR: I don’t have a power-of-attorney, but I can probably get one. COMPANY REPRESENTATIVE: I would do that yesterday. I would — forget whether we met each other or not. You need to get that done. What I would suggest — can I make a suggestion? GAO INVESTIGATOR: Uh-huh. COMPANY REPRESENTATIVE: Go to Office Depot – GAO INVESTIGATOR: Yes. COMPANY REPRESENTATIVE: — get a general power-of-attorney in place. Have a notary notarize it, which will make it legally binding that day of notary. And now you have a power-of-attorney in place, so that if anything happens to dad, God forbid, he has a stroke and he becomes mentally incapacitated, you’ve already got something in writing where you can make decisions for him, and you don’t have to go through the court system. GAO INVESTIGATOR: That makes sense. COMPANY REPRESENTATIVE: So I would do that immediately. And I would tell dad. He wouldn’t be opposed to that, would he? COMPANY REPRESENTATIVE: That doesn’t change. I mean, he still makes his own decisions, even with the power-of-attorney in place. The power-of-attorney is only in case he does become incapacitated. GAO INVESTIGATOR: Uh-huh. COMPANY REPRESENTATIVE: Okay. You are basically doing preventive medicine. And that’s what we’re suggesting here. If the software kicks out — and I don’t know until I get a fact-finder filled out by you in detail, and it’s an 8-page fact-finder, it takes me about 7 hours with the attorney to go through all this. And we don’t charge to fill out the VA forms. 
We do not charge to represent dad for the VA benefits, but we do charge a flat rate to do the seven hours, eight hours of due diligence to figure out what is going to be the right avenue, because they are only going to have one scenario that’s going to fit dad’s situation, once we get that fact-finder in. Because once we get that fact-finder in, the software tells us exactly what we can and can’t do. GAO INVESTIGATOR: All right. And how much is that? What’s the cost of that? COMPANY REPRESENTATIVE: Fifteen hundred dollars. GAO INVESTIGATOR: Fifteen hundred, okay. COMPANY REPRESENTATIVE: But it’s not a matter of if dad qualifies, it’s a matter of how much. I will tell you, because he’s a living vet, our experience from the software, the software will kick back between sixteen to nineteen hundred dollars that he would qualify for, because he’s a living vet, whereas if it was mom, and dad was dead, the surviving spouse always gets less. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: Now if dad qualifies for the nineteen fifty — let’s just use that as an example — times, ah, he’ll get a check on the ninth month, if we apply for it yesterday, and got everything in place, he would get a check from the government for seventeen thousand five hundred fifty dollars, tax-free. And then, each month thereafter, he would get nineteen fifty coming in, each month, tax-free. GAO INVESTIGATOR: Whoa. COMPANY REPRESENTATIVE: That’s how that works. I’m here to tell you, that for fifteen hundred dollars, you’ll get your money back on the first month that you apply, basically. GAO INVESTIGATOR: Yeah, yeah. COMPANY REPRESENTATIVE: But once we do what we need to do, and if he’s not objected — objecting to the reallocating and repositioning of those funds, because quite frankly, at 86, I know he has two hundred and fifty thousand in mutual funds, but you know, that’s a concern to me right there, because of the loss and what’s going on in the economy. GAO INVESTIGATOR: Uh-huh. 
COMPANY REPRESENTATIVE: So is he willing to pull the trigger and get it out of harm’s way, so that he would get between 4 to 6 percent, and not — and not at any risk? Because if we do reposition the funds, it’s very likely that it has to be an account that cannot go backwards. GAO INVESTIGATOR: All right. So what type of thing are you talking about? COMPANY REPRESENTATIVE: It could be CDs, it could be annuities, but the point is, it has to be an account that’s protected, that can’t go backwards. There’s not an attorney, that I know that’s accredited, that would take any case that’s going to be tied into stocks, bonds, or mutual funds, because they can lose their base, they can lose their principal, they can lose their gains. And the attorney signs off on that stuff, when he represents the VA. GAO INVESTIGATOR: Okay. And if he’s putting it into something, and he’s getting 4 to 6 percent, does that money go to him or where does that go? COMPANY REPRESENTATIVE: If it stays into the account, it goes to him. How it works, basically, , it’s that the power-of-attorney is the decision maker with dad. You become the trustee. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: You are the pivot, you are the person we go to. Because everything has to be reallocated out of dad’s name, titled to the trust, so that controls it, cuts the checks. Dad’s allowed to keep money in his account, that’s not a question. It’s a question of how much is he allowed to keep in his account. That depends on the software coming back and telling us what he’s allowed to keep, what he’s not allowed to keep. GAO INVESTIGATOR: Uh-huh, uh-huh. COMPANY REPRESENTATIVE: Do you follow? GAO INVESTIGATOR: If I use any of that money for him or for me, I have to count that as income? COMPANY REPRESENTATIVE: Great question. Let’s talk about for him first. If you use the money for him — first of all, the Trust will — Dad has what? Am I correct by saying he has over two fifty, combined, like three forty? 
GAO INVESTIGATOR: Yeah. Well, like I say, he’s got about ninety in savings and another two — maybe about two sixty in a mutual fund. COMPANY REPRESENTATIVE: So three fifty he has total. GAO INVESTIGATOR: Okay, yeah. COMPANY REPRESENTATIVE: So what would happen, in this Trust account, visualize it as there’s a checkbook access. In the checkbook access, you’re able — you’re going to have up to three fifty, what’s going to be liquid is going to be roughly about close to a hundred and fifty thousand dollars, or a hundred thousand minimum. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: So that’s for incidentals, for dad’s needs, for whatever. It doesn’t make a difference. I don’t have to know what it’s for. GAO INVESTIGATOR: Okay. But if I use that, does somebody, either I or him, have to count that as income? COMPANY REPRESENTATIVE: Well, if he cashes in his – are these IRAs? Do you know? IRAs, non-401(k)s, non-retirement plans – GAO INVESTIGATOR: Right. COMPANY REPRESENTATIVE: — then, no, you could use them into the account — and they could be taxable — for whatever, it’s not countable as income. But if they are IRAs, then you would have to cash in the IRAs and then it would become income. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: But the other account, when it’s put into the put and keep account — let’s say you have three hundred and fifty thousand. A hundred to a hundred fifty goes into the checkbook access. The other two hundred or whatever goes into a put and keep account earning four to six percent. GAO INVESTIGATOR: Is that like an annuity or something? COMPANY REPRESENTATIVE: That doesn’t earn any interest. That’s accessible dollars, liquid dollars, when you need it for emergency. The other account will earn 4 to 6 percent. So it depends on what you want to put into that other account, and how much you want to keep liquid. GAO INVESTIGATOR: Okay. Well, the account that’s earning 4 to 6 percent, what is that in? Is that an annuity or what is that? 
COMPANY REPRESENTATIVE: It would be an annuity that has accessibility to it, but it’s tax-free, it’s not being — it’s not being taxed. GAO INVESTIGATOR: Okay. All right. But I wouldn’t have access to that money? COMPANY REPRESENTATIVE: You will have access to that money. Each year you have access to it, up to 10 percent free withdrawal, with no penalty. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: And if — but that’s why we want to keep some of that money out in the Trust account checkbook, that is basically accessible, totally liquid. So the software will kick out what we can and can’t do. I’m projecting that probably a hundred and fifty of it, up to a hundred and fifty, could be liquid. Now you may not need a hundred and fifty liquid. So the more you put into the annuity, the more interest you’re going to earn on those funds. GAO INVESTIGATOR: Uh-huh. COMPANY REPRESENTATIVE: That’s a decision you have to make with dad. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: But from my experience on the software, I’ve seen between a hundred and fifty or a hundred thousand go into the annuity — checking account, and the rest goes into the annuity. GAO INVESTIGATOR: Uh-huh. Well, hopefully, you’re talking this VA thing. Is that really — the nineteen or whatever that he would qualify for, is that a pension or what is that? COMPANY REPRESENTATIVE: It’s — it’s — I’m sorry. It’s going to be considered what is called Aid and Attendance. GAO INVESTIGATOR: Okay. COMPANY REPRESENTATIVE: It’s aiding him with his attendance for his care, and that is the home health care. Remember, I mentioned that there’s three things that have to be in place in order for us even to apply for the VA benefits called Aid and Attendance. And that he is already getting aid, you know, from a home health care, or assisted-living or in a nursing home. GAO INVESTIGATOR: Uh-huh. All right. COMPANY REPRESENTATIVE: And so we’re applying for specifically that. That’s all I deal with. 
I don’t deal with any of the other benefits that the VA has. GAO INVESTIGATOR: Okay. All right, all right. COMPANY REPRESENTATIVE: But you do have some obstacles. You have some issues that you need to discuss with dad. If you’re interested, I believe that it’s a fit. It’s not a matter of if he qualifies, it’s a matter of how much. But the computer will tell us what we can and can’t do. And then, if you like, I can e-mail you the fact-finder and the information. There’s two forms I would send to you that you would send back to me, signed, with a check, and the address is on the fact-finder. If you like, I can e-mail it to you, if you have an e-mail address. GAO INVESTIGATOR: I’ll tell you what, I want to talk to him about it first. COMPANY REPRESENTATIVE: Okay, great. Just keep us in mind. You have our number; give us a call. GAO INVESTIGATOR: Okay. I’m trying to think if I have any other questions for you. I was just trying to write down a couple things here. All right, I mean, I guess that’s it. COMPANY REPRESENTATIVE: The problem that you have right now is that you have assets. We have to definitely — I know, from experience, that if you have assets, there may be a strong possibility of repositioning some of those assets. And there’s a way to reposition some to the trust and there’s a way to reposition some to dad, and there’s a way to reposition to . GAO INVESTIGATOR: Right, right. COMPANY REPRESENTATIVE: That all comes from the software, once it kicks it out. GAO INVESTIGATOR: All right. And you said that the cost is fifteen hundred? COMPANY REPRESENTATIVE: Flat rate, yeah, no extra costs. GAO INVESTIGATOR: All right. Well, let me talk to him and I’ll get back to you. COMPANY REPRESENTATIVE: All right. Nice meeting you, . GAO INVESTIGATOR: Thank you for your time. COMPANY REPRESENTATIVE: Bye bye. Daniel Bertoni, (202) 512-7215 or bertonid@gao.gov. VA Enhanced Monthly Benefits: Recipient Population Is Changing and Awareness Could be Improved. GAO-12-153. 
Washington, D.C.: December 14, 2011. VA’s Fiduciary Program: VA Plans to Improve Program Compliance and Policies, but Sustained Management Attention is Needed. GAO-10-635T. Washington, D.C.: April 22, 2010. VA’s Fiduciary Program: Improved Compliance and Policies Could Better Safeguard Veterans’ Benefits. GAO-10-241. Washington, D.C.: February 26, 2010. Veterans’ Benefits: Improved Management Would Enhance VA’s Pension Program. GAO-08-112. Washington, D.C.: February 14, 2008. Medicaid Long Term Care: Few Transferred Assets before Applying for Nursing Home Coverage; Impact of Deficit Reduction Act on Eligibility Is Uncertain. GAO-07-280. Washington, D.C.: March 26, 2007. Medicaid: Transfer of Assets by Elderly Individuals to Obtain Long-Term Care Coverage. GAO-05-968. Washington, D.C.: September 2, 2005.
The VA pension program is intended to provide economic benefits to wartime veterans and survivors with financial need. GAO was asked to examine (1) how the design and management of VA’s pension program ensure that only those with financial need receive pension benefits and (2) what is known about organizations that are marketing financial products and services to enable veterans and survivors to qualify for VA pension benefits. GAO’s study included a review of VA’s policies and procedures, site visits to VA’s three Pension Management Centers, and online research and interviews of organizations that market financial and estate planning services to help veterans and survivors qualify for VA pension benefits. The Department of Veterans Affairs’ (VA) pension program design and management do not adequately ensure that only veterans with financial need receive pension benefits. While the pension program is means tested, there is no prohibition on transferring assets prior to applying for benefits. Other means-tested programs, such as Medicaid, conduct a look-back review to determine if an individual has transferred assets at less than fair market value, and if so, may deny benefits for a period of time, known as the penalty period. This control helps ensure that only those in financial need receive benefits. In contrast, VA pension claimants can transfer assets for less than fair market value immediately prior to applying and be approved for benefits. For example, GAO identified a case where a claimant transferred over a million dollars less than 3 months prior to applying and was granted benefits. Also, VA’s process for assessing initial eligibility is inadequate in several key respects. The application form does not ask for some sources of income and assets such as private retirement income, annuities, and trusts. As a result, VA lacks complete information on a claimant’s financial situation. 
Also, the form does not ask about asset transfers—information VA needs to determine whether these assets should be included when assessing eligibility. In addition, VA does not verify all the information it does request on the form. For example, VA does not routinely request supporting documents, such as bank statements or tax records, unless questions are raised. VA’s fiduciary program, which appoints individuals to manage the financial affairs of beneficiaries who are unable to do so themselves, collects financial information that may affect some pension recipients’ eligibility, but VA pension claims processors do not have access to all this information. Further, guidance on when assets should be included as part of a claimant’s net worth is unclear, and VA claims processors must use their own discretion when assessing eligibility for benefits, which can lead to inconsistent decisions. GAO identified over 200 organizations that market financial and estate planning services to help pension claimants with excess assets meet financial eligibility requirements for these benefits. These organizations consist primarily of financial planners and attorneys who offer products such as annuities and trusts. GAO judgmentally selected a nongeneralizable sample of 25 organizations, and GAO investigative staff successfully contacted 19 while posing as a veteran’s son seeking information on these services. All 19 said a claimant can qualify for pension benefits by transferring assets before applying, which is permitted under the program. Two organization representatives said they helped pension claimants with substantial assets, including millionaires, obtain VA’s approval for benefits. About half of the organizations advised repositioning assets into a trust, with a family member as the trustee to direct the funds to pay for the veteran’s expenses. About half also advised placing assets into some type of annuity. 
Some products and services provided, such as deferred annuities, may not be suitable for the elderly, who may not be able to access all of their funds for their care within their expected lifetime without facing high withdrawal fees. Also, these products and services may result in ineligibility for Medicaid for a period of time. Among the 19 organizations contacted, the majority charged fees, ranging from a few hundred dollars for benefits counseling to $10,000 for establishment of a trust. Congress should consider establishing a look-back and penalty period for pension claimants who transfer assets for less than fair market value prior to applying, similar to other federally supported means-tested programs. VA should (1) request information about asset transfers and other assets and income sources on application forms, (2) verify financial information during the initial claims process, (3) strengthen coordination with VA’s fiduciary program, and (4) provide clearer guidance to claims processors assessing claimants’ eligibility. In its comments on this report, VA concurred with three of GAO’s recommendations and concurred in principle with one, citing concerns about the potential burden on claimants and recipients of verifying reported financial information. VA agreed to study the issue further.
Methadone, a long-acting opioid medication, is available as a liquid, a solid tablet (5 and 10 mg), a rapidly dissolving wafer or diskette (40 mg), or a powder. Liquid methadone is most commonly used for addiction treatment, while the 5 and 10 mg tablets are most often prescribed for pain management. FDA considers methadone safe and effective for both pain management and addiction treatment, although not all forms of methadone are FDA approved for both of these purposes. OTPs offer methadone maintenance treatment, including counseling, for people addicted to heroin and certain prescription drugs. Daily doses of methadone help normalize the body’s neurological and hormonal functions that have been impaired by the use of heroin or misuse or abuse of other short-acting opioids. When starting treatment, individuals go to an OTP daily to take their methadone dose under observation, although patients may receive a single take-home dose for a day that the clinic is closed for business. After a few months, they may become eligible for unsupervised take-home doses. The National Institute on Drug Abuse notes that 1 year is generally the minimum for methadone maintenance treatment, and that some individuals will benefit from treatment over several years. Buprenorphine and levomethadyl acetate (LAAM) are also FDA-approved medications for treating opioid addiction. When used for addiction treatment, methadone must be dispensed by an OTP that is certified by SAMHSA and registered with DEA. As of February 2009, about 1,200 OTPs were operating nationwide, but not all states have OTPs. (See fig. 1.) OTPs are operated by private for-profit organizations, private nonprofit organizations, hospitals, or government agencies. FDA approved methadone for treating pain in 1947, but from the early 1970s until the late 1990s the drug was primarily used for treating addiction. 
In the mid-1990s, various national pain-related organizations began to issue guidelines for treating and managing pain, including using opioids to treat both cancer and noncancer pain. For example, the practice guidelines issued by the Agency for Health Care Policy and Research informed physicians and other health care professionals about the management of acute pain in 1992 and cancer pain in 1994. In 2001, the Joint Commission, a national health care facility standards-setting and accrediting body, implemented pain standards for hospital accreditation that required health care providers and hospitals to ensure that their patients received appropriate pain treatment. At first, methadone was prescribed more for the treatment of cancer pain, but it has been increasingly prescribed for the treatment of chronic noncancer pain. Methadone’s advantages include that it costs less than other opioids used to treat pain, and it comes in multiple forms. Unlike in addiction treatment, where methadone generally must be dispensed by OTPs, methadone used to treat pain may be prescribed by an appropriately licensed and registered practitioner and dispensed by licensed and registered retail pharmacies. Licensed and registered practitioners may also dispense methadone directly to patients for pain management, but DEA officials said that it is not a common practice. DEA, on behalf of the Attorney General of the United States, is the agency primarily responsible for enforcing the Controlled Substances Act. Under the act, controlled substances are classified into five schedules based on the extent to which the drug has an accepted medical use, its potential for abuse, and its degree of psychological or physical dependence. Schedule II controlled substances—which include opioids such as morphine, oxycodone, and methadone—have a currently accepted medical use and a high potential for abuse, and may lead to severe psychological or physical dependence. 
DEA’s regulation of the manufacturing, distribution, dispensing, and prescribing of controlled substances, including Schedule II drugs, encompasses the following: Manufacturing. DEA limits the quantity of Schedule II controlled substances that may be produced by each manufacturer in the United States each year. DEA determines these quotas based on a variety of factors, including disposal and inventories. DEA also sets aggregate production quotas that limit the production of bulk raw materials used to manufacture Schedule II controlled substances. Distribution. DEA regulates transactions involving the sale and distribution of Schedule II controlled substances by manufacturers and wholesale distributors. Manufacturers and distributors are required to report their inventories of controlled substances to DEA, and these data are available for monitoring the distribution of controlled substances throughout the United States and identifying retail registrants that received unusual quantities of controlled substances. Dispensing and prescribing. Practitioners who dispense, administer, or prescribe controlled substances must obtain a valid registration. SAMHSA is the lead federal agency addressing substance abuse and mental health services. Its mission is to build resilience and facilitate recovery for people with or at risk for substance abuse and mental illness. SAMHSA’s resources and programs are designed to expand service capacity and improve service and infrastructure to address prevention and treatment gaps. SAMHSA directly supports state and local service systems and funds activities to improve practice through grants and contracts. 
SAMHSA’s Center for Substance Abuse Treatment provides national leadership to expand the availability of effective treatment and recovery services for alcohol and drug problems, and to improve access, reduce barriers, and promote high-quality, effective treatment and recovery services for people with substance abuse problems and their families and communities. Under federal law and regulations, drugs must be approved by FDA before they can be marketed in the United States. The agency reviews new drug applications to determine whether they provide sufficient evidence that the drug is safe and effective for the proposed use. In approving a drug, FDA may require that the drug be dispensed only by prescription from a licensed practitioner. Because some risks may not become known until after a drug’s approval and use in a wider segment of the population, FDA has certain postmarket oversight responsibilities once a drug is approved, such as assessing sponsors’ compliance with requirements for adverse event reporting. The agency compiles data from sponsor reports on adverse events and voluntary reports submitted to its MedWatch program, through which health professionals and consumers can report adverse reactions and other problems related to FDA-approved drugs. In addition, as of 2008, if FDA identifies postmarket safety concerns, the agency may take specific actions such as requiring drug manufacturers to make safety-related changes to a drug’s labeling and requiring drug manufacturers to implement a Risk Evaluation and Mitigation Strategy when necessary to ensure that the benefits of a particular drug outweigh the risks. Death investigations in the United States are typically conducted by a county, district, or state coroner system or a medical examiner system. These systems investigate deaths due to external causes, such as injury or poisoning; sudden and unexplained deaths; and deaths that occur under medical care. 
Most coroners are elected, and they may not be physicians. In contrast, medical examiners are usually appointed and are, with few exceptions, required to be physicians and are often pathologists or forensic pathologists. The registration of deaths varies by state. Death certificates can be completed by funeral directors, attending physicians, medical examiners, or coroners and contain such information as the deceased person’s age, sex, and race; the circumstances and cause of death; and the signature of the physician, medical examiner, or coroner. Each disease, abnormality, injury, or poisoning that the medical examiner or coroner believes contributed to the death generally is reported. The original records are filed in state registration offices. Statistical information is compiled in a national database through the National Vital Statistics System by CDC’s National Center for Health Statistics. From these data, monthly, annual, and special statistical reports are prepared for the United States and for cities, counties, states, and regions by various characteristics, such as sex, race, and cause of death. However, statistical data derived from death certificates can be no more accurate than the information provided on the certificate. For example, causes of death on the death certificate reflect a medical opinion that might vary among the individuals completing the certificates. Methadone is regulated as a controlled substance, under federal and state laws and regulations, when used for pain management and addiction treatment. When methadone is used for pain management, it is regulated under federal and state laws and regulations that apply to controlled substances generally and that do not impose requirements unique to methadone. For addiction treatment, however, federal and state laws and regulations impose additional requirements that are specific to the use of methadone. 
The use of methadone for pain management is regulated under federal and state laws and regulations that apply to controlled substances generally and that do not impose requirements unique to methadone. DEA has certain authorities, under the Controlled Substances Act, to regulate the use of methadone for pain management, as part of its oversight for controlled substances. For example, practitioners must register with DEA in order to dispense, administer, or prescribe Schedule II through V controlled substances. The Controlled Substances Act and implementing regulations also require that Schedule II controlled substances, including methadone, only be dispensed by pharmacists upon a written prescription, which must be issued for a legitimate medical purpose by registered practitioners acting in the usual course of professional practice. In addition, DEA uses its data on the distribution of methadone and other controlled substances to identify retail-level registrants, such as pharmacies, that receive and dispense unusual quantities of these drugs. Under these authorities, DEA may take action against practitioners that prescribe or dispense controlled substances, including methadone, without a legitimate medical purpose; sanctions include suspension or revocation of DEA registration. The use of methadone for pain management is also regulated under state law and regulations that apply to controlled substances generally. In the states we reviewed, some of these requirements were similar to provisions of the federal Controlled Substances Act. For example, these states require, in general, that Schedule II controlled substances may only be dispensed by pharmacists upon a written prescription of a practitioner. States also may impose requirements beyond what is required under the federal Controlled Substances Act. For example, officials in Maine said that they require the use of tamperproof prescription notepads when writing prescriptions for Schedule II drugs. 
However, none of the laws and regulations in the five states we reviewed had any provisions specific to methadone when used for pain management other than provisions that generally apply to all Schedule II controlled substances. Because states regulate the practice of medicine and pharmacy, controlled substances that are prescribed, administered, or dispensed by state-licensed practitioners are also generally regulated under these state laws and regulations. For example, in the states we reviewed, physicians must be licensed by their state boards of medicine in order to engage in the practice of medicine, which includes the prescribing of drugs. Similarly, pharmacists must be licensed by their state boards of pharmacy in order to engage in the practice of pharmacy, which includes the dispensing of prescription drugs. Under this authority, the state medical boards and state boards of pharmacy oversee the regulation of the practice of medicine and pharmacy, respectively. As part of this oversight, these boards or other related state agencies may investigate complaints about practitioners, discipline practitioners that violate applicable laws or regulations, and facilitate rehabilitation of practitioners when appropriate. States and professional licensing boards may further regulate the prescribing or dispensing of controlled substances for the treatment of pain. According to the Federation of State Medical Boards, a number of states have implemented standards for the use of controlled substances for pain, including the five states we reviewed. Some of these states have based these standards on the model policy for use of controlled substances for the treatment of pain published by the Federation of State Medical Boards. For example, Florida law expressly provides that physicians may prescribe or administer controlled substances for the treatment of intractable pain. 
Under its regulations, Florida’s Board of Medicine and Board of Osteopathic Medicine also impose standards that include steps physicians must take prior to prescribing controlled substances for pain. Although methadone is subject to federal and state requirements that apply to controlled substances, there are additional requirements specific to methadone when used for addiction treatment. For example, as part of its enforcement responsibilities under the Controlled Substances Act, DEA has the authority to regulate the use of methadone for addiction treatment. Under the act, OTPs must register with DEA in order to dispense or administer methadone for addiction treatment, and there are three conditions for this registration. Under the first condition of this registration, DEA must determine that the OTPs will appropriately secure stocks of methadone and maintain appropriate records. DEA officials informed us that they also inspect OTPs in order to ensure that OTPs are maintaining proper security, safety, and storage of methadone and other narcotic drugs used for addiction treatment. DEA officials said that inspections are conducted every 3 years, and there are a series of graduated penalties if OTPs are not in compliance, including suspension or revocation of OTP registration. If DEA suspends or revokes a registration, that OTP would be unable to purchase, administer, or dispense methadone to OTP patients for addiction treatment. As a second condition of DEA registration for OTPs, SAMHSA must determine that OTPs are qualified to engage in methadone maintenance treatment for addiction. Federal opioid treatment regulations define SAMHSA’s standards for determining whether OTPs are qualified. Specifically, such standards include requiring OTPs to obtain a current, valid certification from SAMHSA to dispense methadone for addiction treatment. 
To obtain certification, an OTP must have a current, valid accreditation by an accreditation body, such as the Joint Commission or other entity designated by SAMHSA. An OTP also must comply with a number of other requirements for certification established by SAMHSA. These other requirements include maintaining a diversion control plan that contains specific measures to reduce the possibility of diversion of methadone from legitimate treatment to illicit use and ensuring that all licensed professional care providers comply with the credentialing requirements of their respective professions. The third condition of DEA registration for OTPs requires SAMHSA to determine that OTPs will comply with standards regarding unsupervised take-home doses of methadone. SAMHSA has established specific criteria for unsupervised take-home doses of methadone under federal regulations for OTPs. (See table 1.) These criteria were established to limit the potential for diversion of methadone to illicit uses. OTPs are also required to maintain procedures for take-home doses of methadone that will allow identification of the theft or diversion of these doses, such as by labeling containers with the OTP’s name, address, and telephone number. For additional information on select aspects of the federal regulations relating to OTPs, see appendix II. States may also regulate the use of methadone for opioid addiction treatment under state laws and regulations, which may be equal to or stricter than federal standards. For example, while federal regulations do not specify the days or hours that OTPs must be open, regulations in three of the states we reviewed—Kentucky, Maine, and West Virginia—require that OTPs be open 7 days a week. Further, states may implement drug testing requirements that are stricter than the federal standard of at least eight random drug abuse tests per year for each patient in maintenance treatment. 
For example, in Maine, drug tests on OTP patients must be conducted at least every 30 days unless the individual treatment plan indicates that drug testing should be done more frequently. Appendix II compares OTP regulations in the five states we reviewed. Each state with OTPs also has a state agency or official designated to oversee opioid treatment in that state. Each of the five states we reviewed had an official designated as the state methadone authority or state opioid treatment authority, although responsibilities for this position varied and these officials had other duties in addition to overseeing the state’s OTPs. For example, a state official told us that the primary responsibilities of the State Methadone Authority in West Virginia were to approve or disapprove OTP patients’ requests for exceptions to methadone take-home policies and to receive and refer patient appeals and grievances to the designated state oversight agency. In contrast, a state official explained that the State Opioid Treatment Authority in New Mexico, in addition to approving or disapproving take-home exception requests, has initiated activities such as site audits of the eight existing OTPs in the state to ensure compliance with state regulations. These officials also have quarterly conference calls with SAMHSA and their counterparts from other states to discuss issues regarding OTPs and best practices. Although information on methadone-associated overdose deaths is limited, available data suggest that methadone’s growing use for pain management has increased the availability of the drug, thereby contributing to the rise in methadone-associated overdose deaths. Lack of knowledge about the drug’s unique pharmacological properties among practitioners and patients as well as abuse of diverted methadone also appear to have contributed to these deaths. 
State data and research support the idea that lack of knowledge and abuse of diverted methadone contributed to deaths, but also suggest that the specific circumstances of these deaths are variable. The growing availability of methadone through its increased use for pain management is a contributing factor to the rise in methadone-associated overdose deaths. DEA data show that from 2002 to 2007, distribution of methadone to business types associated with pain management—pharmacies and practitioners—almost tripled, rising from about 2.3 million grams to about 6.5 million grams. In contrast, distribution to OTPs increased more slowly, from about 5.3 million grams to about 6.5 million grams. See table 2 for the numbers for methadone distribution to four business types from 2002 through 2007. Similarly, data from IMS Health, a private company that tracks prescription drug trends, showed that from 1998 through 2006 the number of annual prescriptions of methadone for pain increased by about 700 percent, from about 531,000 in 1998 to about 4.1 million in 2006. Most officials from federal and state agencies, as well as experts in addiction treatment and pain management that we spoke with, cited the increased availability of methadone due to its use for pain management as a key factor in the rise in deaths, while some added that addiction treatment in OTPs was not related to increased deaths. Federal officials and experts in epidemiology, pain management, and addiction treatment at SAMHSA’s National Assessment of Methadone-Associated Mortality in 2003 also acknowledged a correlation between the increased distribution of methadone through pharmacies for pain management and the increase in methadone-associated overdose deaths and reached consensus that the increase in these deaths was not associated with addiction treatment in OTPs. 
Additionally, in 2006 CDC researchers suggested that the increase in deaths involving methadone was related to physicians increasingly prescribing the drug for pain. The researchers reported that the increase in deaths tracked the increase in methadone used for pain management rather than its use in OTPs. Many officials and experts we spoke with cited the publicity surrounding the increased abuse and diversion of the drug OxyContin in the early 2000s as a reason for the increased prescribing of methadone for pain. A November 2007 report by the National Drug Intelligence Center (NDIC) also noted that following increases in OxyContin addiction and death rates, many practitioners began using methadone instead to manage pain. Lack of knowledge about the unique pharmacological properties of methadone by both practitioners and patients has also been identified as a factor contributing to methadone-associated overdose deaths. Background information prepared for SAMHSA’s 2007 Methadone Mortality Reassessment meeting noted that physicians need to understand methadone’s pharmacology as well as specific indications and cautions before using it to treat pain or addiction. FDA has issued similar statements, adding that practitioners should closely monitor patients when starting treatment with methadone or converting patients to methadone from other opioids. Some experts we spoke with advocated a “start low and go slow” approach with methadone. Additionally, some pain management specialists we interviewed warned that practitioners following conversion tables, which are commonly used to switch patients to methadone from other drugs, may start some patients on too high a dose. The specialists explained that using these conversion tables for patients who have already developed a tolerance for other opioids may be lethal, because tolerance for other opioids is not equivalent to tolerance for methadone. 
Several sources have also cited inadequate training among some practitioners. NDIC reported in 2007 that some general practitioners and novice pain management specialists may lack the training to adequately monitor patients to whom they prescribe methadone. A 2005 survey by the National Center on Addiction and Substance Abuse found that less than half of surveyed physicians (48 percent) had received instruction in pain management while in medical school. DEA also noted that several of the top prescribers of methadone have been practitioners with specialties not generally associated with extensive training in pain management. Reports based on SAMHSA’s 2003 National Assessment of Methadone-Associated Mortality and 2007 Reassessment recommended better training for practitioners in how to manage pain and addiction. Many experts, representatives of national associations, and state officials we spoke with agreed that more training in both pain and addiction treatment and about methadone’s unique properties is needed for medical professionals. However, opinions varied about whether such training should be optional (e.g., offered for continuing education credit) or mandatory (e.g., required for license renewal). Insufficient patient education has also been cited as contributing to methadone-associated overdose deaths. Patients may not understand how methadone works, including that it can stay in the body long after the pain returns. As a result, these patients might take methadone more frequently than prescribed to manage their recurring pain, risking overdose as the drug builds to toxic levels in their bodies. Unaware of potentially lethal drug combinations, patients might also take methadone with other drugs, including antianxiety drugs and other opioids, or alcohol. Data suggest that abuse of diverted methadone is also contributing to the rise in methadone-associated overdose deaths. 
Increased thefts as well as seizures of methadone by law enforcement indicate that more diverted methadone is available for potential abuse. DEA tracks drug abuse, including the diversion of legally manufactured drugs such as methadone into the illegal market, through its National Forensic Laboratory Information System, which collects the results of state and local forensic laboratories’ analyses of drugs seized as evidence by law enforcement agencies. The DEA data on national estimates of the most frequently analyzed drugs seized by law enforcement from 2001 through 2007 showed that the number of methadone drug items analyzed by state and local labs increased 262 percent, though the estimated number was smaller than that of some of the other drugs (see table 3). In 2007, DEA reported that per prescription, methadone was more likely to be diverted and abused than either hydrocodone or oxycodone based on its analysis of data from the National Forensic Laboratory Information System and IMS Health. Likewise, DEA data on drug theft and loss showed that methadone thefts nationwide more than doubled, from 176 in 2000 to 393 in 2007. For the five states we reviewed, the data showed that most thefts were reported from pharmacies, while no thefts were reported from OTPs in four of these states during the same time period. Federal and state officials told us that abuse of prescription drugs, including methadone, has become more of a problem in recent years than abuse of illicit drugs, such as heroin or cocaine. Officials from ONDCP said that overall opioid drugs are being increasingly diverted and abused, while abuse of illicit drugs is decreasing. SAMHSA’s National Survey on Drug Use and Health provides some additional information about where those who are abusing prescription pain relievers, such as methadone, obtain their drugs. 
According to the 2007 survey, among the estimated 5.2 million persons aged 12 or older who reported using prescription pain relievers nonmedically in the past 12 months, 56.5 percent said they got the drugs from a friend or relative, another 18.1 percent reported that they got the drug from just one doctor, 4.1 percent reported that they got the pain relievers from a drug dealer or other stranger, and 0.5 percent reported buying the drug on the Internet. Data and research regarding methadone-associated overdose deaths in the five states we reviewed support the idea that lack of knowledge and abuse of diverted methadone contributed to deaths, but also suggest that the circumstances under which people are dying are variable. Specifically, state data and research show that death circumstances, such as the source of the drug and the most commonly detected other drugs, may vary by state. Furthermore, participants at SAMHSA’s 2007 Methadone-Associated Mortality Reassessment concurred that the circumstances of methadone-associated overdose deaths vary by state. While research suggests that the source of methadone for those who die from overdose deaths is often unknown, available information indicates that there are three distinct populations who are dying: individuals with a prescription for methadone; individuals undergoing methadone maintenance treatment in OTPs; and individuals who obtained methadone from some other source, such as diversion. However, generally more of those who died had a prescription for methadone or obtained it through diversion rather than receiving methadone for addiction treatment in an OTP. For example, a Kentucky study of deaths from 2000 to 2004 found that of the 95 deaths for which coroners documented methadone use, 48 percent of those who died had a physician’s prescription for methadone, 20 percent obtained methadone through illicit means, 22 percent obtained methadone through unknown means, and 10 percent had received treatment in OTPs. 
Coroners’ investigations also documented that one-third of the victims had been undergoing pain management. A New Mexico study of unintentional methadone-associated overdose deaths from 1998 to 2002 found that although a much larger percentage of deaths were related to methadone maintenance treatment than in the other states we reviewed, more deaths overall were linked with prescriptions for methadone. Specifically, of the 79 methadone-associated overdose deaths for which a source of methadone was available, 39 percent of the deceased had methadone because they were undergoing methadone maintenance treatment, while 47 percent had a prescription for methadone. See appendix III for a summary of the findings of research studies in the five states we reviewed. In addition, data and research from the five states we reviewed show that methadone is often found in combination with other drugs or alcohol, suggesting either a lack of knowledge about the dangers of combining methadone with other drugs or abuse of methadone. In Florida, for example, of the 1,095 methadone-associated overdose deaths in 2007, 124 were caused by methadone alone, while 971, or about 89 percent, were caused by methadone in combination with other drugs. The Kentucky study found that only 6 percent of the 176 methadone-associated overdose deaths were caused by methadone alone; other frequently detected drugs included antidepressants, benzodiazepines, and other opioids. The New Mexico study showed somewhat different results, finding that of the 143 methadone-associated overdose deaths, 22 percent were due to methadone alone, 24 percent to methadone and prescription drugs (no illicit drugs), 50 percent to methadone and illicit drugs, and 4 percent to methadone and alcohol. 
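The Florida figures above are internally consistent; the short sketch below (a verification aid using only the rounded counts from the text, not part of any state’s analysis) confirms that deaths involving methadone in combination with other drugs account for about 89 percent of the 2007 total.

```python
# Florida, 2007: methadone-associated overdose deaths (counts from the text).
total_deaths = 1_095
methadone_alone = 124
with_other_drugs = 971

# The two categories together account for every death in the total.
assert methadone_alone + with_other_drugs == total_deaths

# Share of deaths in which methadone was combined with other drugs.
share = with_other_drugs / total_deaths
print(f"combination-drug share: {share:.1%}")  # prints "combination-drug share: 88.7%"
```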
Education, safety, and monitoring efforts have been implemented to prevent methadone abuse and methadone-associated overdose deaths— either specifically or as part of broader efforts to prevent prescription drug abuse and deaths—by various federal agencies, states, and other organizations. Educational efforts include physician training on the appropriate use of methadone to treat pain and opioid addiction, and public education campaigns about the dangers of methadone and other prescription drugs. Steps taken to improve the safety of using methadone include limiting the distribution of a high-dosage methadone tablet intended only for use in addiction treatment. In addition, states may monitor prescriptions of controlled substances as well as OTP patient enrollment through statewide registries. Because lack of knowledge of methadone’s unique properties has contributed to overdose deaths, federal and state officials and other experts agreed that more education is needed for practitioners and the public about how to safely use methadone and avoid its potential dangers. A number of efforts to educate practitioners and the public about how to use methadone and other prescription drugs safely have been initiated by federal agencies, states, and other organizations. Some of these efforts target methadone specifically while others are more broadly focused on using opioids to treat pain and preventing prescription drug abuse. Some officials and experts we spoke with cautioned that methadone is part of a larger problem of prescription drug abuse, and that prevention efforts focused on methadone alone might have the unintended consequence of shifting similar problems to a different drug—much like what occurred with methadone following reports of abuse and diversion of OxyContin. 
In August 2008, SAMHSA announced a 3-year grant of $1.5 million to the American Society of Addiction Medicine to educate physicians and other practitioners on the appropriate use of methadone to treat pain and opioid addiction. The grant is to establish the Physician Clinical Support System for methadone, offering free support to prescribing physicians and other practitioners. SAMHSA reported that this system would include mentoring support, observation of practice, and consultative services by phone and e-mail. The system would also inform prospective practitioners through a Web site and published resources about science-based best practice guidelines for treating opioid addiction. According to SAMHSA, this initiative would aim to address the rise in methadone-associated overdose deaths spurred by misuse and abuse. SAMHSA also has several current or planned educational initiatives focusing on OTPs. A risk management course for practitioners will focus on the safe use of methadone in OTPs, with a special emphasis on the beginning of treatment. The course will also educate OTP practitioners about using other drugs in conjunction with methadone, including benzodiazepines. In addition, SAMHSA is working with two work groups that include academic experts, medical associations, medical researchers, and other federal agencies, such as FDA, to develop additional methadone-specific guidelines. One work group is reviewing available information to develop best practices on methadone and cardiac issues. The guidance will help practitioners identify patients at risk for cardiac arrhythmias that may be exacerbated by methadone, and provide information on how to monitor those patients for ongoing risks. Another work group is reviewing methadone interactions with other common drugs, including HIV drugs. The ensuing guidelines will help practitioners safely treat patients who may be receiving several drugs simultaneously. 
SAMHSA is also collaborating with FDA on a consumer education campaign designed to increase awareness of the potential for serious, life-threatening side effects in patients taking methadone for pain management or addiction treatment. FDA reports that the multimedia educational campaign would target OTPs and patients, pharmacies dispensing methadone, practitioners, and the public. According to FDA, materials developed for the campaign would include a brochure or flyer, fact sheets, podcasts, and online information. SAMHSA reported that the materials were scheduled to be finished by April 2009. SAMHSA, along with the American Academy of Pain Medicine, the American Academy of Family Physicians, and other medical organizations, has developed a continuing medical education course on how to safely use prescription pain relievers to treat pain. As of February 2009, 20 courses had been taught in 16 states. SAMHSA reported that 12 courses would be taught in 2009 and that priority would be given to states or regions with high per capita rates of opioid-associated overdoses and deaths. A brief Web-based version of the course has been developed in collaboration with Medscape, an online medical education company. Further, a five-module online version of the course will be posted on SAMHSA’s Web site and disseminated through medical organizations and medical schools that offer continuing education credits. The Federation of State Medical Boards developed a book, Responsible Opioid Prescribing: A Physician’s Guide, in collaboration with a national pain expert. The book includes strategies for treating chronic pain and reducing the risk of addiction, abuse, and diversion. The federation intends to distribute the book through state medical boards, and reports that medical boards can customize the book to include state-specific statutes, regulations, and guidelines. 
The federation reported that more than 60,000 copies of the book had been distributed in 13 states as of October 2008, and it intends to expand distribution in 2009 as more funds are raised. Officials in the five states we reviewed told us about their states’ efforts to educate practitioners and the public about controlled substances and drug abuse. For example, Kentucky’s Operation UNITE (Unlawful Narcotics Investigations, Treatment, and Education) works to rid communities of illegal drug use, coordinate treatment and support for substance abusers, and educate the public on the dangers of drug use. New Mexico’s Board of Pharmacy participates in the New Mexico Pain Policy Initiative, which educates practitioners on requirements for prescribing controlled substances and for New Mexico’s Prescription Drug Monitoring Program. In West Virginia, law enforcement officials, physicians, and others have formed a controlled substance advisory board to address prescription drug abuse. One of its projects is to educate patients on the dangers of diversion by providing them with information when they pick up their prescriptions from pharmacies and to educate physicians about how to reduce doctor shopping. In addition to the five states we reviewed, Utah has also experienced rising prescription drug overdose deaths and has implemented two notable campaigns intended to reduce deaths and other harm from prescription drugs. The Use Only as Directed campaign educates the public about protecting themselves from unintentional overdose deaths. The Zero Unintentional Deaths campaign educates physicians, chronic pain sufferers, and communities about unintentional overdose deaths from prescription drugs. Federal agencies have also begun educating the public about prescription drug abuse. 
For example, in January 2008, ONDCP, through its National Youth Anti-Drug Media Campaign, launched a national television, print, and online advertising campaign to educate parents about teen prescription drug abuse. The campaign includes tips for preventing teen prescription drug abuse, such as safeguarding all drugs at home, monitoring drug quantities, and properly concealing and disposing of old or unused medicines in the trash. SAMHSA also piloted a public education program called SMART Rx in which participating pharmacies inform customers about controlled substances when they fill prescriptions. The information provided covers the risks and dangers associated with the opioid or benzodiazepine medication, steps to keep the medication from adolescents, and safe disposal of unused medications. SAMHSA reported that evaluations suggested that consumers found the content useful and kept the information for future reference or shared it with someone else. Both DEA and FDA have taken steps to improve the safety of using methadone. DEA officials told us that the agency formed a methadone mortality working group in 2006 to review information related to the increases in methadone-associated overdose deaths, including DEA methadone distribution data and data from other federal agencies, such as FDA and CDC. The data showed that methadone 40 mg diskettes had been increasingly prescribed for pain, despite being FDA-approved only for addiction treatment in OTPs, a practice described as off-label prescribing. Some experts said that if prescribed for pain to a person without a tolerance to opioid drugs, an initial dose of 40 mg could potentially be deadly. DEA data show that distribution of the methadone 40 mg diskettes to retail pharmacies increased almost sixfold, from about 350,000 grams in 2002 to about 2 million grams in 2007. At the same time, distribution to OTPs fell slightly, from about 1.4 million grams to 1.2 million grams. 
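The distribution figures above can be checked arithmetically; this brief sketch (an illustration using the rounded gram totals from the text) confirms the "almost sixfold" growth in retail-pharmacy distribution alongside the slight decline in distribution to OTPs.

```python
# DEA distribution of methadone 40 mg diskettes, in grams
# (rounded figures as reported in the text).
retail_2002, retail_2007 = 350_000, 2_000_000
otp_2002, otp_2007 = 1_400_000, 1_200_000

retail_growth = retail_2007 / retail_2002      # ratio of 2007 to 2002
otp_change = (otp_2007 - otp_2002) / otp_2002  # fractional change over the period

print(f"retail growth: {retail_growth:.1f}x")  # prints "retail growth: 5.7x"
print(f"OTP change: {otp_change:.0%}")         # prints "OTP change: -14%"
```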
Following their review, DEA officials told us that they became concerned about the diskette’s increased use for pain management. DEA then worked with methadone manufacturers, which agreed to voluntarily restrict distribution of the diskettes to only OTPs and hospitals. Because the restriction began January 1, 2008, it is too soon to determine any effect on methadone-associated overdose deaths. DEA reported that it would continue monitoring methadone distribution and prescription data to evaluate the impact of the initiative. In November 2006, FDA approved a revised label for methadone 5 mg and 10 mg tablets that included new safety information regarding using methadone for pain, such as warnings about life-threatening adverse events and modified dosage instructions. The revised label states that methadone can cause slow or shallow breathing and dangerous changes in heartbeat that may not be felt by the patient. The new dosage instructions for methadone prescribed for pain state that the usual initial dose should be 2.5 mg to 10 mg taken every 8 to 12 hours, or a maximum daily dose of 30 mg, slowly adjusted for effect. The previous instructions allowed initial total daily doses up to 80 mg a day, which several experts said could be hazardous or even deadly. As FDA approved the revised methadone label, it also issued a public health advisory for health care professionals and patients, stating that prescribing methadone is complex and that it should only be prescribed for patients with moderate to severe pain when their pain is not improved with other non-narcotic pain relievers. The advisory noted that FDA had received reports of life-threatening side effects and death in patients taking methadone, both those newly starting methadone for pain control and those who have switched to methadone after being treated for pain with other strong narcotic pain relievers. 
Additionally, in February 2009, FDA sent letters to manufacturers of certain opioid drugs, including methadone, indicating that these drugs will be required to have a Risk Evaluation and Mitigation Strategy to ensure that the benefits of the drugs continue to outweigh the risks. In the first of a series of meetings, FDA invited those companies that market the affected opioid drugs to a meeting with the agency in March to discuss strategy development. Additional steps will include discussions with other federal agencies, patient and consumer advocates, representatives of the pain and addiction treatment communities, health care professionals, and other interested parties. FDA is planning a public meeting in late spring or early summer to allow for broader public input and participation in this process. States may monitor prescriptions for controlled substances and OTP patient enrollment to prevent abuse and diversion. Prescription drug monitoring programs facilitate the collection, analysis, and reporting of information about the prescribing, dispensing, and use of controlled substances such as methadone. DEA reported that as of February 2009, 31 states had operational prescription drug monitoring programs to help prevent abuse and diversion of controlled substances, including methadone, and 4 of the 5 states we reviewed had operational programs. These programs may provide information to practitioners on patients and to other entities, such as licensing boards, on prescribing and dispensing practices of practitioners, or state law enforcement and regulatory agencies, to assist in identifying and investigating activities potentially related to the illegal prescribing, dispensing, and procuring of controlled substances. According to the Alliance of States with Prescription Monitoring Programs, states have found that these programs are among the most effective tools to identify and prevent drug diversion at the practitioner, pharmacy, and patient levels. 
CDC officials told us that they have a study under way to evaluate the impact of state prescription drug monitoring programs on drug overdose deaths. Prescription drug monitoring programs may vary in ways such as what data must be submitted and who has access to the information. For example, West Virginia allows authorized agents of the state police and federal law enforcement agencies to access prescription monitoring data. In contrast, access in Maine is more limited: law enforcement officials can access prescription monitoring data only by grand jury subpoena for cases they are currently investigating. See appendix IV for a comparison of some of the characteristics of the four prescription drug monitoring programs we reviewed. However, prescription drug monitoring programs have limitations. Their usefulness depends on practitioners using the programs, and in the four states we reviewed with prescription drug monitoring programs, practitioners’ use was not widespread, according to state officials. In addition, not every state has a prescription drug monitoring program, and state officials we spoke with said that people would sometimes cross state borders to obtain prescription drugs in a state without a program. Furthermore, while DEA reports that several states’ programs have the capability of generating reports on out-of-state prescribers or patients, states do not routinely disseminate this information to other states. Another limitation of prescription drug monitoring programs mentioned by state officials was the lack of patient data on methadone dispensed in OTPs and from federal facilities such as Department of Veterans Affairs’ hospitals or Indian Health Service facilities. 
Therefore, in the four states we reviewed with a prescription drug monitoring program, data on any prescription drugs received by patients at these types of facilities would not be captured by the program. States may also create systems to monitor the population of patients enrolled in OTPs. To monitor their patients, OTPs in Florida created a central registry designed to ensure that patients do not enroll in multiple OTPs within the state, thus preventing patients from receiving unsafe levels of methadone or additional methadone that could be diverted. Officials said that each OTP patient is given a unique identifier and a picture is taken and entered into the registry. All Florida OTPs, both nonprofit and for-profit, use the registry, according to state officials. We provided a draft of this report to HHS and the Department of Justice for their review. We received general comments from HHS. (See app. V.) HHS clarified that FDA approved methadone as safe for pain management in 1947, when federal law required only that new drugs be shown to be safe. When the law was amended in 1962 to impose additional requirements for the approval of new drugs, FDA retrospectively reviewed the efficacy of methadone for the treatment of pain. In addition, HHS stated that an attempt by FDA to restrict the distribution of methadone for pain was struck down by a court in the 1970s. HHS also reiterated that FDA recently sent letters to the manufacturers of certain opioid drug products, including methadone, indicating that these drugs will be required to have a Risk Evaluation and Mitigation Strategy to ensure that the benefits of the drugs continue to outweigh the risks. Both HHS and the Department of Justice provided technical comments on a draft of this report, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the Acting Secretary of Health and Human Services, the Attorney General, and others. The report also will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. To examine the regulation of methadone for pain management and addiction treatment, we reviewed relevant codified federal statutes and regulations pertaining to the prescribing, administering, or dispensing of methadone for pain management and addiction treatment. Our review was limited to relevant provisions of the Controlled Substances Act and implementing regulations, including Department of Justice, Drug Enforcement Administration (DEA), and Substance Abuse and Mental Health Services Administration (SAMHSA) regulations. We also examined federal case law and relevant federal agency policies, including DEA’s policy statement on dispensing controlled substances for the treatment of pain. We interviewed officials at relevant federal agencies, including SAMHSA and DEA. We also interviewed officials and reviewed information from relevant national associations, including the Federation of State Medical Boards. In addition, we interviewed officials and examined relevant codified statutes and regulations in five selected states. The states we reviewed were Florida, Kentucky, Maine, New Mexico, and West Virginia. 
Each of the states met four or more of the following criteria:
- a top 10 rate of increase in methadone-associated overdose deaths,
- a top 10 number of methadone-associated overdose deaths per 1,000,000 population,
- a state or district medical examiner system,
- an operational prescription drug monitoring program, and
- state-focused, methadone-specific research.
We examined how the prescribing, administering, or dispensing of methadone for pain management and addiction treatment were regulated in our five selected states. We only examined codified state statutes and regulations containing requirements relating to the administering or dispensing of methadone in opioid treatment programs (OTP) and the prescribing, administering, or dispensing of controlled substances to individuals for medical purposes, including pain management. In each state we interviewed officials from the state agency with oversight of OTPs, state boards of medicine and pharmacy, and law enforcement officials. The findings from our review of these five states cannot be generalized to all states. To determine the factors contributing to the increase in methadone-associated overdose deaths in recent years, we interviewed officials from the Centers for Disease Control and Prevention (CDC), DEA, the Food and Drug Administration (FDA), the National Drug Intelligence Center (NDIC), the National Institutes of Health, the Office of National Drug Control Policy (ONDCP), and SAMHSA. We also interviewed professional association officials from the American Association for the Treatment of Opioid Dependence, the Federation of State Medical Boards, HARMD (Helping America Reduce Methadone Deaths), the National Alliance of Methadone Advocates, the National Association of Boards of Pharmacy, the National Association of Medical Examiners, the National Association of State Alcohol and Drug Abuse Directors, and the National Association of State Controlled Substances Authorities. 
In addition, we interviewed pain management, addiction treatment, and forensic science experts. We reviewed national reports, including reports based on the 2003 SAMHSA Methadone Mortality Assessment and 2007 Reassessment and a November 2007 NDIC report on methadone mortality. We reviewed CDC methadone poisoning death data from the National Vital Statistics System, which tabulates information reported on death certificates. We interviewed CDC officials to obtain information about the reliability of their methadone mortality data, including how CDC ensures the quality of the data and any data limitations. In addition, we interviewed officials in our five selected states’ medical examiners’ offices about the factors contributing to the increase in methadone-associated overdose deaths in their states. We also reviewed state data and studies from our five selected states and interviewed researchers about their efforts to investigate methadone-associated overdose deaths in their states. However, because there is no standard definition for what constitutes a methadone-associated overdose death, there may be some variation in how states define this term and how they report these numbers. For example, data from Florida distinguish whether methadone was simply present or was the cause of death, but not all states make this distinction. Defining methadone’s role in a death can also be complicated by inconsistencies in determining and reporting causes of death, by the presence of other drugs, and by the absence of information about the source of methadone and the deceased person’s level of opioid tolerance. Results from these five state studies cannot be generalized to other states. We reviewed relevant DEA data, including Automation of Reports and Consolidated Orders System (ARCOS) data, DEA National Forensic Laboratory Information System (NFLIS) data, and DEA Theft and Loss data. 
ARCOS is an automated drug reporting system that monitors the flow of DEA controlled substances from the point of manufacture through commercial distribution channels to the point of retail sale or distribution, including hospitals, retail pharmacies, practitioners, midlevel practitioners, and teaching institutions. ARCOS summarizes these transactions into reports, which give federal and state government investigators information that can then be used to identify the diversion of controlled substances into illicit channels of distribution. NFLIS systematically collects results from solid dosage drug analyses conducted by state and local forensic labs across the country. NFLIS provides information for monitoring and understanding drug abuse and trafficking involving both controlled and noncontrolled substances in the United States, including the diversion of legally manufactured drugs into illegal markets. As of March 2007, 44 state lab systems and 94 local lab systems, comprising 274 individual labs, were participating. Because NFLIS is a voluntary reporting system and the number of participating state and local laboratories has changed over time, DEA officials recommended that we report the NFLIS national estimates of analyzed drug items, instead of the actual numbers, to show trends over time, which they said would not be affected by the number of labs participating each year. DEA’s national estimates are calculated every year based on a national sample model of state and local laboratories. DEA officials told us they began producing national estimates in 2001. DEA’s Theft and Loss database collects data on theft and loss of controlled substances by number of thefts; drug and dosage forms; business type, including pharmacies, hospitals, and manufacturers; and type of theft, such as night break-in or armed robbery. DEA registrants are required to report theft and loss of controlled substances to DEA. 
Although the database contains information on the forms of controlled substances lost or stolen, DEA officials told us there is no standard liquid dosage unit that would allow us to provide the total volume of liquid methadone stolen; therefore, we did not report thefts by form of methadone. We interviewed DEA officials to learn about data collection; quality control, such as edit checks; and any limitations of these databases. We determined that these three sources of data were sufficiently reliable for use in this report, and included any limitations identified. To determine steps taken to prevent methadone-associated overdose deaths, we interviewed officials at relevant federal agencies, including CDC, DEA, FDA, ONDCP, and SAMHSA. We also interviewed officials from relevant national associations, including the Federation of State Medical Boards, National Association of Medical Examiners, and National Association of State Controlled Substances Authorities, and experts in pain management, addiction treatment, and forensic science. In addition, we reviewed relevant studies and reports about efforts to prevent methadone and other prescription drug overdose deaths and interviewed officials in our five selected states to learn about initiatives in their states. To obtain additional information about prescription drug monitoring initiatives, we examined relevant codified statutes and regulations in our five selected states. We only reviewed codified state statutes or regulations relating to systems that monitor the prescribing of controlled substances. The findings from our review of these five states cannot be generalized to all states. Because many efforts under way to prevent methadone-associated overdose deaths are new, their effectiveness has not yet been evaluated. Also, because we interviewed experts, officials from select organizations, and state officials, our findings do not represent all efforts to prevent these deaths. 
Finally, because methadone is part of a larger problem of prescription drug abuse and overdose deaths, many of the efforts are not focused on methadone alone. We conducted our work from November 2007 through February 2009 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions. The following table describes selected requirements of the federal OTP regulations, as well as selected requirements of the OTP regulations in Florida, Kentucky, Maine, New Mexico, and West Virginia. Before a client receives an initial dose of methadone or other medication, physician must document current physiological addiction, history of addiction and exemptions from criteria for admission. (§ 65D- 30.014(4)(e)(1)) -Dose means a 1-day quantity of an approved controlled substance, administered on site, in not less than 1 fluid ounce of an oral solution, formulated to minimize misuse by injection. (1:340.1(7)) Initial doses of methadone must not exceed 30 mg unless the physician documents the need for a higher dose. (§ 19.8.6.5) -Initial dose must not exceed 30 mg. -If 30 mg does not reduce withdrawal symptoms, may provide additional 10 mg only if documented. -The initial full- day dose of medication shall be based on the physician’s evaluation of the history and condition of the patient. -If 40 mg does not reduce withdrawal symptoms, may provide additional dose only if documented. -Usual initial dose of methadone should be 20 to 30 mg. Reasons for exceeding an initial dose of 30 mg must be documented. -Proposed programs must include in their applications initial and daily dosage levels and daily dosage levels. 
(1:340.4(3)(x), (y)) -Subsequent doses are based on the patient’s needs. (§ 7.32.8.21(D)) -Medical record must indicate reason for dose changes and must be signed by the medical director or program physician. (1:340.6(5)) -Initial dose should not exceed 40 mg unless physician or prescribing professional documents that symptoms were not suppressed after a 3-hour period of observation. -Justification for daily doses above 100 mg must be documented. (§ 64-90-35) Must consider the following criteria in determining patient eligibility: -Phase 1: No program infractions for 90 consecutive days. -No evidence of recent drug abuse. -Absence of recent drug abuse, including alcohol abuse. -Cessation of illicit drug use. -Phase 2: No program -Results of drug tests must be reviewed and considered as part of the treatment planning process and decisions for take-home -Regularity of program attendance. -Regular attendance at OTP. dosing. (§ 19.8.6.6) -No serious behavioral problems at the OTP. -Length of time in comprehensive maintenance treatment. -No recent criminal activity. -Stable home environment and social relationships. infractions for 180 consecutive days; pursuing one of the following: gainful employment, vocational training, higher education, volunteer opportunities, or parenting classes if a stay- at-home parent. -All decisions regarding take- home privileges shall be documented in the client record and shall comply with the requirements cited in 42 C.F.R. pt. 8. (§ 19.8.10.1) -Length of time and level of treatment in medication therapy (ability to self-medicate). -Absence of known criminal activity. -Absence of recent criminal activity. -Sufficient length of time in treatment. -Absence of serious behavioral problems at the program. -Absence of serious behavioral problems. -Assurances that take-home medication can be stored safely. -Phase 3: No program infractions for 270 consecutive days; same entry requirements as for phase 2. 
-Special needs such as physical health needs.
-Absence of abuse of drugs, including excessive use of alcohol.
-Assurance that medication can be safely stored in the patient’s home.
-Satisfactory progress in treatment to warrant decreasing the frequency of attendance.
-Stability of patient’s home environment and social relationships.
-Other special needs of the patient, such as split dosing, physical health needs, pain treatment, etc.
-Capacity to safely store take-home medication.
-Verifiable source of legitimate income. (§ 65D-30.014(5)(d))
-Phase 4: Successful completion of phase 3 and adherence to program requirements for 2 consecutive years. (1:340.11)
-Patient’s work, school, and daily activity schedule.
-Hardship traveling to and from the program.
-Stability of the home environment and social relationships.
-Rehabilitative benefit outweighs the potential risk of diversion. (§ 7.32.8.23(B))
-Patient’s work, school, or other daily life activity schedule.
-Hardship in traveling to and from the program.

Take-home dose limits and schedules:
-Program physician may approve temporary unsupervised take-home doses for documented emergencies or other exceptional circumstances. (§ 64-90-41)
-Take-home medication must be available to all methadone clients during holidays, but only if clinically advisable. (§ 65D-30.014(4)(g))
-Take-home methadone shall be dispensed in liquid form only in single-dose containers, or in dry form only in multiple-dose containers.
-A patient in comprehensive maintenance treatment may receive a single dose of take-home medication for each day that a provider is closed.
-For the first 90 days of treatment: A single take-home dose for the week of each holiday that the clinic is closed.
-No take-home doses permitted during the first 30 days in treatment unless approved by the state authority.
-No take-home privileges during the first 90 continuous days of treatment.
-Clients in continuous treatment may qualify with negative drug screens as follows:
-During the first 90 days, one take-home dose per week maximum.
-91-180 days: Two take-home doses per week maximum.
-181-270 days: Three take-home doses per week maximum.
-Days 1-90: No take-home doses.
-91-180 days: One take-home dose per week.
-181-270 days: Two take-home doses per week.
-271-360 days: Three take-home doses per week.
-361 days onward: Six take-home doses per week. (§ 19.8.10)
-First 30 days of treatment: No take-home doses except the holiday dose.
-31-90 days: One take-home dose per week plus the holiday dose.
-91-180 days: Two take-home doses per week.
-181-270 days: Three take-home doses per week.
-Phase 1: One take-home dose per week.
-Phase 2 and 3: Up to two take-home doses per week.
-Phase 4: Up to three take-home doses per week.
-Phase I: Days 31-90 – one take-home dose per week.
-Phase II: Days 91-180 – two take-home doses per week.
-Phase III: Days 181-1 year – three take-home doses per week with no more than a 2-day supply at any one time.
-Phase IV: After 1 year – four take-home doses per week with no more than a 2-day supply at a time.
-Phase V: After 2 years – five take-home doses per week with no more than a 3-day supply at a time.
-Remainder of the first year: Maximum 6-day supply.
-For the remaining months of the year, 6 days of medication maximum per week.
-Second year of treatment: Maximum 13-day supply.
-After 1 year of continuous treatment, maximum 2-week take-home supply.
-After 2 years of continuous treatment: Maximum 1-month supply with monthly visits.
-After 2 years, maximum of 1-month take-home supply, but must make monthly visits.
-Under emergency conditions, a program may issue 14 consecutive days of take-home doses without notification of the Center for Substance Abuse Treatment (CSAT); must notify the state narcotic authority and request an exception to dosing procedures.
-Certain exceptions for emergencies. (§ 19.8.12)
Exceptions:
-Medical director or program physician may grant an exception, subject to written approval from the state narcotic authority, for clients with serious physical disabilities or subject to exceptional hardship.
-Exceptions made only as provided by federal OTP regulations and as approved by the state methadone authority. (§ 7.32.8.23(C), (D))
-State authority may approve exceptional unsupervised doses on a case-by-case basis if the program physician applies. (§ 64-90-41.6, 41.7)
-Phase VI: After 3 years in treatment – six take-home doses per week. (§ 65D-30.014(5)(e))
-State narcotic authority may grant additional exceptions for medical emergency or natural disaster. (1:340.11, 1:340.16)

Hours of operation:
-Must be open Monday through Saturday.
-Must be open 7 days a week with the optional exception of nine specified holidays. (1:340.6(16))
-Must be open 7 days weekly, including all holidays. (§ 19.8.4.2)
-Must have medicating hours and counseling hours that accommodate clients, including 2 hours of medicating time daily outside of 9:00 a.m. to 5:00 p.m.
-Must be open every day of the week except for federal and state holidays, and Sundays, and be closed only as allowed in advance in writing by CSAT and the state methadone authority. (§ 7.32.8.18(C))
-Must provide 24-hour, 7-day-a-week access to designated program staff so that patient emergencies may be addressed and dosage levels verified. (§ 64-90-20.1.c)
-Must medicate on Sundays according to client needs.
-Must be open 7 days per week, but may close for eight holidays and two training days per year. (§ 64-90-41.6.a)
-Must give a minimum of a 7-day notice for observed holidays.
-No two holidays can occur in immediate succession unless the provider is granted an exemption by the federal authority.
-On days when the provider is closed, services must be accessible to clients for whom take-out methadone is not clinically advisable.
(§ 65D-30.014(4)(g))

Prevention of multiple enrollment:
-Must participate in regional registry activities for the purpose of sharing client identifying information with other providers located within a 100-mile radius, to prevent the multiple enrollment of clients at more than one provider.
-Proposed programs must include in their applications a system to prevent a client’s multiple program registration. (1:340.4(3)(k))
-Must make and document good faith efforts to determine that a patient seeking admission is not receiving opioid dependency treatment medication from any other source, within the bounds of all applicable patient confidentiality laws and regulations.
-Must have a procedure for ensuring that patients are not enrolled in more than one OTP.
-Prior to admitting a client, must confirm using the Office of Substance Abuse’s system that the client is not currently enrolled in another OTP. If the system is unavailable, must check with all OTPs within 3 calendar days of admission. (§ 19.8.4.4-5)
-When practicable, must obtain a release of information from the patient in order to check the records by telephone or fax of every OTP within 100 miles to ensure that the patient is not currently enrolled in other programs.
-The release must state that only prior admissions may be the subject of inquiry, not contacts without admission.
-A record of violations by individual clients shall become part of the record maintained in an automated system that may be accessed by all participating providers. (§ 65D-30.014(4)(f)(1), (7))
-Must confirm that the patient is not receiving treatment from any other OTP, except under exceptional circumstances, within a 50-mile radius of its location, by contacting any such program or by using the central registry, when established.
-Results of the check must be placed in the clinical record.
-The Department of Health may establish an Internet-based registry of all current patients of a New Mexico OTP for the purpose of creating a system that prevents patients from receiving medication from more than one OTP. Each OTP, as a condition of approval to operate, shall participate in the central registry as directed by the Department of Health. (§ 7.32.8.19(F), (G))
-The check shall be duplicated if the patient is discharged and readmitted at any time. (§ 64-90-30)

West Virginia incorporates the federal OTP regulations in their entirety by reference and provides that to the extent there is a conflict between federal regulations or standards and the standards set forth in this rule, the more stringent standard applies. W. Va. Code St. R. § 64-90-2. Dosing refers to standards for doses of methadone.

The following information on methadone-associated overdose deaths in Florida, Kentucky, Maine, New Mexico, and West Virginia was taken from data and research in these states.

History of substance abuse: 78%
Used diverted pharmaceuticals: 63%
Doctor shopped (five or more prescribing clinicians in the year before death): 21%

Florida Department of Law Enforcement, Medical Examiners Commission, Drugs Identified in Deceased Persons by Florida Medical Examiners: 2007 Report (June 2008).
L.B.E. Shields et al., “Methadone Toxicity Fatalities: A Review of Medical Examiner Cases in a Large Metropolitan Area,” Journal of Forensic Sciences, vol. 52, no. 6 (2007).
Marcella H. Sorg and Margaret Greenwald, Maine Drug-Related Mortality Patterns: 1997-2002, a special report prepared in cooperation with the Maine Office of the Attorney General and Maine Office of Substance Abuse, December 2002.
N. Shah, S. L. Lathrop, and M. G. Landen, “Unintentional Methadone-Related Overdose Death in New Mexico (USA) and Implications for Surveillance, 1998-2002,” Addiction, vol. 100, no. 2 (2005).
A. J. Hall et al., “Patterns of Abuse Among Unintentional Pharmaceutical Overdose Fatalities,” JAMA, vol. 300, no. 22 (2008).
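The phased take-home schedules excerpted in the state tables above can be read as simple lookup rules from days in continuous treatment to a weekly take-home maximum. As an illustrative sketch only, the following encodes the day-range schedule cited to § 19.8.10; the grouping of interleaved table entries into a single schedule, and the function name, are our reading, not an official rule implementation:

```python
# Illustrative sketch: the phased take-home schedule cited to § 19.8.10 in
# the tables above, expressed as a lookup table. Not an official tool; the
# assignment of interleaved entries to one schedule is our inference.
def max_takehome_doses_per_week(days_in_continuous_treatment: int) -> int:
    """Maximum take-home doses per week at a given point in treatment."""
    schedule = [
        (90, 0),   # days 1-90: no take-home doses
        (180, 1),  # days 91-180: one take-home dose per week
        (270, 2),  # days 181-270: two per week
        (360, 3),  # days 271-360: three per week
    ]
    for last_day, weekly_max in schedule:
        if days_in_continuous_treatment <= last_day:
            return weekly_max
    return 6       # day 361 onward: six take-home doses per week
```

A rule table like this makes the stepwise loosening of supervision explicit: the weekly maximum rises only at fixed day thresholds, with the large jump (three to six doses per week) reserved for patients past one year of continuous treatment.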
This references the effective year of the state law or regulation that provided the authority to establish the state’s prescription drug monitoring program. -CA-000027-OA, 2008 WL 2671 (Ky. Ct. App. June 1, 2008). A Kentucky court also recently determined that a criminal defendant had the right to obtain KASPER data for discovery purposes during a criminal proceeding. The court found that the defendant’s right to due process and compulsory process took precedence over any limitations in access authority under state law. See Commonwealth, Cabinet for Health & Family Services v. Bartlett, No. 2008-CA-000046-OA, 2008 WL 2690 (Ky. Ct. App. June 1, 2008). In addition to the contact named above, key contributors to this report were Bonnie Anderson, Assistant Director; Lisa A. Lusk; Lisa Motley; Christina Ritchie; Hemi Tewarson; and Timothy Walker.
Prescription drug abuse is a growing public health problem. In particular, methadone-associated overdose deaths--those in which methadone may have caused or contributed to the death--have risen sharply. Before the late 1990s, methadone was used mainly to treat opioid addiction but has since been increasingly prescribed to manage pain. Taken too often, in too high a dose, or with other drugs or alcohol, methadone can cause serious side effects and death. Methadone-associated overdose deaths can occur under several different scenarios, including improper dosing levels by practitioners, misuse by patients who may combine methadone with other drugs, or abuse--using the drug for nontherapeutic purposes. This report examines the regulation of methadone, factors that have contributed to the increase in methadone-associated overdose deaths, and steps taken to prevent methadone-associated overdose deaths. GAO reviewed documents, laws and regulations, data, and research from relevant state and federal agencies, including the Drug Enforcement Administration (DEA) and the Substance Abuse and Mental Health Services Administration (SAMHSA). GAO also interviewed federal officials, officials in five selected states, officials from professional associations and advocacy groups, and experts in pain management, addiction treatment, and forensic sciences. Methadone is regulated as a controlled substance, under federal and state laws and regulations, when used for pain management and addiction treatment. When methadone is used for pain management, it is regulated under federal and state laws and regulations that apply to controlled substances generally and that do not impose requirements unique to methadone. For addiction treatment, however, federal and state laws and regulations impose additional requirements that are specific to the use of methadone in opioid treatment programs (OTP), which treat and rehabilitate people addicted to heroin or other opioids. 
GAO, however, only reviewed relevant state laws and regulations for five selected states. Although information on methadone-associated overdose deaths is limited, available data suggest that methadone's growing use for pain management has made more of the drug available, thus contributing to the rise in methadone-associated overdose deaths. Methadone prescriptions for pain management grew from about 531,000 in 1998 to about 4.1 million in 2006--nearly eightfold. Methadone has unique pharmacological properties that make it different from other opioids, and as a result, a lack of knowledge about methadone among practitioners and patients has been identified as a factor contributing to these deaths. DEA data suggest that abuse of methadone diverted from its intended purpose has also contributed to the rise in overdose deaths as the number of methadone drug items seized by law enforcement and analyzed in forensic laboratories increased 262 percent, from 2,865 in 2001 to 10,361 in 2007. Nonetheless, data and research from five states GAO reviewed suggest that the specific circumstances of these deaths are variable because of drug combinations and unknown sources of methadone. GAO identified selected efforts to prevent methadone abuse and overdose deaths that focused on education, safety, and monitoring. For example, to educate practitioners about using methadone for pain management and addiction treatment, SAMHSA is establishing a physician clinical support system for methadone. To improve safety, in 2006, the Food and Drug Administration (FDA) approved a revised label for methadone tablets that included new safety information regarding the use of methadone for pain and modified dosage instructions for those beginning pain management treatment with methadone. Additionally, to prevent diversion and abuse of controlled substances such as methadone, DEA reports that as of February 2009, 31 states have established prescription monitoring programs. 
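The two growth figures cited above reduce to simple ratio arithmetic. As an illustrative check (the values are from the report; the helper function names are ours):

```python
# Quick arithmetic check of the growth figures cited in the text.
def fold_increase(old: float, new: float) -> float:
    """How many times larger the new value is than the old."""
    return new / old

def percent_increase(old: float, new: float) -> float:
    """Percentage growth from the old value to the new value."""
    return (new - old) / old * 100

# Methadone prescriptions for pain: ~531,000 (1998) to ~4.1 million (2006).
prescription_fold = fold_increase(531_000, 4_100_000)  # about 7.7, "nearly eightfold"

# Methadone drug items seized and analyzed: 2,865 (2001) to 10,361 (2007).
seizure_pct = percent_increase(2_865, 10_361)          # about 262 percent
```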
Some officials and experts cautioned that any prevention efforts focused on methadone alone might unintentionally shift similar problems to a different drug. GAO received comments from the Department of Health and Human Services stating that FDA recently notified manufacturers of certain opioid drug products, such as methadone, that they must take certain steps to ensure that the benefits of these drugs continue to outweigh the risks. The Department of Justice provided GAO with technical comments.
As of September 2007, the Iraqi government included 34 ministries responsible for providing security and essential government services. U.S. capacity development programs target 12 key ministries: State and USAID focus on 10 civilian ministries while DOD is responsible for the Ministries of Defense and Interior. These 12 ministries employ 67 percent of the Iraqi government workforce and are responsible for 74 percent of the 2007 budget (see table 1). U.S. efforts to help build the capacity of the Iraqi national government are characterized by (1) multiple U.S. agencies leading individual efforts without overarching direction from a lead entity or a strategic approach that integrates their efforts with Iraqi government priorities and (2) shifting time frames and priorities in response to deteriorating conditions in Iraq. As of May 2007, six U.S. agencies were implementing about 53 projects at individual ministries and other national Iraqi agencies. State, USAID, and DOD lead the largest number of programs and provide about 384 U.S. military, government, and contractor personnel to work with the ministries. DOD provides over half (215) of the personnel to the Ministries of Defense and Interior to advise Iraqi staff in developing plans and policies, building ministry budgets, and managing personnel and logistics. State and USAID together provide an additional 169 advisors to the 10 key civilian ministries. Although State, USAID, and DOD have improved the coordination of their capacity-building efforts since early 2007, there is no lead agency or strategic plan to provide overarching guidance. Two factors explain the lack of a lead agency. First, from their inception in 2003, U.S. ministry capacity-building efforts evolved without an overall plan or the designation of a lead entity. U.S. agencies provided distinct assistance to four successive governments in response to Iraq’s immediate needs, according to U.S. officials. 
This approach first began under the Coalition Provisional Authority whereby U.S. advisors ran the ministries using U.S. and Iraqi funds and made personnel and budget decisions. Attempts to create an overall capacity development plan were dropped in late 2003 after the United States decided to transfer control of the ministries to an interim government. A second factor has been the delay in implementing recommendations from a 2005 State assessment that characterized U.S. capacity development programs as uncoordinated, fragmented, duplicative and disorganized. State recommended a unified effort among State, DOD, and USAID, with the latter providing overall coordination and leadership. The recommendations were not implemented. However, in July 2007, State named an ambassador to direct civilian capacity-building programs, including USAID efforts. Shifting priorities also have affected U.S. capacity development efforts, particularly in response to continued security problems. In early 2007, the U.S. mission refocused its capacity development program as part of the surge strategy associated with the administration’s New Way Forward. Rather than focusing on 12 civilian and security ministries, State and DOD targeted 6 key ministries (Interior, Defense, Planning, Finance, Oil, and Electricity) and focused on short-term improvements to address immediate problems with budget execution, procurement, and contracting. Accordingly, U.S. capacity development efforts shifted from long-term institution building to immediate efforts to help Iraqi ministries spend their capital budgets and deliver better services to the Iraqi people. Improvements were expected by September 2007. U.S. efforts to develop Iraqi ministerial capacity face four key challenges that pose a risk to their success and long-term sustainability. 
First, Iraqi government institutions have significant shortages of personnel with the skills to perform the vital tasks necessary to provide security and deliver essential services to the Iraqi people. When the Coalition Provisional Authority (CPA) removed Ba’athist party leaders and members from upper-level management in government, universities, and hospitals in 2003, most of Iraq’s technocratic class was forced out of government. A September 2006 U.S. embassy assessment noted that the government had significant human resource shortfalls in most key civilian ministries. The majority of staff at all but 1 of the 12 ministries surveyed were inadequately trained for their positions, and a quarter of them relied heavily on foreign support to compensate for their human and capital resource shortfalls. The lack of trained staff has particularly hindered the ability of ministries to develop and execute budgets. For example, in 2006, the Iraqi government spent only 22 percent of its capital budget. From January through July 2007, spending improved, with about 24 percent of capital budgets spent. However, as we reported in early September 2007, it is unlikely that Iraq will spend the $10 billion it allocated for 2007 for capital budgets by the end of this year. Second, Iraq’s government confronts significant challenges in staffing a nonpartisan civil service and addressing militia infiltration of key ministries. In June 2007, DOD reported that militias influenced every component of the Ministry of Interior. In particular, the Ministry has been infiltrated by members of the Supreme Islamic Council of Iraq and its Badr Organization, as well as Muqtada al-Sadr’s Mahdi Army. Furthermore, the Iraqi civil service remained hampered by staff whose political and sectarian loyalties jeopardized the civilian ministries’ abilities to provide basic services and build credibility among Iraqi citizens, according to U.S. government reports and international assessments.
DOD further found that government ministries and budgets were sources of power for political parties, and ministry staff positions were awarded to party cronies. The use of patronage hindered capacity development because it led to instability in the civil service, as many staff were replaced whenever the government changed or a new minister was named, according to U.S. officials. Third, according to State, widespread corruption undermines efforts to develop the government’s capacity by robbing it of needed resources, some of which are used to fund the insurgency; by eroding popular faith in democratic institutions seen to be run by corrupt political elites; and by spurring capital flight and reducing economic growth. According to a State assessment, one-third of the 12 civilian ministries surveyed had problems with “ghost employees” (that is, nonexistent staff listed on the payroll). In addition, the procedures to counter corruption adopted at all but one of the civilian ministries surveyed were partly effective or ineffective. Similar problems existed in the security ministries, according to DOD. Finally, the security situation remains a major obstacle to developing capacity in areas vital to the government’s success. The high level of violence hinders U.S. advisors’ access to their counterparts in the ministries, increases absenteeism among ministry employees, and contributes to “brain drain” as ministry employees join the growing number of Iraqis leaving the country. According to a UN report, between March 2003 and June 2007, about 2.2 million Iraqis left the country and 2 million were internally displaced. According to U.S. and international officials, the flow of refugees exacerbates Iraqi ministry capacity shortfalls because those fleeing tend to be disproportionately from the educated and professional classes. A November 2006 UN report stated that an estimated 40 percent of Iraq’s professional class had left since 2003.
In February 2007, State officials provided GAO with a three-page, high-level outline proposing a U.S. strategy for strengthening Iraqi ministerial capacity. This document was a summary with few details and no timeline. A senior USAID official indicated that it is uncertain whether the high-level summary will be developed into a strategy, although the administration received $140 million in funding for its capacity development efforts in fiscal year 2007 and requested $255 million for fiscal year 2008. GAO has previously identified the desirable elements of a strategy: a clear purpose, scope, and methodology; a delineation of U.S. roles, responsibilities, and coordination; desired goals, objectives, and activities tied to Iraqi priorities; performance measures; and a description of costs, resources needed, and risks. Table 2 summarizes the key elements of a strategy and provides examples of the status of the U.S. approach as of September 2007. As table 2 shows, U.S. agencies have developed some of these elements in their programs for capacity building at individual ministries, but not as part of an overall U.S. strategy. For example: We found little evidence that the U.S. government has clearly defined the purpose, scope, and methodology for developing an overall strategy. Agencies have provided some limited information on why an overall strategy is needed, what it will cover, and how it will be developed. A Joint Task Force on Capacity Development, established in October 2006, has helped U.S. agencies better delineate roles and responsibilities and coordinate their efforts. However, we found no plans on how the capacity development programs of State, USAID, and DOD will be unified and integrated. While U.S. agencies have clearly identified the overall goals of capacity development at the Iraqi ministries, most U.S. efforts lack clear ties to Iraqi priorities for all ministries.
While DOD is developing measures to assess progress at the security ministries, such measures have not been developed for Iraqi civilian ministries. U.S. agencies have not identified the costs and resources needed to complete capacity development programs beyond the budget for fiscal year 2007 and the 2008 budget request. Agencies have not provided information on how future resources will be targeted to achieve the desired end-state or how the risks we identified will be addressed. In addition, efforts to improve cooperation with the UN and other international donor nations and organizations have encountered difficulties. For example, U.S. efforts are to be coordinated with the Iraqi government and the international donor community through the Capacity Development Working Group. However, the group did not meet for about a year after forming in late 2005 and did not meet from February through May 2007. Current U.S. efforts to build the capacity of the Iraqi government involve multiple U.S. agencies working with Iraqi counterparts on many issues. GAO, for example, is working with the Iraqi Board of Supreme Audit to enhance its auditing skills and capacity. However, U.S. efforts to improve the capacity of Iraq’s ministries must address significant challenges if they are to achieve their desired outcomes. U.S. efforts lack an overall strategy, no lead agency provides overall direction, and U.S. priorities have been subject to numerous changes. Finally, U.S. efforts confront shortages of competent personnel at Iraqi ministries, sectarian infiltration of ministry staffs, and pervasive corruption. The risks are further compounded by the ongoing violence in Iraq as U.S. civilian advisors have difficulties meeting with their Iraqi counterparts and skilled Iraqi professionals leave the country. Congress appropriated $140 million in May 2007 for capacity building and the administration requested up to $255 million for fiscal year 2008. We believe that future U.S.
investments must be conditioned on the development of an overall integrated U.S. strategy that clearly articulates agency roles and responsibilities, establishes clear goals, delineates the total costs needed, and assesses the risk to U.S. efforts. The strategy would also need to consider any expanded role of multilateral organizations, including the United Nations and World Bank. GAO recommends that State, in consultation with the Iraqi government, complete an overall integrated strategy for U.S. capacity development efforts. Key components of an overall capacity development strategy should include a clear purpose, scope, and methodology; a clear delineation of U.S. roles, responsibilities, and coordination, including the designation of a lead agency; goals and objectives based on Iraqi-identified priorities; performance measures based on outcome metrics and milestones; and a description of how resources will be targeted to achieve the desired end-state. Given the absence of an integrated capacity development strategy, it is unclear how further appropriations of funding for ministry capacity development programs will contribute to the success of overall U.S. efforts in Iraq. Congress should consider conditioning future appropriations on the completion of an overall integrated strategy. In commenting on a draft of the report accompanying this testimony, State and USAID noted (1) their concern over our recommendation to condition future appropriations for capacity development on the completion of a strategy; (2) the recent appointment of an ambassador to supervise all short- and medium-term capacity development programs; and (3) the need to tailor capacity development needs to each Iraqi ministry. In response to the agencies’ first comment, we do not recommend stopping U.S. investment in capacity development; the $140 million in supplemental funding appropriated in fiscal year 2007 remains available for the agencies to continue their efforts.
Rather, we recommend that Congress condition future funding on the development of an overall integrated strategy. We acknowledge that State named an ambassador to coordinate the embassy’s economic and assistance operations. However, this action occurred in July 2007, underscoring our point that U.S. capacity development efforts have lacked overall leadership and highlighting the need for an overall integrated strategy. Finally, our recommendation does not preclude U.S. agencies from tailoring capacity development efforts to meet each ministry’s unique needs. A strategy ensures that a U.S.-funded program has consistent overall goals, clear leadership and roles, and assessed risks and vulnerabilities. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other Members have at this time. For questions regarding this testimony, please contact me at (202) 512-5500, or Mr. Joseph A. Christoff, Director, International Affairs and Trade, at (202) 512-8979 or christoffj@gao.gov. Other key contributors to this statement were Tetsuo Miyabara, Patrick Hickey, Lynn Cothern, Lisa Helmer, Stephen Lord, and Judith McCloskey. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The development of competent and loyal government ministries is critical to stabilizing and rebuilding Iraq. The ministries are Iraq's largest employer, with an estimated 2.2 million government workers. U.S. efforts to build the capacity of Iraqi ministries include programs to advise and help Iraqi government employees develop the skills to plan programs, execute budgets, and effectively deliver services. The administration received $140 million in fiscal year 2007 to fund U.S. capacity-building efforts and requested an additional $255 million for fiscal year 2008. This testimony discusses (1) U.S. efforts to develop ministry capacity, (2) the key challenges to these efforts, and (3) the extent to which the U.S. government has an overall integrated strategy. This statement is based on GAO-08-117. To accomplish our report objectives, we reviewed reports from and interviewed officials of U.S. agencies, the Iraqi government, the United Nations, and the World Bank. We conducted fieldwork in Washington, D.C.; New York City; Baghdad, Iraq; and Amman, Jordan. Over the past 4 years, U.S. efforts to help build the capacity of the Iraqi national government have been characterized by (1) multiple U.S. agencies leading efforts without overarching direction from a lead agency or a strategic plan that integrates their efforts; and (2) shifting time frames and priorities in response to deteriorating conditions in Iraq. As of May 2007, six U.S. agencies were implementing about 53 projects at individual ministries and other national Iraqi agencies. Although the Departments of State and Defense and the U.S. Agency for International Development (USAID) have improved the coordination of their capacity-building efforts, there is no lead agency or strategic plan to provide overarching guidance. U.S. efforts to develop Iraqi ministerial capacity face four key challenges that pose risks to their success and long-term sustainability.
First, Iraqi government institutions have significant shortages of personnel with the skills to perform the vital tasks necessary to provide security and deliver essential services to the Iraqi people. Second, Iraq's government confronts significant challenges in staffing a nonpartisan civil service and addressing militia infiltration of key ministries. Third, widespread corruption undermines efforts to develop the government's capacity by robbing it of needed resources, some of which are used to fund the insurgency. Finally, violence in Iraq hinders U.S. advisors' access to Iraqi ministries, increases absenteeism among ministry employees, and contributes to the growing number of professional Iraqis leaving the country. The U.S. government is beginning to develop an overall strategy for ministerial capacity development, although agencies have been implementing separate programs since 2003. GAO's work in this area shows that an overall strategy for capacity development should include (1) a clear purpose, scope, and methodology; (2) a delineation of U.S. roles and responsibilities and coordination with other donors including the United Nations; (3) goals and objectives linked to Iraqi priorities; (4) performance measures and milestones; and (5) costs, resources needed, and assessment of program risks. U.S. ministry capacity efforts have included some but not all of these components. For example, agencies are working to clarify roles and responsibilities. However, U.S. efforts lack clear ties to Iraqi-identified priorities at all ministries, clear performance measures, and information on how resources will be targeted to achieve the desired end-state. State and USAID noted concerns over our recommendation to condition further appropriations and cited the appointment of an ambassador to supervise civilian capacity development programs. GAO does not recommend stopping U.S. investment in capacity development.
The $140 million in fiscal year 2007 funds remains available to continue efforts while developing an integrated strategy. In addition, the U.S. ambassador arrived in Iraq in July 2007, underscoring our point that U.S. efforts lacked overall leadership and highlighting the need for an overall integrated strategy.
The department’s Unified Command Plan sets forth basic guidance to all combatant commanders and establishes the missions, responsibilities, and areas of geographic responsibility among all the combatant commands. There are currently nine combatant commands—six geographic and three functional. The six geographic combatant commands have responsibilities for accomplishing military operations in regional areas of the world. The three functional combatant commands operate worldwide across geographic boundaries and provide unique capabilities to the geographic combatant commands and the military services. In addition, each geographic combatant command is supported by multiple service component commands that help provide and coordinate service-specific forces, such as units, detachments, organizations and installations, to help fulfill the commands’ current and future operational requirements. Figure 1 is a map of the areas of responsibility and headquarters locations of the geographic combatant commands, to include some of their subordinate unified commands and their respective service component commands. According to DOD Directive 5100.03, Support of the Headquarters of Combatant and Subordinate Unified Commands, the military departments—as combatant command support agents—are responsible for programming, budgeting, and funding the administrative and logistical support of the headquarters of the combatant commands and subordinate unified commands. On an annual basis the three military departments assess needs and request funding as part of their respective operation and maintenance budget justification to meet this requirement to support the combatant commands and subordinate unified commands. The directive assigns each military department responsibility for specific combatant commands and subordinate unified commands. Table 1 provides a listing of the combatant commands, their subordinate unified commands, and the military departments that support them. 
Unless otherwise directed by the President or the Secretary of Defense, the commanders of these combatant commands are given authority to organize the structure of their commands as they deem necessary to carry out assigned missions and maintain staff to assist them in exercising authority, direction, and control over subordinate unified commands and other assigned forces. The commands' structure may include a principal staff officer, personal staff to the commander, a special staff group for technical, administrative, or tactical advice, and other groups of staff that are responsible for managing personnel, ensuring the availability of intelligence, directing operations, coordinating logistics, preparing long-range or future plans, and integrating communications systems. The commands may also have liaisons or representatives from other DOD agencies and U.S. government organizations integrated into their staffs to enhance the commands' effectiveness in accomplishing their missions. While the commands generally conform to these organizational principles, there may be variations in a command's structure based on its unique mission areas and responsibilities. The staff of a combatant command, subordinate unified command, or a joint task force is generally composed of military and civilian personnel drawn from the Air Force, Army, Navy, and Marine Corps, personnel from other DOD components, interagency personnel, and other personnel associated with contracted services. Chairman of the Joint Chiefs of Staff Instruction 1001.01A, Joint Manpower and Personnel Program, outlines the process for determining and documenting requirements for manpower at joint organizations, including the combatant commands. The instruction states that commands should be structured to the minimum essential size required to meet approved missions and average workload expected for at least the next 36 months.
The commands are to consider a number of factors when determining manpower requirements, including the total number of positions needed and the mix of military, civilian, and contractor support needed. After the commands' manpower requirements have been determined and validated, the requirements are documented on each command's manning document, called the Joint Table of Distribution, which contains permanent authorized positions for military, civilians, and other personnel responsible for managing the day-to-day operations of the command. Other processes exist to identify additional manpower that commands may require to shift to a wartime, contingency, or mobilization footing, and that may be required to fill temporary organizations that are established to meet short-term mission requirements. Since fiscal year 2001, the number of authorized military and civilian positions and mission and headquarters-support costs devoted to the five geographic combatant commands that we reviewed substantially increased. In our analysis of data provided by the commands, we found considerable increases in the number of authorized military and civilian positions—about 50 percent from fiscal year 2001 through fiscal year 2012—and in the costs for mission and headquarters-support—more than doubling from fiscal year 2007 through fiscal year 2012—at the five combatant commands that we reviewed. Data on the service component commands also indicated that authorized military and civilian positions increased by more than 30 percent from fiscal years 2008 through 2012 and mission and headquarters-support costs increased by more than 40 percent from fiscal years 2007 through 2012. In addition to data on authorized military and civilian positions, we found that the data on the number of personnel performing contract services across the combatant commands and service component commands varied or was unavailable, and thus trends could not be identified.
The authorized number of military and civilian positions for the five geographic combatant commands that we reviewed rose from about 6,800 in fiscal year 2001 to more than 10,100 in fiscal year 2012, primarily due to the addition of new organizations and missions. Our analysis of data showed that the establishment of U.S. Northern Command in fiscal year 2003 and U.S. Africa Command in fiscal year 2008 drove the increase in the total number of authorized military and civilian positions since fiscal year 2001. U.S. Northern Command was established in fiscal year 2003 to provide command and control over DOD’s homeland defense mission and to coordinate defense support of civil authorities. U.S. Africa Command was established in fiscal year 2008 to focus U.S. security efforts within the African continent and strengthen security cooperation with African countries, which had been primarily the responsibility of U.S. European Command. At the remaining combatant commands, our analysis showed growth in the number of authorized positions in each command’s theater special operations command, which further drove overall position increases. For example, from fiscal years 2001 through 2012 the number of authorized positions at U.S. Pacific, European, and Southern Commands’ theater special operations commands increased by almost 400 positions, largely to fulfill increased mission requirements. Figure 2 shows the increases or changes in total authorized military and civilian positions at the five geographic combatant commands that we reviewed. The geographic combatant commands have also become much more reliant on civilian personnel. We found that the number of authorized civilian positions at the combatant commands almost doubled from about 2,370 in fiscal year 2004 to about 4,450 in fiscal year 2012. In contrast, the number of authorized military positions decreased about 9 percent from approximately 6,250 to 5,670 in the same period. 
This changed the composition of the commands markedly. In fiscal year 2004, military positions made up about three-quarters of total authorized positions supporting the combatant commands that we reviewed; however, due to the substantial increase in the number of authorized civilian positions, the proportion of military positions at the combatant commands is now just over half. According to DOD officials, the increase in authorized civilian positions is due in part to DOD-directed efforts to convert positions filled by military personnel or contractors to civilians. As part of the Secretary of Defense's 2010 efficiency initiative, baselines were established for the number of authorized civilian positions at the combatant commands for fiscal years 2011 through 2013. In June 2011, the Secretary of Defense directed a series of initiatives designed to more effectively manage combatant command manpower and funding, which further set baselines for civilian manpower at the combatant commands for fiscal years 2013 through 2017. Any growth above these baselines in fiscal years 2013 through 2017 has to be revalidated by the Joint Staff and military services, and must be based on workload and funding considerations. The effect of these baselines on civilian positions is reflected in our analysis of the number of authorized positions at the five geographic combatant commands, with growth in civilian positions slowing significantly from fiscal years 2011 through 2012. Figure 3 shows changes in the combatant commands' number of authorized military and civilian positions from fiscal years 2004 through 2012. The availability of data on the number of contractor personnel or full-time equivalents varied across the combatant commands, and thus trends could not be identified. DOD officials stated the department generally tracks and reports expenditures for contract services, and that the combatant commands were not required to maintain historical data on the number of contractor personnel.
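The position changes described above can be sanity-checked with a short calculation. The sketch below uses only the approximate, rounded figures cited in this report, not the underlying command data, so the results are illustrative rather than exact.

```python
# Back-of-the-envelope check of the authorized-position figures cited above.
# All inputs are the approximate values reported in the text.
total_2001, total_2012 = 6_800, 10_100   # total authorized positions, five commands
civ_2004, civ_2012 = 2_370, 4_450        # authorized civilian positions
mil_2004, mil_2012 = 6_250, 5_670        # authorized military positions

total_growth = (total_2012 - total_2001) / total_2001
print(f"Total growth, FY2001-FY2012: {total_growth:.0%}")          # ~49%, "about 50 percent"

print(f"Civilian growth: {civ_2012 / civ_2004:.2f}x")              # ~1.88x, "almost doubled"
print(f"Military change: {(mil_2012 - mil_2004) / mil_2004:.0%}")  # ~-9%

mil_share_2004 = mil_2004 / (mil_2004 + civ_2004)
mil_share_2012 = mil_2012 / (mil_2012 + civ_2012)
print(f"Military share: FY2004 {mil_share_2004:.0%}, FY2012 {mil_share_2012:.0%}")
# ~73% ("about three-quarters") falling to ~56% ("just over half")
```

Each computed value is consistent with the qualitative characterizations in the text.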
We found that the combatant commands had taken initial steps to collect data on contractor full-time equivalents and that reliance on personnel performing contract services varies across the combatant commands. For example, U.S. Northern Command reported having 460 contractor full-time equivalents at its command in fiscal year 2012, whereas U.S. European Command reported having 169 contractor full-time equivalents supporting the command in fiscal year 2012. Our work over the past decade on DOD's contracting activities has noted the need for DOD to obtain better data on its contracted services and personnel to enable it to make more informed management decisions, ensure department-wide goals and objectives are achieved, and have the resources to achieve desired outcomes. In response to GAO's past work, DOD has outlined its approach to document contractor full-time equivalents and collect manpower data from contractors. However, DOD does not expect to fully collect contractors' manpower data until fiscal year 2016. The Secretary of Defense, as part of his 2010 efficiency initiative, directed the department to reduce funding for service-support contracts by 10 percent per year across the department for fiscal years 2011 through 2013. In June 2011, the Secretary of Defense established limits on service support contract expenditures at the combatant commands in fiscal years 2011 through 2013. Our analysis of data provided by the military services showed that total authorized military and civilian positions at the service component commands supporting the geographic combatant commands we reviewed increased by about one-third from about 5,970 in fiscal year 2008 to about 7,800 in fiscal year 2012. The increases in authorized military and civilian positions at the service component commands supporting U.S. European Command account for more than one-third of the total increase in authorized positions across all the service component commands.
Among the services, the Army's service component commands saw the greatest increase in authorized positions, accounting for about 85 percent of the total increase in authorized positions. The service component commands fulfill dual roles: organizing, training, and equipping assigned service-specific forces while also assisting the combatant commands in their employment during military operations. According to DOD officials, service component commands with assigned forces, such as Pacific Air Forces and Army Europe, are likely to have larger staffs than service component commands that do not have assigned forces, such as Marine Forces Africa. Figure 4 shows the increase in authorized positions at the service component commands that we reviewed. Similar to the data on the number of personnel performing contract services at the combatant commands, we found that the data on the number of personnel performing contract services at the service component commands varied or was unavailable, and thus trends could not be identified. We found that some service component commands do not maintain data on the number of personnel performing contract services and others used varying methodologies to track these personnel, counting the number of contractors on hand or the number of identification badges issued. When adjusted for inflation, total mission and headquarters-support costs from fiscal years 2007 through 2012—including costs for civilian pay, contract services, travel, and equipment—more than doubled at the five geographic combatant commands we reviewed. The cost growth, from about $500 million in fiscal year 2007 to about $1.1 billion in fiscal year 2012, primarily was due to increases in contract services and civilian pay. For example, U.S.
Southern Command’s mission and headquarters-support costs more than quadrupled from about $45 million in fiscal year 2007 to about $202 million in fiscal year 2012; more than half of the increase was attributable to contract services, and 20 percent of the increase was attributable to civilian pay. In addition, U.S. Pacific Command’s mission and headquarters-support costs increased from about $175 million in fiscal year 2007 to about $246 million in fiscal year 2012; about 65 percent of these cost increases was attributable to civilian pay. Figure 5 shows the overall increase or changes in the mission and headquarters-support costs at the five geographic commands that we reviewed from fiscal years 2007 through 2012. When adjusted for inflation, total mission and headquarters-support costs increased by more than 40 percent at the service component commands we reviewed from fiscal years 2007 through 2012. The costs grew from about $430 million in fiscal year 2007 to about $605 million in fiscal year 2012. The increase primarily was due to the establishment of U.S. Africa Command’s supporting service component commands, which first reported costs in fiscal year 2009. U.S. Africa Command’s mission and headquarters-support costs were $71 million in fiscal year 2012. In addition, the service component commands at U.S. European and Pacific Commands experienced cost increases from fiscal years 2007 through 2012 of about $53 million and $54 million, respectively, which accounted for the majority of the remaining increase in mission and headquarters-support costs. Figure 6 shows the increase in the mission and headquarters-support costs at the service component commands that we reviewed from fiscal years 2007 through 2012. The Army’s service component commands accounted for more than half of the total increase in mission and headquarters-support costs across all service component commands over the period.
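As a simple consistency check on the cost growth described above, the sketch below recomputes the growth multiples and percentages from the rounded dollar figures cited in the text (in millions); it is illustrative only, since the report's underlying data are not reproduced here.

```python
# Consistency check of the inflation-adjusted cost figures cited above.
# Values are the approximate amounts reported in the text, in millions of dollars.
southcom_2007, southcom_2012 = 45, 202
print(f"U.S. Southern Command: {southcom_2012 / southcom_2007:.1f}x")  # ~4.5x, "more than quadrupled"

cocom_2007, cocom_2012 = 500, 1_100     # five geographic combatant commands
print(f"Combatant commands: {cocom_2012 / cocom_2007:.1f}x")           # 2.2x, "more than doubled"

comp_2007, comp_2012 = 430, 605         # service component commands
comp_growth = (comp_2012 - comp_2007) / comp_2007
print(f"Service components: {comp_growth:.0%}")                        # ~41%, "more than 40 percent"
```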
Across the service component commands, the Air Force components account for the majority of the mission and headquarters-support costs. Air Force officials explained that some of their service component commands have assigned forces and that the higher costs at these commands reflect support for the military service’s organize, train, and equip mission. While DOD has taken some steps to review the combatant commands’ size and structure and to identify the commands’ resources, DOD’s processes have four primary weaknesses that challenge its ability to make informed decisions: (1) the absence of a comprehensive, periodic review of the size and structure of the combatant commands, (2) inconsistent use of personnel management systems to identify and track assigned personnel across the combatant commands, (3) lack of visibility by the combatant commands and Joint Staff over authorized manpower and personnel at the service component commands, and (4) lack of transparent information identifying each combatant command’s personnel or mission and headquarters-support funding in the military departments’ budget documents for operation and maintenance. Our prior work on strategic human capital management found that high-performing organizations periodically reevaluate their human capital practices and use complete and reliable data to help achieve their missions and ensure resources are properly matched to the needs of today’s environment. Without regularly assessing the size and structure of the combatant commands and without complete information on all of the resources supporting the combatant commands, DOD cannot ensure that the combatant commands are properly sized and structured to meet their assigned missions and cannot ensure that commands are managing resources efficiently.
Recognizing that there has been significant growth in the size of the combatant commands since 2001 due to increases in their assigned missions, DOD has taken some steps to slow the growth in command personnel and associated mission and headquarters-support costs. In November 2007, to improve the links between mission, manpower requirements, and resource decisions, the Joint Staff was tasked by the Deputy Secretary of Defense with reviewing the authorized positions supporting the combatant commands. The review resulted in the establishment of baselines in the number of major DOD headquarters activity positions at each of the geographic and functional combatant commands that could be adjusted based only on the approval of new missions. However, these baselines apply only to positions performing major DOD headquarters activity functions, and our prior work found that DOD’s major headquarters activity data is not always complete and reliable. In addition, as part of the Secretary of Defense’s 2010 efficiency initiative, the combatant commands, along with other organizations within the department, were asked to identify efficiencies in headquarters and administrative functions, support activities, and other overhead. Specifically, the combatant commands were directed to perform organizational assessments to identify any disconnects between the commands’ priorities and their allocation of resources. Based on these assessments and the direction of the Secretary of Defense, the combatant commands reduced seven Standing Joint Force Headquarters to two global Standing Joint Force Headquarters, decreased their reliance on individual augmentees at two commands, and consolidated joint task forces at several commands. These changes eliminated a total of about 530 authorized positions and about 470 temporary personnel. In addition, some commands, such as U.S. European Command and U.S.
Northern Command, were directed to reduce personnel and consolidate staff functions to better align their available resources with their current missions. Other commands, however, such as U.S. Africa Command, did not make specific reductions as part of the Secretary of Defense’s efficiency initiative. In 2011, DOD announced that it was studying the regional structure of the combatant commands, estimating that it could save $900 million from fiscal years 2014 through 2017 by considering alternatives to the current construct of regional geographic combatant commands. To achieve the estimated cost savings, the Joint Staff considered several alternatives to the current structure that involved merging geographic combatant commands. However, the Joint Staff has since reviewed and rejected the proposed options, in part because they believed that merging commands would not achieve the estimated savings. According to Joint Staff officials, DOD has reduced funding across all of the geographic and functional commands by about $881 million during fiscal years 2014 through 2018, as part of the President’s budget request for fiscal year 2014. As part of the reductions, officials stated the department plans to reduce civilian positions at the combatant commands and Joint Staff by approximately 400 positions over 5 years through fiscal year 2018. DOD officials stated that they would continue to seek additional savings at the combatant commands in any subsequent directed reviews. DOD has an ongoing process to assess the combatant commands’ requests for additional positions, but does not periodically evaluate whether authorized positions at the combatant commands are still needed to support their assigned missions. Chairman of the Joint Chiefs of Staff Instruction 1001.01A, Joint Manpower and Personnel Program, outlines a process for determining and validating requirements for additional manpower at the combatant commands.
According to the instruction, requests for additional positions should be mission driven, supported by the combatant commands’ priorities, and based on studies of the capabilities and readiness of the commands’ personnel. Furthermore, the combatant commands should consider other options prior to requesting additional positions, such as internally realigning personnel, using internal funding for contract positions, and utilizing temporary personnel. As part of their request, the combatant commands may also rely on manpower studies to support their requests for additional positions. For example, U.S. Africa Command and U.S. Pacific Command are currently undergoing manpower studies to review their size and structure as part of DOD’s emphasis on the Asia-Pacific and Middle East regions of the world. According to Chairman of the Joint Chiefs of Staff Instruction 1001.01A, requests for additional positions are to be drafted by the commands and submitted to the Director of the Joint Staff. A team made up of representatives from the Joint Staff and the military services is then convened to evaluate the request, based on the command’s mission drivers, capability gaps, internal offsets, and manpower assessments. This team makes recommendations to the Director of the Joint Staff and the operations deputies of the military services, who decide whether or not to endorse the request for additional positions to support each command’s mission. If the request for additional positions is endorsed, the authorized positions are initially documented on the combatant command’s joint table of distribution and the positions are evaluated, along with DOD’s other resource requirements, to determine whether or not they will be funded. The combatant commands may also submit requests to the Joint Staff for minor changes in their authorized structure to meet changing missions as long as they do not affect the total number of authorized positions at the command.
Figure 7 describes the process for reviewing and validating proposed increases in authorized positions at the combatant commands. While DOD and some military services have policies on manpower management, we found that the Chairman of the Joint Chiefs of Staff Instruction 1001.01A does not specify a process for reviewing the combatant commands’ size and structure and focuses on requests for additional positions or nominal changes in authorized positions. Specifically, some military service regulations that guide manpower requirements at the service component commands require manpower to be periodically evaluated to ensure it still meets assigned missions. For example, Army Regulation 570-4, Manpower and Equipment Control, Manpower Management, suggests that Army components’ manpower requirements be reevaluated by Army commanders and agency heads every two to five years, and optimally every three years. In addition, DOD Directive 1100.4, Guidance for Manpower Management, states that manpower policies, procedures, and structures should be periodically evaluated to ensure efficient and effective use of manpower resources. In contrast, Chairman of the Joint Chiefs of Staff Instruction 1001.01A suggests that manpower requirements at the combatant commands be set based on the projection of workload over three years, but it has no provisions for reevaluating this determination. DOD officials confirmed that there is no periodic evaluation of the commands’ authorized positions, in part because there is no process in place to review authorized positions when there is a change in roles or missions. Our prior work on strategic human capital management found that high-performing organizations stay alert to emerging mission demands and remain open to reevaluating their human capital practices in light of their demonstrated successes or failures in achieving the organization’s strategic objectives.
In addition, Chairman of the Joint Chiefs of Staff Instruction 1001.01A does not address personnel associated with contracted services. As previously stated, DOD Instruction 1100.22, Policy and Procedures for Determining Workforce Mix, requires DOD’s workforce, which includes military and civilian manpower and contractor support, to be structured to execute missions at a low-to-moderate level of risk. The purpose of DOD Instruction 1100.22, among other things, is to establish policy, assign responsibilities, and prescribe procedures for determining the appropriate mix of manpower within the department. While DOD is aware of the growth in missions and authorized positions at the combatant commands since 2001 and has undertaken some efforts to manage and assess the size and structure of the combatant commands, these efforts did not constitute a comprehensive and periodic bottom-up review of the combatant commands’ total workforce. Without a comprehensive, periodic evaluation of the commands’ authorized positions, DOD will not be able to ensure that the combatant commands are properly sized and structured to meet their assigned missions or ensure that the commands identify opportunities for managing personnel resources more efficiently. DOD has an electronic system to document and review information about authorized positions and assigned military and civilian personnel at the combatant commands, but the commands are not consistently inputting complete information on all assigned personnel. All of the combatant commands that we reviewed use to some extent the Electronic Joint Manpower and Personnel System (e-JMAPS) to manage their commands’ manpower and personnel; however, we found differences across the commands in how they use the system to manage their assigned personnel.
DOD has identified e-JMAPS as the system of record to document the combatant commands’ organizational structure, and according to Chairman of the Joint Chiefs of Staff Instruction 1001.01A, Joint Manpower and Personnel Program, the system should be used to track the manpower and personnel required to meet the combatant commands’ assigned missions. The instruction states that e-JMAPS should provide visibility over joint personnel by allowing the Joint Staff and combatant commands to maintain, review, modify, and report all personnel actions in the system, to include changes in authorized positions or updates to personnel arriving at or departing from the command. In January 2012, the Vice Director of the Joint Staff issued a memo identifying e-JMAPS as the authoritative data source for DOD and for congressional inquiries of joint personnel, stating that e-JMAPS must accurately reflect the manpower and personnel allocated to joint organizations, such as the combatant commands, to provide senior leaders with the necessary data to support decision making in a fiscally constrained environment. According to Standards for Internal Control in the Federal Government, policies, procedures, and mechanisms to effectively manage an organization—including accurate and timely documentation of an organization’s transactions and resources and effective management of an organization’s workforce—are important factors in enabling an organization to improve accountability and achieve its missions. Our review found that the commands vary in the types of personnel that each enters into e-JMAPS and that some commands exclude certain personnel from the system when managing personnel who support the command. All of the combatant commands we reviewed said that they use e-JMAPS to manage and track authorized military and civilian positions within the command, and where appropriate, other temporary personnel.
To fulfill temporary or short-duration mission requirements, additional personnel—such as activated reservists, civilian overhires, and interagency personnel—may be needed to support the commands’ authorized manpower. However, the commands varied in their use of e-JMAPS to track these additional personnel because Chairman of the Joint Chiefs of Staff Instruction 1001.01A does not clearly state that temporary personnel, such as civilian overhires and activated reservists, should be accounted for in e-JMAPS, resulting in the differences in what is tracked by the commands. While U.S. Northern Command and U.S. Southern Command track their civilian overhires and activated reservists in e-JMAPS, U.S. European Command does not, and U.S. Pacific Command tracks only activated reservists in e-JMAPS. In addition, officials at most of the combatant commands that we reviewed noted that they do not account for temporary personnel, such as interagency personnel, in e-JMAPS, and that they primarily use the system to manage personnel filling authorized positions at the command, which may not include all command personnel. For example, during the course of our review, U.S. European Command officials identified approximately 172 civilian overhires and activated reservists supporting the command in fiscal year 2012 that are not accounted for in e-JMAPS. U.S. Africa Command is the only command that inputs all assigned personnel, to include civilian overhires, activated reservists, and interagency personnel, into e-JMAPS regardless of whether they are filling an authorized position, reflecting about 250 additional personnel at the command in fiscal year 2012. Our review also found that four of the five geographic combatant commands do not account for personnel performing contract services in e-JMAPS.
According to DOD officials, personnel performing contract services are not required to be accounted for in e-JMAPS and those personnel would be included in the costs reported for contract services. As part of a department-wide plan to account for contractor services, DOD has begun efforts to collect contractor-manpower data directly from contractors, but DOD does not expect to fully collect data on personnel performing contract services until fiscal year 2016. Furthermore, according to Joint Staff officials, the combatant commands do not always input personnel information in a timely manner and civilian personnel may not be tracked as diligently in e-JMAPS as military personnel. While Joint Staff officials stated that the accuracy of personnel data in e-JMAPS has improved, there is no specific guidance requiring the combatant commands to periodically review and update data on personnel assigned to the command to ensure that data in e-JMAPS is accurate and up-to-date. According to some combatant command officials, command staffs input personnel information in e-JMAPS when personnel arrive at the command. However, our review confirmed that there are differences across the combatant commands in how often they update and review personnel information to ensure its accuracy, and officials at one command confirmed that while e-JMAPS is their primary personnel management system, they also rely on military service personnel systems to track personnel because the service systems are more accurate and capture more personnel information than e-JMAPS. While the combatant commands use e-JMAPS to manage some of their assigned personnel and review personnel data periodically, there is no DOD guidance requiring that all personnel supporting the commands be tracked in e-JMAPS or that reviews of personnel occur within specific timeframes to ensure assigned personnel data is accurate. 
Without guidance to require complete and accurate information on all personnel supporting the combatant commands, and the consistent and timely review of assigned personnel data in e-JMAPS, DOD and the combatant commands cannot be assured that e-JMAPS will provide comprehensive data to inform their personnel decisions. In addition to not having complete information on the personnel assigned to the combatant commands, DOD and the combatant commands do not have oversight or visibility over authorized manpower or the number of assigned personnel at the service component commands. As stated previously, the service component commands often fulfill dual roles: organizing, training, and equipping assigned service-specific forces while also assisting the combatant commands in their employment during military operations. Some service component commands, such as U.S. Air Forces in Europe-Air Forces Africa, manage large numbers of assigned forces and operational units, while others, such as Marine Forces South, manage few, if any, assigned forces. While these service component commands provide support to the combatant commands, they use service-specific personnel management systems to account for their authorized manpower and personnel, and DOD does not have a formal process to gather this information. A Chairman of the Joint Chiefs of Staff publication identifies the importance of having reliable data on all personnel within a geographic combatant command’s area of responsibility for visibility of personnel and for effective planning. Further, our previous work has highlighted the need for agencies to have valid, reliable data and to be aware of the size of their workforce, its deployment across the organization, and the knowledge, skills, and abilities needed for the agency to pursue its mission.
Even though the combatant commands rely on the service component commands’ personnel to support their missions and operational requirements, they do not have oversight or visibility into the service component commands’ authorized manpower or how the components determine the size and structure of their staff to support the combatant commands’ missions. Based on our analysis of data that we gathered, in fiscal year 2012 there were 7,795 authorized positions at the headquarters of the service component commands, which was more than double the 3,817 authorized positions at the headquarters of the combatant commands. Moreover, the service component commands are generally structured to perform staff functions that are similar to those of the combatant commands, such as collecting intelligence, coordinating operations, performing strategic planning and policy, and supporting communications. For example, at U.S. Pacific Command, there are about 650 authorized positions dedicated to gathering, analyzing, and performing intelligence support, while there are about 175 additional authorized positions within U.S. Pacific Command’s service component commands dedicated to the same staff function. According to DOD officials, these positions may not necessarily be redundant, because headquarters personnel at service component commands with large numbers of assigned forces are more likely to be focused on the components’ organize, train, and equip function than on solely supporting the combatant commands’ missions. However, given the similarities in mission requirements and staff functions at the combatant and service component commands, it is important for the combatant commands to have visibility over the service component commands’ authorized manpower so that they can determine whether these similar functions are necessary or duplicative.
Moreover, the combatant commands do not have complete information on personnel assigned to their service component commands. Officials at the combatant commands and the Joint Staff stated that they do not have visibility over personnel at the service component commands or access to the service-specific personnel management systems that the service component commands use. If these officials need information to determine whether personnel at the service component commands can support the combatant commands’ mission requirements, they have to request it from the service component commands. For example, several combatant commands we spoke with did not identify any specific processes that they use to regularly gather information on personnel at their service components, while officials at U.S. Africa Command stated that they had only recently begun requesting this information on a monthly basis. However, as part of the process for validating new manpower requirements, the combatant commands are required to identify whether the functions and tasks for which they are requesting additional positions can be fulfilled by personnel at the service component commands. Without access to the service-specific personnel data or a process to regularly gather personnel information, it is unclear how this validation process can occur expeditiously. According to a Joint Staff official, officials in that office discuss the personnel and capabilities available within the service component commands when reviewing the combatant commands’ requests for additional positions, but they also do not have direct access to the service component commands’ personnel data systems to review the personnel assigned to the service component commands’ headquarters staff.
Without a formal process to gather information on the authorized manpower and assigned personnel at the service component commands, the combatant commands may not have the visibility that is necessary to appropriately size themselves to meet their assigned missions and are at risk of unnecessarily duplicating functions between the combatant commands and their service component commands. Each military department annually submits budget documents for operation and maintenance to Congress—including the total authorized military positions, civilian and contractor full-time equivalents, and the funding required to support the missions of the combatant commands for which it serves as the combatant command support agent—but these documents do not provide transparency into the authorized positions, the full-time equivalents, or the funding directed to each combatant command. According to DOD Directive 5100.03, Support of the Headquarters of the Combatant and Subordinate Unified Commands, the military departments are responsible for funding the mission and headquarters-support costs of the combatant commands and their subordinate unified commands. Also, volume 2A, chapter 1 of DOD’s Financial Management Regulation 7000.14-R states that the military departments must ensure adequate visibility over the resources of combatant command-directed missions and other costs for each command. DOD guidance for the submission of the President’s budget justification materials for Fiscal Year 2013, which include the military departments’ budget documents for operation and maintenance, also states that components should report details on contractor manpower/full-time equivalents in addition to military, civilian, and reserve manpower data.
According to Standards for Internal Control in the Federal Government, reliable financial reporting, including reports on budget execution, financial statements, and other reports for internal and external use, is important for determining whether agencies’ objectives are achieved. The military departments’ budget documents for operation and maintenance identify the overall authorized military positions, civilian and contractor full-time equivalents, and mission and headquarters support funding to support the combatant commands, but do not provide details on authorized positions, full-time equivalents, or mission and headquarters-support costs by command. For example, the Air Force’s fiscal year 2013 budget document shows that the combatant commands it supports have about 6,550 authorized military positions and civilian and contractor full-time equivalents, but it does not separate the data to indicate the number of authorized positions or full-time equivalents at each of the five combatant commands that it supports. Similarly, the Army’s comparable budget document shows about 3,200 total authorized military positions and civilian and contractor full-time equivalents at all the commands that it supports, but does not separate the data to display authorized positions and full-time equivalents at each of the three commands that it is responsible for. In addition, while the military departments’ budget documents provide information on the total funding that all the combatant commands receive for expenses such as civilian pay, contract services, travel, transportation and other supply costs, they do not separate these expenses and display them for each command. 
For example, the Air Force’s fiscal year 2013 budget document noted a lump sum of about $285 million for civilian pay for all five combatant commands that the Air Force supports, and the Army’s comparable budget document showed a total of about $38 million for travel costs for all of the commands that it supports. Neither of the military departments’ budget documents displayed this cost data for each individual combatant command. In reviewing DOD budget and policy documents, we found that the military departments, as the combatant commands’ support agents, do not provide detailed information for each combatant command in their budget documents for operation and maintenance because DOD’s Financial Management Regulation does not require the military departments to identify individual combatant commands’ authorized military positions and civilian and contractor full-time equivalents or the mission and headquarters-support funding required for civilian pay, contract services, travel, and other transportation and supply costs. Without detailed information identifying each combatant command’s authorized positions, full-time equivalents, and mission and headquarters-support funding, decision makers within DOD and Congress may not have complete and accurate data to conduct oversight of the combatant commands’ resources. Given the substantial increase in authorized positions and mission and headquarters-support costs at the combatant commands and the evolving security challenges facing DOD, effective management and oversight of the combatant commands’ resources is essential as the department balances limited resources with future defense priorities. If DOD performed a comprehensive, periodic evaluation of the combatant commands’ authorized positions, that review would help the department to efficiently manage the combatant commands in its efforts to meet the goals and priorities of the 2012 strategic guidance.
Moreover, if DOD had complete information on all the authorized manpower and personnel assigned to support the combatant commands and service component commands, the department would have additional visibility into the universe of manpower and personnel dedicated to supporting the combatant commands’ assigned missions. This information would aid DOD officials in decisions on requests for additional manpower, reducing the potential for overlap and duplication of functions. Further, detailed information identifying the authorized military positions, civilian and contractor full-time equivalents, and mission and headquarters-support funding that each combatant command receives would help decision makers in DOD and Congress to balance resource priorities in a fiscally challenging environment. As the department realigns itself to address new challenges, full awareness of the combatant commands’ authorized manpower, assigned personnel, and mission and headquarters-support costs would help the department to provide congressional decision makers with the information needed for effective oversight and help ensure the efficient use of resources. We recommend that the Secretary of Defense take the following four actions. To ensure that the geographic combatant commands are properly sized to meet their assigned missions and to improve the transparency of the commands’ authorized manpower, assigned personnel, and mission and headquarters-support costs, we recommend that the Secretary of Defense direct: The Chairman of the Joint Chiefs of Staff to revise Chairman of the Joint Chiefs of Staff Instruction 1001.01A to require a comprehensive, periodic evaluation of whether the size and structure of the combatant commands meet assigned missions.
The Chairman of the Joint Chiefs of Staff to revise Chairman of the Joint Chiefs of Staff Instruction 1001.01A to require the combatant commands to identify, manage, and track all personnel, including temporary personnel such as civilian overhires and activated reservists, in e-JMAPS and to identify specific guidelines and timeframes for the combatant commands to consistently input and review assigned personnel in e-JMAPS. The Chairman of the Joint Chiefs of Staff, in coordination with the combatant commanders and the secretaries of the military departments, to develop and implement a formal process to gather information on authorized manpower and assigned personnel at the service component commands. The Under Secretary of Defense (Comptroller) to revise volume 2A, chapter 1 of DOD’s Financial Management Regulation 7000.14-R to require the military departments, in their annual budget documents for operation and maintenance, to identify the authorized military positions and civilian and contractor full-time equivalents at each combatant command and provide detailed information on funding required by each command for mission and headquarters support, such as civilian pay, contract services, travel, and supplies. In written comments on a draft of this report, DOD concurred with three of our four recommendations and did not concur with one recommendation. DOD’s comments are reprinted in their entirety in appendix IX. DOD also provided technical comments, which we incorporated into the report as appropriate. DOD did not concur with our first recommendation that the Secretary of Defense require the Chairman of the Joint Chiefs of Staff to revise Chairman of the Joint Chiefs of Staff Instruction 1001.01A, Joint Manpower and Personnel Program, to require a comprehensive, periodic evaluation of whether the size and structure of the combatant commands meet assigned missions.
DOD stated that the combatant commands have been baselined twice since 2008, and that the commands have already been reduced during previous budget reviews. We describe in our report several actions taken by DOD to manage the growth in personnel and costs at the combatant commands, including establishing manpower baselines and identifying manpower and personnel reductions. However, as stated in our report, these actions do not constitute a comprehensive, periodic, bottom-up review of the combatant commands’ total workforce in part because DOD’s actions have not included all authorized positions at the combatant commands. For example, as noted in our report, the baseline levels established for the combatant commands apply only to positions in major DOD headquarters activities, and our prior work has found that the data on such headquarters positions is incomplete and not always reliable. Furthermore, not all commands have been included in previous efficiency initiatives. The department also noted that any periodic review of the combatant commands’ size and structure could only be triggered by review of the mission of the command, and stated that requiring a periodic review was not appropriate for inclusion in Chairman of the Joint Chiefs of Staff Instruction 1001.01A. However, the department’s response does not fully explain why the Instruction should not require periodic reviews to ensure that the resources meet constantly evolving missions, and we continue to believe that institutionalizing a periodic evaluation of all authorized positions would help to systematically align manpower with those missions and add rigor to the requirements process.
The department concurred with three of our recommendations that the Secretary of Defense: (1) direct the Chairman of the Joint Chiefs of Staff to revise Chairman of the Joint Chiefs of Staff Instruction 1001.01A to require the combatant commands to identify, manage, and track all personnel, including temporary personnel such as civilian overhires and activated reservists, in e-JMAPS and identify specific guidelines and timeframes for the combatant commands to consistently input and review personnel data in the system; (2) direct the Chairman of the Joint Chiefs of Staff, in coordination with the combatant commanders and secretaries of the military departments, to develop and implement a formal process to gather information on authorized manpower and assigned personnel at the service component commands; and (3) direct the Under Secretary of Defense (Comptroller) to revise volume 2A, chapter 1 of DOD’s Financial Management Regulation 7000.14-R to require the military departments, in their annual budget documents for operation and maintenance, to identify the authorized military positions and civilian and contractor full-time equivalents at each combatant command and provide detailed information on funding required by each command for mission and headquarters support, such as civilian pay, contract services, travel, and supplies. In its response to our recommendations, DOD noted that it plans to issue guidance to require all DOD components to identify, manage, and track all personnel data, including temporary personnel like civilian overhires and activated reservists, in e-JMAPS. The planned guidance will also identify specific guidelines and timeframes for DOD organizations to consistently input and review personnel data in e-JMAPS.
DOD agreed with our last recommendation regarding DOD’s Financial Management Regulation, but requested that we revise the language to require the military departments to capture or delineate the type of civilians, such as general schedule, foreign service nationals/locally employed staff, or principal staff assistants, provided by the military services within each combatant command. DOD’s response also indicated that the military services suggested the creation of another budget exhibit to capture contract and full-time equivalent data in lieu of the current depiction in their annual budget documents for operation and maintenance. We did not modify our recommendation because, in our view, our recommended revision to DOD’s Financial Management Regulation reflects our findings and captures the information needed to improve visibility over resources devoted to each combatant command, which is now unavailable. Taking additional steps to require more detailed reporting, such as delineating the type of civilians authorized, would be at the department’s discretion but could help to further transparency and improve oversight. We are sending a copy of this report to the Secretary of Defense, the Chairman of the Joint Chiefs of Staff, and the secretaries of the military departments. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X. We conducted this work in response to direction from the congressional committees to review the resources of the combatant commands. 
This report (1) identifies the trends in the resources devoted to the Department of Defense’s (DOD) geographic combatant commands and their service component commands, and (2) assesses the extent to which DOD has processes in place to manage and oversee the resources of the combatant commands. To conduct this work and address our objectives, we identified sources of information within DOD that would provide data on the resources at the geographic combatant commands, to include their subordinate unified commands and other activities, and corresponding service component commands. We focused our review on five of the geographic combatant commands within the department: U.S. Africa Command; U.S. European Command; U.S. Northern Command; U.S. Pacific Command; and U.S. Southern Command. Our review excluded U.S. Central Command and its corresponding service component commands due to their responsibilities to support ongoing military operations in Afghanistan during the past several years, which would have inhibited uniform comparisons across the commands. To identify trends in the resources devoted to DOD’s geographic combatant commands, to include their subordinate unified commands and other activities, and their service component commands, we obtained and analyzed available data on authorized positions and actual assigned personnel (military, civilian, and contractor), as well as data on operation and maintenance obligations, from each of the five geographic combatant commands and their corresponding service component commands from fiscal years 2001 through 2012. We focused our review on operation and maintenance obligations—as these obligations reflect the primary mission and headquarters-support costs of the combatant commands, their subordinate unified commands and other activities, and corresponding service component commands—to include the costs for civilian personnel, contract services, travel, and equipment, among others.
Our review excluded obligations of operation and maintenance funding for DOD’s overseas contingency operations that are not part of DOD’s base budget. Since historical data were unavailable in some cases, we limited our analysis of trends to authorized military and civilian positions at the combatant commands from fiscal years 2001 through 2012 and authorized military and civilian positions at the service component commands from fiscal years 2008 through 2012. Because of similar data limitations, we limited our analysis of trends in operation and maintenance obligations at the combatant commands and service component commands to fiscal years 2007 through 2012. To assess the reliability of the data, we interviewed DOD officials and analyzed relevant manpower and financial management documentation to ensure that the authorized positions and data on operation and maintenance obligations that the commands provided were tied to mission and headquarters support. We also incorporated data reliability questions into our data collection instruments and compared the multiple data sets received from DOD components against each other to ensure that there was consistency in the data that the commands provided. We determined that the data were sufficiently reliable for our purposes. To determine the extent to which DOD has processes in place to manage and oversee the resources of the combatant commands, we obtained and analyzed documentary and testimonial evidence from DOD, the military departments, the Joint Staff, and the combatant commands and their subordinate unified commands on the policies, procedures, and systems used to manage command resources. We interviewed officials and obtained documentation on the policies, procedures, and systems used for determining and validating the commands’ manpower requirements. We also interviewed officials and obtained documentation on any steps DOD had taken or planned to take to reexamine the size and structure of the combatant commands.
We obtained documentation on the systems used to track the commands’ authorized manpower and assigned military and civilian personnel, and contractor full-time equivalents, and also interviewed officials on how often assigned personnel within the combatant commands, subordinate unified commands, and other activities are reviewed to ensure that the data are accurate and up to date. In addition, we reviewed relevant documentation and interviewed officials from the Joint Staff, geographic combatant commands, and service component commands on their processes for sharing information on command authorized manpower and assigned personnel. We also obtained and analyzed data included in the military departments’ budget exhibits for operation and maintenance detailing the combatant commands’ authorized positions and mission and headquarters-support funding. We interviewed officials at, or where appropriate, obtained documentation from, the organizations listed below:

Office of the Secretary of Defense
  Office of the Under Secretary of Defense (Comptroller)
Joint Staff
  Manpower and Personnel Directorate
  Force Structure, Resources, and Assessment Directorate
  Strategic Plans and Policy Directorate
Department of the Air Force
  Office of the Secretary of the Air Force, Manpower and Personnel
  Office of the Secretary of the Air Force, Financial Management
  Pacific Air Forces
Department of the Army
  Office of the Assistant Secretary of the Army for Financial Management and Comptroller, Army Budget Office
  Office of the Assistant Secretary of the Army, Manpower and Reserve Affairs, Training Readiness and Mobilization
  Office of the Deputy Chief of Staff for Personnel G-1, Plans and
  Office of the Deputy Chief of Staff for Operations and Plans G-3/5/7; Operations and Plans, Force Management Directorate
  Office of the Deputy Chief of Staff for Operations and Plans G-3/5/7; Strategy, Plans, and Policy Directorate
  Office of the Deputy Chief of Staff for Programs G-8, Program
  U.S. Army Force Management Support Agency
  Army Pacific
Department of the Navy
  Office of the Assistant Secretary of the Navy (Manpower and Reserve Affairs)
  Office of the Assistant Secretary of the Navy (Financial Management and Comptroller), Office of Budget
  Office of the Deputy Chief of Naval Operations (Manpower, Personnel, Education, and Training)
  Office of the Deputy Chief of Naval Operations (Integration of Capabilities and Resources)
  Fleet Forces Command
  Headquarters, U.S. Marine Corps
  Navy Pacific Fleet
  Marine Forces, Pacific
  Marine Forces, South
Unified Combatant Commands and Subordinate Unified Commands
  U.S. Africa Command
  U.S. European Command
  U.S. Northern Command
  U.S. Pacific Command
  U.S. Special Operations Command
  Special Operations Command, Pacific
  Special Operations Command, South
  U.S. Southern Command

We conducted this performance audit from May 2012 to May 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: Resources at U.S. Africa Command and its Service Component Commands

Other includes authorized military and civilian positions in Special Operations Command Africa and security cooperation organizations. Mission and headquarters-support costs reflect obligations for operation and maintenance and are represented in constant fiscal year 2012 dollars. These costs include civilian pay, contract services, and travel, among other costs.
The mission and headquarters-support costs for AFRICOM intelligence support, security cooperation organizations, and some costs for Special Operations Command Africa are programmed and budgeted for by other organizations, and those costs are not reflected in this appendix. Other includes mission and headquarters-support costs for Special Operations Command Africa, Operation Enduring Freedom Trans Sahara, and Special Operations Command and Control Element Horn of Africa. While Operation Enduring Freedom Trans Sahara is a contingency operation, its costs are funded out of DOD’s base budget. Air Forces Africa was disestablished in April 2012, but reported some mission and headquarters-support costs prior to its disestablishment. The Navy and Air Force each have one service component command that supports both AFRICOM and U.S. European Command. Authorized military and civilian positions and mission and headquarters-support costs for these two service component commands are represented in Appendix IV.

Appendix IV: Resources at U.S. European Command and its Service Component Commands

Other includes authorized military and civilian positions in Special Operations Command Europe, security cooperation organizations, the Commander’s Communications Activities, and other organizations that have since been disestablished. Mission and headquarters-support costs reflect obligations for operation and maintenance and are represented in constant fiscal year 2012 dollars. These costs include civilian pay, contract services, travel, and equipment, among other costs. EUCOM could not distinguish which mission and headquarters-support costs were specific to headquarters or other organizations from fiscal years 2001 through 2007, so these costs are not broken out across these fiscal years. The mission and headquarters-support costs for EUCOM intelligence support, security cooperation organizations, and some costs for Special Operations Command Europe are programmed and budgeted for by other organizations, and those costs are not reflected in this appendix.
The combatant commands have subordinate unified commands, joint task forces, and other activities, each with their own staff, which support the combatant commands in conducting their operational missions.

Appendix V: Resources at U.S. Northern Command and its Service Component Commands

Mission and headquarters-support costs reflect obligations for operation and maintenance and are represented in constant fiscal year 2012 dollars. These costs include civilian pay, contract services, and travel, among other costs. The mission and headquarters-support costs for NORTHCOM intelligence support and security cooperation organizations are programmed and budgeted for by other organizations, and those costs are not reflected in this appendix. Other includes mission and headquarters-support costs for Joint Task Force Alaska, Joint Task Force North, Joint Task Force Civil Support, and Joint Task Force Headquarters National Capital Region. According to Navy officials, Fleet Forces Command has military personnel dedicated to support NORTHCOM, but no dedicated civilian support, and its mission and headquarters-support costs primarily consist of travel costs that cannot be distinguished from its other costs. As a result, Navy officials said they could not provide mission and headquarters-support costs for Fleet Forces Command. The combatant commands have subordinate unified commands, joint task forces, and other activities, each with their own staff, which support the combatant commands in conducting their operational missions.

Appendix VI: Resources at U.S. Pacific Command and its Service Component Commands

Other includes authorized military and civilian positions in Special Operations Command Pacific, security cooperation organizations, U.S. Forces Korea, Joint Prisoner of War/Missing in Action Accounting Command, U.S. Forces Japan, Alaskan Command, Joint Interagency Task Force West, Center for Excellence in Disaster Management, U.S. PACOM Representative to Guam, and other organizations that have since been disestablished.
According to PACOM officials, a portion of the command’s authorized military and civilian positions support the unique mission of the Joint Prisoner of War/Missing in Action Accounting Command to account for all Americans missing as a result of past conflicts. In fiscal year 2012, the Joint Prisoner of War/Missing in Action Accounting Command accounted for approximately 15 percent, or 511, of PACOM’s total authorized military and civilian positions. Mission and headquarters-support costs reflect obligations for operation and maintenance and are represented in constant fiscal year 2012 dollars. These costs include civilian pay, contract services, travel, and equipment, among other costs. DOD was unable to provide obligations for PACOM prior to fiscal year 2007. The mission and headquarters-support costs for PACOM intelligence support, security cooperation organizations, U.S. Forces Korea, and some costs for Special Operations Command Pacific are programmed and budgeted for by other organizations, and those costs are not reflected in this appendix.

Appendix VII: Resources at U.S. Southern Command and its Service Component Commands

Other includes authorized military and civilian positions in Special Operations Command South, security cooperation organizations, Joint Interagency Task Force South, and Joint Task Force Bravo. The mission and headquarters-support costs for SOUTHCOM intelligence support, security cooperation organizations, and some costs for Special Operations Command South are programmed and budgeted for by other organizations, and those costs are not reflected in this appendix. Other includes mission and headquarters-support costs for Special Operations Command South, Joint Interagency Task Force South, and costs for the 7th Special Forces Group for travel, supplies, and transportation of equipment. Appendix VIII contains information presented in Figure 1 in noninteractive format. In addition to the contact named above, key contributors to this report include Marie A. Mak (Assistant Director), Richard K.
Geiger, Cynthia L. Grant, Oscar W. Mardis, Tobin J. McMurdie, Meghan C. Perez, Carol D. Petersen, Richard S. Powelson, John Van Schaik, Michael C. Shaughnessy, Amie M. Steele, and Sabrina C. Streagle.
To perform its missions around the world, DOD operates geographic combatant commands, each with thousands of personnel. In response to direction from the congressional committees to review the resources of the combatant commands, GAO (1) identified the trends in the resources devoted to DOD's geographic combatant commands and their service component commands, and (2) assessed the extent that DOD has processes in place to manage and oversee the resources of the combatant commands. For this review, GAO obtained and analyzed data on resources, including authorized positions and mission and headquarters-support costs, for five regional combatant commands and their service component commands, excluding U.S. Central Command. GAO also interviewed officials regarding the commands' manpower and personnel policies and procedures for reporting resources. GAO's analysis of resources devoted to the Department of Defense's (DOD) geographic combatant commands shows that authorized military and civilian positions and mission and headquarters-support costs have grown considerably over the last decade due to the addition of two new commands and increases in authorized positions at theater special operations commands. Data provided by the commands show that authorized military and civilian positions increased by about 50 percent from fiscal years 2001 through 2012, to about 10,100 authorized positions. In addition, mission and headquarters-support costs at the combatant commands more than doubled from fiscal years 2007 through 2012, to about $1.1 billion. Both authorized military and civilian positions and mission and headquarters-support costs at the service component commands supporting the combatant commands also increased. Data on the number of personnel performing contract services across the combatant commands and service component commands varied or were unavailable, and thus trends could not be identified. 
DOD has taken some steps to manage the combatant commands' resources, but its processes to review the size of and oversee the commands have four primary weaknesses that challenge the department's ability to make informed decisions. DOD considers the combatant commands' requests for additional positions, but it does not periodically evaluate the commands' authorized positions to ensure they are still needed to meet the commands' assigned missions. DOD tracks some assigned personnel; however, not all personnel supporting the commands are included in DOD's personnel management system, and reviews of assigned personnel vary by command. The service component commands support both service and combatant command missions. However, the Joint Staff and combatant commands lack visibility and oversight over the authorized manpower and personnel at the service component commands to determine whether functions at the combatant commands can be fulfilled by service component command personnel. Each military department submits annual budget documents for operation and maintenance to inform Congress of total authorized positions, full-time equivalents, and mission and headquarters-support funding for all combatant commands that it supports. However, these documents do not provide transparency into the resources directed to each combatant command. GAO's work on strategic human capital management found that high-performing organizations periodically reevaluate their human capital practices and use complete and reliable data to help achieve their missions and ensure resources are properly matched to the needs of today's environment. Until DOD effectively manages the resources of the combatant commands, it may be difficult to ensure that the commands are properly sized to meet their assigned missions, or to identify opportunities to carry out those missions efficiently. 
GAO recommends that DOD: require a periodic evaluation of the combatant commands' size and structure; use existing systems to manage and track all assigned personnel; develop a process to gather information on authorized manpower and assigned personnel at the service component commands; and require information in the budget on authorized positions, full-time equivalents, and funding for each combatant command. DOD nonconcurred with GAO's first recommendation, but GAO believes it is still needed to add rigor to the manpower requirements process. DOD concurred with GAO's three other recommendations.
Most Medicare beneficiaries elect to enroll in Part B insurance, which helps pay for certain physician, outpatient hospital, laboratory, and other services; DME, such as oxygen, wheelchairs, hospital beds, and walkers; prosthetics and orthotics; and certain supplies. Medicare, under Part B, pays for most DMEPOS based on a series of state-specific or regional-specific fee schedules. Under the schedules, Medicare pays 80 percent, and the beneficiary pays the balance, of either the actual charge submitted by the supplier or the fee schedule amount, whichever is less. To review and process DMEPOS claims, CMS contracts with four insurance companies, known as DME regional carriers. The DME regional carriers review and pay DMEPOS claims submitted by outpatient providers and suppliers on behalf of beneficiaries residing in specific regions of the country. CMS contracts with Palmetto Government Benefits Administrators to serve as the National Supplier Clearinghouse. In fiscal year 2004, NSC received $11.4 million for these activities, and for fiscal year 2005, its approved budget was $11.5 million. Palmetto also serves as the DME regional carrier for Region C. In addition, Palmetto serves as the Statistical Analysis Durable Medical Equipment Regional Carrier, which analyzes claims and reports to the DME regional carriers and CMS on trends in DMEPOS payment and areas of potential fraud. Medicare’s 21 supplier standards were introduced primarily to deter individuals intent on committing fraud from entering the program and to safeguard Medicare beneficiaries by ensuring that suppliers were qualified. The 21 standards apply to a variety of business practices and establish certain requirements. (See app. II for a list of the 21 standards.) For example, the standards require suppliers to have a physical facility on an appropriate site that is accessible to beneficiaries and to CMS, with stated business hours clearly posted. 
CMS established the requirement for having an appropriate physical facility in December 2000 after investigators discovered fraudulent suppliers without fixed locations claiming vans or station wagons as their place of business or using mail drop boxes to receive Medicare payments for items they billed but never delivered. Among other things, the standards also require suppliers to: comply with applicable federal and state regulatory requirements, including state licensure, when providing DMEPOS items or services; maintain inventory on site or off site, or available through valid contracts with other companies not excluded from doing business with the federal government or its health care programs; and obtain comprehensive liability insurance. The 21 supplier standards also prohibit certain practices. For example, one standard generally prohibits suppliers from using telephone calls to solicit new business, because the Social Security Act prohibits this type of marketing to Medicare beneficiaries. NSC verifies compliance with the supplier standards primarily during enrollment and reenrollment, through on-site inspections and desk reviews conducted by NSC analysts. (App. II lists the standards and how NSC verifies them during enrollment and reenrollment.) 
For example, the on-site inspections are used to check compliance with the standards, such as whether the supplier: has a physical facility on an appropriate site that is accessible to beneficiaries and to CMS, with a clearly visible sign with hours posted; has its own inventory in stock on site, off site at another location, or has a contract with another company for the purchase of inventory; maintains records that document delivery of items to beneficiaries and information provided to beneficiaries on warranties, including how repairs and exchanges will be handled, and how to contact the supplier in case of questions or problems; and has a written beneficiary complaint resolution policy and maintains records on beneficiary complaints and their resolution. NSC’s analysts are expected to follow procedures to review information provided by the on-site inspection and take other steps to verify suppliers’ compliance with the standards. For example, when on site, the inspectors are expected to check that the supplier has all the valid occupation and business licenses required by its state and has a comprehensive liability insurance policy. The NSC analyst is expected to check that the supplier has all the state licenses that it would need to provide the items it disclosed in its application. The NSC analyst also is expected to contact the insurance underwriter to ensure that the supplier’s policy is valid, and the post office to make sure the supplier’s address is listed. NSC also has a procedure to match data from its supplier database with computerized lists maintained by the federal government to ensure that supply company owners are not prohibited from participating in federal health care programs or debarred from federal contracting. NSC does not specifically verify adherence to 4 of the 21 standards at enrollment and reenrollment, because violations would generally be apparent through its verification of other standards. 
For example, the standard that requires suppliers to furnish NSC with complete and accurate information on the application and notify NSC of any changes within 30 days is verified through checking the accuracy of the suppliers’ disclosures of information for other standards—such as ownership and the appropriateness of the physical facility. The majority of on-site inspections are conducted by more than 380 field representatives of Overland Solutions, Inc. (OSI), a company that performs this work as a subcontractor to NSC. In addition, NSC uses its own personnel, who are located in six cities, to conduct on-site inspections. NSC and OSI conducted over 20,000 on-site inspections in fiscal year 2004. In performing their reviews, the site inspectors follow certain procedures. NSC requires that site inspectors arrive unannounced for any inspection. Before the inspection, NSC provides the inspectors with briefing information on the supplier, including information on whether the supplier is enrolling or reenrolling and the type of state licenses to verify. While on site, inspectors are expected to take photographs of the supplier’s sign with its business name, posted hours of operation, complete inventory in stock, and facility. NSC also expects site inspectors to obtain copies of relevant documents, such as state licenses, comprehensive liability insurance, contracts with companies for inventory, and contracts for the service and maintenance of DME. As long as suppliers can demonstrate that they comply with the standards and have not been excluded from participating in any federal health care program, NSC must enroll or reenroll them in Medicare. Enrolled suppliers are issued a Medicare billing number. If NSC discovers that a new applicant or enrolled supplier is not in compliance with any of the 21 supplier standards, NSC can deny the application or, with CMS’s approval, revoke the supplier’s billing number. 
Suppliers whose applications have been denied or whose numbers have been revoked can submit a plan to NSC to correct the noncompliance or appeal the denial or revocation by requesting a hearing or both. If a supplier requests a hearing, the first level of appeal is conducted by a carrier hearing officer who was not involved in the original determination. The supplier can submit new information to address the compliance problems identified by NSC. If dissatisfied with the carrier hearing officer’s ruling, either NSC or the supplier can request a review by an administrative law judge, which became the second level of appeal as of December 8, 2004. Prior to that date, second level appeal hearings were conducted by a CMS review official. At both levels of the hearing process, if the supplier can demonstrate that it is currently in compliance with the standards, the supplier will be given a billing number. NSC’s Supplier Audit and Compliance Unit (SACU) also has responsibility to help verify suppliers’ compliance with the 21 standards and identify fraudulent activity. The SACU supervises NSC’s site inspectors and oversees the OSI on-site inspections. It also analyzes supplier billing and enrollment patterns. Based on billing or other irregularities, the SACU can help NSC identify suppliers for additional on-site inspections. For example, the SACU might discover that several new suppliers are owned by the same individuals as other companies that are under investigation for fraudulent billing. Based on this information, the SACU could target the new suppliers for additional on-site inspections or refer the suppliers for investigation by federal law enforcement, such as the OIG and the Federal Bureau of Investigation (FBI). NSC’s verification procedures have weaknesses that leave the Medicare program without assurance that suppliers billing the program are meeting the 21 standards, and thus, are qualified and legitimate. 
NSC’s procedures to verify state licenses have gaps that have allowed suppliers to be paid for DMEPOS items they are not licensed to supply in their states. In part, this is because CMS has not set requirements for a stronger licensure verification effort. Further, although on-site inspections play a key role in verifying suppliers’ compliance with the 21 standards, we estimate that NSC failed to conduct more than 600 required on-site inspections, and its inspection procedures have limitations. NSC does not have an effective means of identifying suppliers that violate the standard to have appropriate state licensure for the items they provide to beneficiaries. This is partly because CMS’s requirements are inadequate to assure an effective process and partly because NSC does not have effective procedures that are consistently followed. To determine whether it needs to verify a supplier’s license, NSC relies on the information the supplier provides—in enrollment or reenrollment applications—regarding the items or services the supplier intends to provide to Medicare beneficiaries. Suppliers are required to certify on their applications that they will notify CMS of any changes to the information they provided on the form. However, if the supplier fills out the application incorrectly or dishonestly and does not provide a license during an on-site inspection, NSC would not verify whether the supplier has all the licenses needed in its state. We also found that NSC did not consistently resolve discrepancies or omissions in the information provided by suppliers—such as not forwarding a copy of a needed state license—before issuing billing numbers to suppliers. Further, even though suppliers may change the items they supply, CMS’s contract requires NSC to verify licensure only during enrollment and does not require verification at any later time, such as during reenrollment. 
Thus, even if a supplier begins to bill for items that require a state license and discloses this information during reenrollment, CMS does not require NSC to check the supplier’s state licenses. Further, CMS does not require NSC to recheck suppliers prior to reenrollment to ensure that the supplier’s license has not lapsed. Finally, CMS has not required NSC to verify licensure after enrollment by routinely comparing a supplier’s actual billing history against the DMEPOS items and services originally disclosed on the supplier’s application. Without such a check, CMS lacks assurance that suppliers are billing only for items they disclosed to NSC and for which NSC has verified a license. As a result of these gaps, Medicare paid suppliers when NSC had not verified their licenses, including some suppliers that lacked the appropriate license. As table 1 shows, by analyzing 2004 DMEPOS claims data, we found 121 suppliers in Florida, Louisiana, and Texas that were each paid at least $1,000 by Medicare for oxygen services, even though they should not have billed for them. These suppliers either had not informed NSC that they would be billing for oxygen, did not provide NSC with the appropriate state license to verify, or both. Therefore, these suppliers were not in compliance with the 21 standards. In total, these suppliers were paid almost $6 million by Medicare. When we checked with the three states, we found that 22 of these suppliers did not have a license to provide oxygen in their states in 2004. These unlicensed suppliers were paid $231,730 in 2004 by Medicare for oxygen on behalf of beneficiaries. In addition, we verified licensure with the respective states for a sample of the suppliers that had disclosed to NSC their intention to bill for oxygen and had been paid at least $1,000 by Medicare for this service. Through this process, we identified 7 more suppliers that did not have the required state license to provide oxygen services in 2004. 
Similarly, in 2003 and 2004, Medicare paid prosthetics and custom-fabricated orthotics claims submitted by suppliers that did not both disclose to NSC that they would supply these items and provide a copy of their licenses. Thus, they should not have been allowed to bill Medicare for these items. We found 28 suppliers in Illinois and Texas that were paid a total of about $197,000 in 2004 for prosthetics and custom-fabricated orthotics even though they should not have been billing for these items. Routinely comparing suppliers’ billing to the information they report on the enrollment or reenrollment application regarding the items and services they intend to provide might have avoided some of the improper prosthetics and orthotics payments that occurred in Florida. In this state, Medicare payments for prosthetics and custom-fabricated orthotics inexplicably tripled in 1 year—from about $32.5 million in 2003 to almost $107.0 million in 2004. As figure 1 shows, most of the increase was in payments to suppliers that did not disclose to NSC that they intended to provide these items. In 2004, the 73 suppliers that did not disclose the intention to provide prosthetics or orthotics were paid more than $56.3 million. These 73 suppliers were paid more than the amount paid to the 262 suppliers that had informed NSC that they would provide these items. The DME regional carrier has established about $16.3 million as overpaid to 70 of the 73 suppliers, but has collected less than $2.3 million plus interest payments of $60,820, as of April 21, 2005. Investigative staff at the Region C DME regional carrier informed us that at least 46 of the 73 suppliers are currently under active investigation for health care fraud. When NSC reviewed each case we identified of suppliers that billed for oxygen or prosthetics and custom-fabricated orthotics without disclosing the intention to do so, its analysis revealed several types of problems with its processing of suppliers’ applications. 
For example, in Florida, for one case that we identified, the supplier had not correctly filled out the application to disclose the intention of providing prosthetics and custom-fabricated orthotics but had given NSC a copy of its state license. In two cases, the supplier disclosed the intention of providing prosthetics and custom-fabricated orthotics, but did not give NSC a copy of its state license to review. Despite the discrepancies in the information provided by suppliers, NSC enrolled or reenrolled these suppliers. In three cases, the supplier disclosed the intention to provide prosthetics and custom-fabricated orthotics and gave NSC a copy of its license, but NSC staff did not update their information appropriately in the supplier database. During this engagement, we discussed with CMS the weaknesses in NSC’s verification of suppliers’ licenses. CMS officials acknowledged that the law requires CMS to restrict Medicare payment of prosthetics and certain custom-fabricated orthotics to those supplied by a qualified practitioner and fabricated by a qualified practitioner or supplier. The law defines qualified practitioners as a physician; an orthotist or a prosthetist who is licensed, certified, or has credentials and qualifications approved by the Secretary of Health and Human Services; or a qualified physical therapist or occupational therapist. The law defines qualified suppliers as entities accredited by the American Board of Certification in Orthotics and Prosthetics, Inc., the Board for Orthotist/Prosthetist Certification, or a program approved by the Secretary of Health and Human Services. CMS is in the process of developing proposed regulations that would further define qualified practitioners and suppliers of prosthetics and certain custom-fabricated orthotics on a national level. 
As an interim step, as of October 3, 2005, CMS will be requiring its DME regional carriers to put edits in their payment systems to deny claims for prosthetics and certain custom-fabricated orthotics submitted by any suppliers that are not qualified, or do not have qualified practitioners on staff, in the states that currently require licensure or certification. CMS indicated that these two actions should help address the problem of unlicensed suppliers billing for prosthetics and custom-fabricated orthotics. However, if NSC does not resolve discrepancies in the information provided by suppliers so that its supplier database is accurate, the DME regional carriers will not have accurate information for approving or denying prosthetics and certain custom-fabricated orthotics claims. Further, the agency has not restricted payments for any other items that require state licensure—such as oxygen. Nor has it taken action to prevent payments to suppliers that have violated the standard for accurate disclosure of application information by billing for items they have not disclosed to NSC—whether or not a license is required in their states to provide these items. CMS has recently added another requirement for verifying licensure and other certifications. During this evaluation, we pointed out to CMS staff that the agency’s contract with NSC was not specific about whether a license close to its expiration date when submitted to NSC should be rechecked to ensure the supplier had renewed it. CMS was developing a new statement of work for NSC, and as a result of our discussion, the new statement of work requires NSC to follow up to ensure renewal of licenses, insurance policies, and certifications submitted within 60 days of expiration. NSC has not conducted the routine on-site inspections to verify supplier standards for all the DMEPOS suppliers that CMS requires it to inspect. 
We estimate that 605 enrolled suppliers that NSC was required to inspect never received an on-site inspection. We also estimate that NSC conducted on-site inspections for another 3,079 suppliers, but did not properly record the date of these inspections in its supplier database. As a result, the database—with inaccurate or missing information—is not a reliable management tool for CMS to use in overseeing NSC’s activities. NSC may not have conducted all of the required on-site inspections because of its procedures for determining which suppliers to inspect. According to NSC’s written procedures, NSC staff use discretion to decide if an on-site inspection should be conducted prior to the enrollment or reenrollment of a supplier. In contrast, while CMS’s contract with NSC exempts certain types of suppliers from routine on-site inspection, it does not state that NSC should use its discretion to choose whether to inspect the nonexempt suppliers. CMS staff informed us that NSC is required to inspect suppliers on initial enrollment and reenrollment, with some exceptions, and they were unaware that NSC was not conducting all of the required on-site inspections. Furthermore, because CMS’s statements of work in its fiscal year 2004 and 2005 contracts with NSC were not clear about what constitutes a supplier chain, NSC was not inspecting other suppliers that could be eligible for on-site inspections. NSC did not have to inspect supplier chains with 25 or more locations. However, the contract did not clearly state whether all 25 locations in the chain had to have active billing numbers. As a result, NSC was exempting some suppliers in chains that currently have fewer than 25 locations with active billing numbers. We found 484 active suppliers included in chains with 24 or fewer locations with active billing numbers as of May 31, 2004. Of these 484 active suppliers, 257 did not have any on-site inspections recorded. 
For example, NSC indicated to us that no on-site inspection was needed for Responsive Home Health Care, because it was included in a chain with 50 locations. However, it was part of a chain with 24 active locations, one location whose billing number had been revoked, and 25 inactive locations. We recently informed CMS that its contract language on chain suppliers was not clear, because CMS was developing a new statement of work for the next NSC contract. As a result, CMS revised its contract language for fiscal year 2006 to clarify that a chain consisted of 25 or more active supplier locations. Even if NSC had conducted all of its on-site inspections, the contractor’s procedures for conducting them limit their effectiveness as a means of verifying compliance with the supplier standards in several ways. Thus, the procedures cannot assure suppliers’ legitimacy and qualifications to serve beneficiaries. First, NSC does not explicitly require its site inspectors to review a specific number of suppliers’ beneficiary files during their inspections. NSC told us that inspectors reviewed beneficiary files, but OSI told us that its inspectors were not required to review the contents of any beneficiary files. Without reviewing beneficiary files, it is unclear how inspectors can verify suppliers’ compliance with the standard that requires suppliers to maintain several forms of documentation—including proof of delivery and evidence of their efforts to educate beneficiaries on how to use the equipment. Further, reviewing beneficiary files is also helpful to provide support beyond a written supplier policy that other standards are being met. For example, a record of equipment maintenance is better proof that the supplier repairs equipment than a written policy alone. Reviewing beneficiary files can also enable an inspector to identify potentially fraudulent patterns of behavior and fabrications designed to cover up lack of compliance with the 21 standards. 
For example, NSC investigators told us that when many beneficiaries using one supplier have the same physician’s signature on certificates that are required by Medicare to affirm the medical necessity of certain DMEPOS items, this can be a sign of fraudulent certifications designed to falsify compliance with Medicare’s rules. The Region C DME regional carrier is currently investigating a group of suppliers using the same set of physicians on their certificates. Second, NSC does not routinely provide its site inspectors with the dollar amounts and specific DMEPOS items a supplier billed to Medicare. Knowing a supplier’s billing history would enable inspectors to determine whether the supplier’s submitted claims coincide with its inventory, invoices, delivery tickets, and other documentation in beneficiary files. When we accompanied NSC inspectors to the physical facilities of several suppliers about which NSC had suspicions—based on the suppliers’ billing patterns or their association with other companies under investigation— the site inspectors did not have data on the billing histories for the suppliers being inspected. As a result, the inspectors did not know what types and amounts of inventory, delivery tickets, or invoices they should expect to find. Third, neither CMS nor NSC explicitly requires the site inspectors to verify a supplier’s inventory when it is stored at, or purchased from, another location. The inventory standard does not preclude a supplier from storing inventory off site or relying on another supplier—even a competitor—to provide its inventory. However, when this occurs, without taking additional verification steps, NSC would not know whether the off-site inventory exists or whether the source of inventory is legitimate. According to the inventory standard, suppliers cannot contract with companies that are currently excluded from the Medicare program, any state health programs, or from any other federal procurement or nonprocurement programs. 
However, without investigating the companies that are cited as sources of inventory, NSC would not know if this standard was being met. NSC’s procedures suggest, but do not require, that its site inspectors verify off-site inventory locations. Because CMS does not require NSC to conduct verification of off-site inventory or an assessment of the company cited as the source of inventory, the current procedures do not fully verify the inventory standard. Inspecting off-site inventory or assessing the validity of inventory contracts can help pinpoint violations of the standard for inventory and can also identify potentially fraudulent activities. For instance, when NSC inspected an address of a company that a supplier gave as its source for inventory, it discovered an auto body shop at that address. In another instance, NSC found a vacant building at the address given as a supplier’s inventory source. These suppliers violated the standards for disclosing accurate information to NSC and for having inventory or a contract to procure it. Further, citing a nonexistent source of inventory suggests the possibility that these suppliers were engaging in fraud. Similarly, groups of suppliers under investigation for fraud in Houston in 2003 and 2004 were using the same company as their fictitious source of inventory. SACU investigators were able to identify other suppliers participating in the same fraud scheme because the suppliers claimed they were obtaining inventory from a source that was under investigation. Through examining sources of inventory, our investigators identified companies with questionable financial transactions or owners involved with suppliers engaged in potentially fraudulent billing. For example, we identified and investigated one distribution company in Florida that six suppliers had cited as one of their main sources of inventory. 
CMS had denied or revoked the billing numbers for the six suppliers, in part because they did not appear to have inventory, but five of them were able to obtain or regain their billing numbers after providing contracts for inventory from this distribution company. Our investigators found that the distribution company’s bank had filed 27 separate reports identifying cash withdrawals from company accounts in amounts ranging from $10,000 to more than $98,000 over a period of 20 months—almost $1 million in total. Such cash withdrawals are suspicious because they can indicate attempts to disguise illicit funds and make them more difficult to track. Even more suspicious, our investigators found that this distribution company did not appear to be an active business. Through on-site inspections conducted in March 2005, we found that two of the addresses given for it were vacant office/storage units and one was a custom woodworking shop. In June 2005, we investigated a fourth possible address for the company. This address had been leased by an individual who identified himself in leasing paperwork as being associated with a “Medical Equipment” business and was found to be a storage unit littered with debris and a pile of boxes, many of which were crushed and broken. The investigators saw no posted signs or activities that would indicate an active business. In addition, of the five suppliers currently reenrolled in Medicare that cited this source of inventory, three were under investigation in March 2005 by the Region C DME regional carrier’s fraud control unit. Out-of-cycle on-site inspections have been effective in identifying suppliers that are not complying with Medicare’s standards. 
For example, during the April 2004 hearing before the Senate Committee on Finance on the Medicare power wheelchair benefit, the attendees watched a video of law enforcement surveillance that showed individuals bringing office equipment and DMEPOS items into an office suite in order to appear to meet the standards for having an appropriate physical facility and inventory to pass an on-site inspection. Because the timing of enrollment and reenrollment inspections is predictable, a supplier intent on committing fraud can anticipate an enrollment on-site inspection and create the illusion of legitimacy, fully understanding that an inspector is not likely to return for 3 years. Out-of-cycle on-site inspections can be so valuable that we previously recommended that CMS direct NSC to routinely conduct them for suppliers suspected of billing improperly. CMS agreed with the recommendation and pointed out the number of out-of-cycle inspections that were being completed. In 2003, NSC conducted over 600 out-of-cycle inspections and found 306 DMEPOS suppliers not complying with Medicare’s standards. NSC continued this practice in fiscal year 2004, conducting over 400 out-of-cycle on-site inspections targeted specifically at high-volume suppliers that were not part of chains. CMS has also requested NSC to conduct out-of-cycle on-site inspections in fiscal year 2005. Nevertheless, NSC’s contract does not explicitly require it to conduct out-of-cycle on-site inspections. Although NSC has conducted out-of-cycle on-site inspections in the last several years, unless this activity becomes an explicit part of its contract, it could be curtailed at any time. We discussed our concerns about this with CMS staff writing the revised statement of work for a new contract that is scheduled to be awarded in December 2005. 
As a result, CMS included language in the revised statement of work that will explicitly require the contractor for NSC to conduct random, out-of-cycle on-site inspections as resources permit. However, the change in the statement of work does not require NSC to conduct a minimum number of out-of-cycle on-site inspections as a routine part of its activities. Medicare’s standards are currently too weak to be used effectively for screening DMEPOS suppliers that want to enroll in the program. The 21 standards focus on certain operational characteristics. However, they do not include standards related to supplier integrity and capability analogous to those that federal agencies generally apply to prospective contractors or those used by at least two state Medicaid programs for their suppliers. For example, federal agencies do not have to contract with companies that have demonstrated poor performance in the past. In contrast, CMS has reenrolled suppliers whose billing numbers have been revoked, after they have demonstrated compliance with the standards—no matter how many standards they had previously violated. We found cases of suppliers that had billed improperly and violated standards, reentered the program, and then began to bill improperly for other items. CMS is currently developing more specific guidance for applying some of its 21 standards. In addition, to implement provisions in the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA), CMS is introducing a competitive bidding process for DME, off-the-shelf orthotics, and supplies, and is developing quality standards that would supplement the existing ones. When implemented, these steps could help ensure that DMEPOS suppliers are legitimate businesses and qualified to bill Medicare. 
Although a federal agency primarily pays for items provided by DMEPOS suppliers, these businesses are not held to standards analogous to those that apply to companies that seek to contract with the federal government. Under federal procurement regulations, agencies are generally required to determine whether a potential contractor is “responsible”—that is, whether it has a satisfactory record of performance, integrity, and business ethics, as well as the financial, technical, and managerial ability to provide the specified products and services. Federal agencies can consider a contractor’s past performance as an indicator of future performance and require a disclosure of financial and management information to make their assessment. In addition, after a contract is awarded, federal agencies can terminate the contract for default or convenience. Further, for committing certain crimes or not meeting certain federal requirements, a company may be debarred from receiving federal contracts, generally for up to 3 years. Some state governments have requirements to ensure that Medicaid suppliers are responsible. For example, California’s Medicaid program requires DME suppliers to have the administrative and fiscal foundation to survive as a business, demonstrated by financial records, such as a business plan, bank statements, and contractual agreements. California state officials told us that a DME supplier in their state could not meet the definition of being an established business for the Medicaid program if it sold power wheelchairs out of a residence, as some Medicare DME suppliers have done. Similarly, Florida’s Medicaid program requires suppliers to provide evidence of being a viable, ongoing business. 
Florida also requires anyone with 5 percent or greater ownership, and the manager of the supplier, to be fingerprinted and undergo a criminal background investigation, because the state will not enroll suppliers with owners convicted of several types of crimes, such as health care fraud or patient abuse. In contrast, suppliers are not CMS contractors, and CMS’s standards do not require suppliers to demonstrate that they are responsible based on their financial, technical, and managerial ability, their integrity, and their past performance. As a result, suppliers that are not legitimate DMEPOS businesses have enrolled in Medicare and have been paid millions of dollars in improper payments without having to demonstrate that they have the ability and integrity to serve beneficiaries, as the following examples show. In sworn testimony before the Senate Committee on Finance in April 2004, a witness who pleaded guilty to fraud explained her part in a $25 million fraud scheme that she and a group of 19 others committed against the Medicare program. She explained how she was able to set up a sham company—Mercury Medical Supplies—with $3,000 and obtain a Medicare billing number, even though she had no prior experience, expertise, or discernible resources for providing DMEPOS items or services. From September 2000 to December 2001, when its billing number was revoked, Medicare paid Mercury Medical Supplies $1,158,482 for providing DMEPOS items that were falsely billed based on forged physicians’ prescriptions and were generally not supplied to beneficiaries. While the Medicare program paid Mercury Medical Supplies over $1 million without inquiring into its financial ability to supply DMEPOS items, one federal agency refused to award a $230,000 contract to a company with $32,500 in working capital, in part because the agency’s contracting officer did not think that the company was financially strong enough to fulfill the contractual obligations. 
Like Mercury Medical Supplies, All-Divine Health Services in Lufkin, Texas, was not a legitimate DMEPOS business but managed to enroll in Medicare in December 2002. NSC’s inspector noted on an initial site inspection report that the owner explained that she was awaiting inventory, which was why she had none in her storage area prior to enrollment in the Medicare program. Once enrolled, All-Divine Health Services began to bill for power wheelchairs, an item for which Medicare pays over $5,000. However, because of concerns about inappropriate power wheelchair billing, NSC conducted out-of-cycle on-site inspections of All-Divine and other power wheelchair suppliers in the area. The site inspector found evidence of potential fraud, such as altered certificates from physicians attesting to the beneficiaries’ medical need for the items to be supplied, as well as violations of Medicare’s standards. Following the out-of-cycle inspection, CMS found that All-Divine was in violation of four standards, because it lacked comprehensive liability insurance, lacked a state license to provide bedding, did not have adequate contracts for inventory, and did not have adequate provision to repair and service DME. All-Divine’s billing number was revoked effective August 6, 2003. After the owner pleaded guilty to conspiracy to commit health care fraud on June 25, 2004, her lawyer testified that she had not understood the intricacies of proper Medicare billing and had no experience managing a DMEPOS company. The owner told her lawyer that she did not think she was committing a crime, although she admitted purchasing paperwork certifying beneficiaries as needing power wheelchairs and then submitting claims on their behalf. Her lawyer also testified that the owner stated that her firm lacked the operational controls to ensure that beneficiaries actually received the power wheelchairs for which the company billed and was paid by Medicare. 
Before its billing number was revoked, All-Divine was paid over $1.8 million by the program, predominantly for power wheelchairs not provided as billed. While federal agencies, including CMS, may choose not to conduct business with companies that lack integrity or perform poorly, and may disqualify companies from competing for federal contracts, suppliers that have failed to comply with Medicare’s standards have not lost their billing privileges for any substantial length of time. Federal agencies can terminate contracts at their convenience or for default—which occurs when a contractor fails to perform the contract. For certain serious violations, contractors can be debarred from receiving any federal contract, generally for up to 3 years. Willful failure to perform the terms of a government contract is a basis for debarment. In addition, apart from debarment, agencies can refuse to offer new contracts to companies exhibiting previous performance problems or a lack of integrity. This may occur after conviction for criminal charges, but sometimes the refusals follow allegations of wrongdoing. For example, one agency refused to offer a new contract to a company that had allegedly provided false certifications in the past. Another agency used the results of criminal investigative reports as a basis for refusing to offer contracts to companies. Compared with Medicare, the Medicaid programs of California and Florida place more barriers in the way of reenrolling problematic suppliers. For example, California provisionally enrolls new Medicaid providers for 12 to 18 months. During this period, if the provider fails to meet state requirements, the state agency disenrolls the provider from Medicaid. In addition, if a provider fails to accurately disclose information, such as the ownership of the company, California can disenroll the provider from Medicaid and keep it from reenrolling for 3 years. 
The California Medicaid program denies applications from providers under investigation for criminal offenses. Florida will not reenroll suppliers that have been excluded from the program. When NSC identifies suppliers that violate Medicare’s standards, CMS may revoke their billing privileges. However, in contrast to California and Florida Medicaid, if a supplier can demonstrate compliance with the 21 standards, CMS readmits it into Medicare unless it has been otherwise excluded from participating in the program. DMEPOS suppliers that have their billing privileges revoked and then later reenter Medicare are not uncommon. We identified 1,038 DMEPOS suppliers that lost their billing privileges in 2003, generally for violating multiple standards. Of these suppliers, 192 were reenrolled in Medicare as of May 31, 2004, with the average period of suspension lasting about 3 months. None of these suppliers encountered any barrier to enrollment for violating the standards. Further, when some suppliers that had billed improperly because they were unlicensed reentered the program, they resumed improper billing for different types of items. See table 2 for two examples. According to NSC and CMS officials, strengthening the supplier standards by increasing their specificity is an important step in preventing enrollment of suppliers that are intent on committing fraud. NSC and CMS officials agreed that the inventory and physical facility standards are not specific enough. These standards do not specify the characteristics of an inventory, or the amount, type, or source of inventory that should be required for the items or services the supplier intends to provide to Medicare beneficiaries. According to these officials, the lack of specificity in the standards has allowed suppliers that were not legitimate companies to acquire Medicare billing numbers and then defraud the program. 
NSC and OIG officials investigating enrolled suppliers with potentially fraudulent billing reported that many had physical facilities not conducive to conducting a legitimate DMEPOS business. For example, these investigators have found multiple suppliers located in close proximity in small suites in the same building. In addition, they found suppliers in buildings that were not located where beneficiaries were likely to come and purchase DMEPOS items. The investigators also reported finding DMEPOS suppliers operating out of their houses and garages. These suppliers had few DMEPOS items in stock, but claimed that they had contracts for acquiring inventory. These documents sometimes lacked the usual elements of a contract, such as the clear signature of authorized individuals from both companies and the time period for the contract. Nevertheless, these suppliers met the current standards. In early 2004, based on NSC proposals, CMS drafted new guidance on the current supplier standards to make them more specific. For example, CMS added more details to describe what constituted a reasonable amount of inventory, the elements of an acceptable contract for inventory, and an appropriate physical facility from which to provide items and services to Medicare beneficiaries. As of June 2005, CMS had not issued the new guidance. According to an agency official, some of the revisions have been incorporated into a proposed regulation under review within the agency. The official told us that CMS plans to issue other changes through revisions of Medicare guidance manuals, once the proposed regulation had been issued. In addition to the new guidance, provisions of the MMA that require CMS to develop quality standards for DMEPOS suppliers and competitive bidding, when implemented, could enhance the agency’s ability to screen suppliers. 
The MMA requires CMS to develop quality standards for all DMEPOS suppliers and to select one or more independent accreditation organizations that will apply these standards to determine if suppliers are meeting them. CMS has not finished its development of the quality standards, so it is not clear whether the standards will incorporate requirements for suppliers to demonstrate that they have the integrity and capability to perform their functions analogous to the standards for federal contractors. In addition, the MMA requires CMS to establish competitive bidding among suppliers for DME, supplies, off-the-shelf orthotics, and enteral nutrients and related equipment and supplies in at least 10 of the largest metropolitan areas by 2007 and in 80 of these areas by 2009. The MMA will require suppliers chosen by competitive bidding to comply with the quality standards that are being developed for all DMEPOS suppliers as well as new financial standards to be specified by the Secretary. However, competitive bidding will be limited to certain DMEPOS items and localities, so not all Medicare DMEPOS suppliers will be held to the new financial standards. CMS anticipates issuing a proposed rule in the fall of 2005 on DME competitive bidding and on quality standards and accreditation and a final rule in 2006. CMS’s oversight has not been sufficient to determine whether NSC is meeting its responsibilities in screening, enrolling, and monitoring DMEPOS suppliers. CMS was unaware—until we informed the agency—that NSC had not conducted all required on-site inspections of suppliers. Furthermore, CMS did not know that, in contrast to its requirements, NSC’s procedures allow its staff to use discretion in selecting which suppliers received on-site inspections. In addition, CMS did not recognize gaps in NSC’s verification of suppliers’ state licenses, and as a result, Medicare paid suppliers whose licenses the contractor did not verify. 
During our review, we found weaknesses in the methods CMS uses to oversee its contractor that could lead to the agency not recognizing problems in the verification process. CMS evaluates NSC’s performance primarily through an annual inspection. During this inspection, CMS analyzes a small random sample of supplier files to determine, for instance, whether NSC is conducting on-site inspections, processing enrollment applications, and handling appeals of denied or revoked billing privileges in accordance with its requirements. The analysis of NSC’s supplier files is CMS’s most direct means of assessing NSC’s efforts to screen and enroll suppliers; however, we determined that CMS’s past practice of basing NSC’s performance on a sample selected from a single quarter of the year may not be adequate. NSC’s performance might differ during the quarters in which it was not reviewed. CMS also recognized problems with basing its review on a single quarter, and in October 2004 began to institute quarterly reviews of a sample of supplier files. However, the sample sizes CMS examines are too small, relative to the number of application files processed and other types of files reviewed, to serve as an adequate means of oversight. During fiscal year 2004, NSC processed more than 58,000 supplier applications for enrollment or reenrollment. To evaluate NSC’s efforts to enroll suppliers during its fiscal year 2004 inspection, CMS examined a sample of 10 approved supplier applications, as well as 10 denied and 10 returned applications. To evaluate NSC’s efforts to reenroll suppliers, CMS examined a sample of 20 approved reenrollments. If CMS uncovered any problems, it would need to select a much larger sample to determine if the problems were systemic, a step that is not indicated in the evaluation protocol. 
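The inadequacy of such small samples can be illustrated with simple probability. The sketch below assumes a simple random sample and a hypothetical problem rate (the 5 percent figure and the 95 percent confidence target are illustrative assumptions, not drawn from CMS or NSC data):

```python
import math

def detection_probability(sample_size: int, problem_rate: float) -> float:
    """Probability that a simple random sample contains at least one
    problem file, assuming problems occur independently at problem_rate."""
    return 1.0 - (1.0 - problem_rate) ** sample_size

def sample_size_for_detection(problem_rate: float, confidence: float) -> int:
    """Smallest sample size giving at least `confidence` probability of
    catching one or more problem files."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - problem_rate))

# CMS reviewed 10 approved applications out of more than 58,000 processed.
# With a hypothetical 5 percent error rate, a sample of 10 would contain
# no problem files about 60 percent of the time.
print(round(detection_probability(10, 0.05), 3))  # -> 0.401
print(sample_size_for_detection(0.05, 0.95))      # -> 59
```

Under these assumptions, a 10-file sample detects a 5 percent error rate only about 40 percent of the time; roughly 59 files would be needed for 95 percent confidence of seeing at least one problem.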
CMS’s evaluation of NSC’s performance is focused primarily on whether the suppliers’ applications are filled out and processed correctly—not whether NSC has conducted the required verification tasks thoroughly. For example, while NSC may have a supplier site inspection form with the boxes checked to indicate that a supplier is complying with various standards—such as the one to maintain documentation of delivery of items to beneficiaries—CMS cannot know from reviewing the form if the inspector checking that supplier actually examined any beneficiary files. CMS also oversees NSC by reviewing its monthly reports, but these reports do not provide information on the thoroughness of NSC’s screening and enrollment efforts. Instead, CMS reviews the monthly reports to monitor NSC’s workload—including the number of enrollment and reenrollment applications received, pending, approved, and returned; the timeliness in processing applications; the number of denials and revocations; and the timeliness with which NSC handles inquiries from suppliers. This monitoring is important to ensure that NSC is managing its workload, but does not inform CMS as to how well NSC performs these activities. Finally, while CMS has established performance goals in NSC’s contract related primarily to processing supplier applications and managing other aspects of NSC’s workload—such as handling inquiries—it has not established performance goals connected to the effectiveness of NSC’s screening or fraud prevention efforts. CMS uses both the annual inspection and the monthly reports to measure NSC’s performance against goals established in its contract. These goals are linked to timeliness in processing suppliers’ applications, appeals, and inquiries. 
For example, according to its contract, NSC must process 90 percent of all applications and reenrollments accurately within 60 calendar days of receipt and 99 percent of applications within 120 calendar days of receipt, process 90 percent of appeals accurately within 60 calendar days of receipt, and answer 85 percent of supplier telephone calls within the first 60 seconds. These performance measures do not indicate the success of NSC or its SACU in identifying noncompliant and fraudulent suppliers. Further, CMS’s contract requires NSC to maintain a SACU, but the contract does not establish outcomes expected from this unit. Similarly, in its annual inspection, CMS does not evaluate the SACU’s efforts—whether, for instance, the SACU has adequately educated suppliers, adequately supervised the quality of on-site inspections, or analyzed supplier enrollment and billing data so that NSC can identify suppliers for additional inspections. CMS is responsible for assuring that Medicare beneficiaries have access to the equipment, supplies, and services they need, and at the same time, for protecting the program from abusive billing and fraud. The supplier standards and NSC’s gatekeeping activities were intended to provide assurance that potential suppliers are qualified and would comply with Medicare’s rules. However, there is overwhelming evidence—in the form of criminal convictions, revocations, and recoveries—that the supplier enrollment processes and the standards are not strong enough to thoroughly protect the program from fraudulent entities. We believe that CMS must focus on strengthening the standards and overseeing the supplier enrollment process. It also needs to scrutinize suppliers more closely to ensure that they are responsible businesses, analogous to federal standards for evaluating potential contractors. 
CMS’s current effort to develop additional guidance on the standards and the development of quality standards for DMEPOS suppliers provide an opportunity for the agency to establish stronger requirements for potential and enrolled suppliers. Developing more rigorous quality standards that include an assessment of suppliers’ performance, integrity, and financial, managerial, and technical ability would help ensure that only qualified companies became suppliers. Suppliers whose previous performance was poor or that demonstrated a lack of integrity should not be allowed to quickly reenter the program. CMS also needs to provide more specific requirements in NSC’s contract so that the program’s policies will be consistently carried out. Finally, we believe that CMS has not adequately evaluated NSC’s activities to ensure that it is meeting all of its responsibilities and using all of the tools available to identify, and address, problem suppliers. The Congress should consider whether suppliers that have violated standards should have to wait a specified period of time from the date of revocation to have a billing number reissued. To improve the supplier enrollment process and oversight of NSC, we recommend that the Administrator of CMS take eight actions—five related to NSC’s efforts to verify DMEPOS suppliers’ compliance with the 21 standards, one related to the supplier standards, and two related to the agency’s oversight of NSC. We recommend that CMS: Starting in states where licensure is mandatory, require NSC to routinely check suppliers’ billing for oxygen, prosthetics, orthotics, and any other items requiring licensure, against the items the suppliers declared they are providing on applications. Where suppliers are billing for services not declared, take appropriate action to revoke the billing numbers of suppliers not complying with program requirements. 
Require NSC to provide information from suppliers’ billing histories to inspectors before they conduct on-site inspections to help them collect information to assess whether suppliers’ inventory or contracts to obtain inventory are congruent with the suppliers’ Medicare payments. When suppliers report having inventory that is primarily maintained off site or supplied through another company, require NSC to evaluate the legitimacy of the supply location or source and any related contracts. As part of the on-site inspections, require inspectors to review, and provide information to NSC analysts on the contents of, a minimum number of patient files to determine supplier adherence to standards for maintaining documentation of services and information provided to beneficiaries. Oversee NSC’s activities to ensure that it conducts on-site inspections of suppliers as required by CMS and maintains accurate data on the on-site inspections it conducts. Establish a minimum number of out-of-cycle on-site inspections in its contract that NSC must perform each year. Develop standards that incorporate requirements for suppliers to demonstrate that they have the integrity and capability to perform their functions analogous to the standards for federal contractors. Revise current evaluation procedures to fully assess the outcomes expected from the SACU’s activities and NSC’s adherence to contract requirements. In its written comments on a draft of this report, CMS generally concurred with our eight recommendations and cited actions it is taking to implement each recommendation. It also affirmed its commitment to protect beneficiaries and Medicare from fraud, waste, and abuse by ensuring that NSC only enrolled qualified suppliers and enforced the supplier standards. CMS agreed with our five recommendations related to improving NSC’s efforts to verify DMEPOS suppliers’ compliance with the 21 standards. 
In response to four of these recommendations, CMS stated that it has revised the statement of work for fiscal year 2006 to require NSC to: check suppliers’ licenses and liability insurance each year, rather than every 3 years at reenrollment, and compare suppliers’ billing histories to the licenses they provide at that time; provide on-site inspectors with the billing histories of DMEPOS suppliers; conduct site inspections of suppliers’ off-site inventory storage locations and of businesses that provide them with inventory through contracts; and conduct out-of-cycle inspections, the number of which CMS will manage based on NSC’s workload and budgetary constraints. In addition to the completed revisions, to address the other recommendation related to NSC’s efforts to verify suppliers’ compliance with the 21 standards, CMS indicated that it intends to further revise the statement of work to require site inspectors to review a minimum number of beneficiary files maintained by suppliers. CMS also agreed with our recommendation to develop standards for suppliers to ensure they have the integrity and capability to perform their functions analogous to the standards for federal contractors. In its response to that recommendation, CMS indicated that the quality standards the agency is developing for suppliers will improve its ability to deter health care fraud and abuse. The agency stated that it will publish a proposed rule to implement the standards in the fall of 2005 and expects to issue a final rule in 2006. Finally, to address the two recommendations on improving its oversight, CMS stated that it intends to more closely review NSC’s activities to ensure that the contractor conducts on-site inspections as required and maintains accurate data on these inspections. CMS also noted that it had expanded its oversight and evaluation procedures during fiscal year 2005 to include quarterly reviews of NSC and SACU enrollment functions. 
CMS’s written comments on a draft of this report are included in appendix III. CMS also provided technical comments, which we included as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Administrator of CMS, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (312) 220-7600 or aronovitzl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To evaluate the National Supplier Clearinghouse’s (NSC) efforts to verify suppliers’ compliance with the 21 standards, we conducted interviews, document reviews, field inspections, investigations, and data analysis. We interviewed the Centers for Medicare & Medicaid Services (CMS) officials that oversee NSC and NSC staff, assessed CMS’s contract statement of work for enrollment screening, and reviewed NSC’s written procedures to gain a better understanding of the procedures used. Through that assessment, we determined that its procedures to check licensure and conduct on-site inspections of suppliers were critical to verifying compliance with the standards and we focused our evaluation on these procedures. To better understand the on-site inspection process, we accompanied NSC officials as they conducted on-site inspections of 12 suppliers in Maryland during August 9 and 10, 2004. 
In addition, to test the effectiveness of the licensure verification, we analyzed Medicare durable medical equipment, orthotics, prosthetics, and supplies (DMEPOS) claims data for 2003 and 2004 from Florida, Illinois, Louisiana, and Texas and NSC’s active supplier data file to determine whether suppliers had the licenses necessary for items billed. We also tested whether all required on-site inspections had been conducted through an analysis of NSC’s active supplier data file and inspection procedures. To assess the reliability of the 2003 and 2004 claims from CMS and NSC’s supplier data files, we performed electronic testing of required data elements, reviewed existing information about the data and the systems that produced them, and interviewed CMS and NSC officials knowledgeable about the data. We determined that these data were sufficiently reliable for the purposes of this report. We also contacted Florida, Texas, and Louisiana to determine which of the suppliers that had not disclosed to NSC that they would be providing oxygen and were paid at least $1,000 for oxygen claims in 2004 actually had the needed state licenses. We also checked with these states to determine whether a small sample of suppliers that had disclosed the intention to bill for oxygen, and were paid at least $1,000 for oxygen claims in 2004, had the needed state licenses. For custom-fabricated orthotics and prosthetics, we were not able to confirm whether the suppliers that had not disclosed to NSC that they would be providing these items and were paid at least $1,000 for such claims in 2004 in Florida, Illinois, and Texas had the proper state licenses, because those states license individuals to be allowed to supply these items, not companies. To evaluate procedures for on-site inspections, we analyzed on-site inspection instructions and the standards and interviewed on-site inspectors and officials in NSC and Overland Solutions, Inc. 
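At its core, this licensure test is a data-matching step: compare the item categories a supplier was actually paid for against the categories it declared at enrollment (which is all NSC's check covers). The sketch below illustrates that logic only loosely; the category names, claim records, and the reuse of the report's $1,000 threshold are illustrative assumptions, not NSC's actual procedure:

```python
# Categories assumed, for illustration, to require a state license.
LICENSED_CATEGORIES = {"oxygen", "orthotics", "prosthetics"}

def undisclosed_licensed_billing(declared, paid_claims, threshold=1000):
    """Return licensed categories a supplier billed at least `threshold`
    dollars for without having declared them at enrollment.

    declared:    set of category names the supplier listed on its application
    paid_claims: iterable of (category, paid_amount) records
    """
    totals = {}
    for category, amount in paid_claims:
        totals[category] = totals.get(category, 0) + amount
    return sorted(
        c for c, total in totals.items()
        if c in LICENSED_CATEGORIES and c not in declared and total >= threshold
    )

# Hypothetical supplier: declared only wheelchairs, but was paid for oxygen.
declared = {"wheelchairs"}
claims = [("oxygen", 700), ("oxygen", 650), ("wheelchairs", 5000)]
print(undisclosed_licensed_billing(declared, claims))  # -> ['oxygen']
```

Suppliers flagged by such a check would be candidates for the license verification that the self-reporting process missed.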
We investigated two companies cited as sources of inventory by two groups of Florida and Texas suppliers that had their billing privileges denied or revoked, in part because of inventory issues, and also investigated those suppliers. To evaluate the adequacy of the 21 supplier standards, we compared them to the requirements for government contractors and those imposed on suppliers by the California and Florida Medicaid programs. In addition, we analyzed cases of revocations that had been appealed to CMS in 2004 to determine if weaknesses in the standards were leading to suppliers with questionable billing practices being reinstated in the program. We also obtained documentation on cases of suppliers that had defrauded Medicare and interviewed fraud inspectors at NSC and in the Department of Health and Human Services Office of Inspector General to develop insight into the problems that they saw with the 21 standards. We also interviewed NSC and CMS officials and individuals from the following organizations: the American Association for Homecare, the American Orthotic and Prosthetic Association, Hoveround, the National Association for Home Care and Hospice, the Power Mobility Coalition, and a representative from the National Supplier Clearinghouse’s Advisory Council. To evaluate CMS’s oversight of NSC, we considered the information we had gathered to answer the previous questions. We reviewed CMS’s written procedures used to evaluate NSC and other documents related to CMS’s oversight. We also discussed CMS’s oversight with CMS and NSC officials. Our work was conducted from June 2004 to September 2005 in accordance with generally accepted government auditing standards. Suppliers of durable medical equipment (DME), prosthetics, orthotics, and supplies must meet 21 standards in order to obtain and retain their Medicare billing privileges. The NSC is responsible for screening suppliers to ensure that they meet the standards. 
An abbreviated summary of the most recent version of these standards, which became effective December 11, 2000, is presented in table 3. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 requires CMS to develop quality standards that must be at least as stringent as current standards for all Medicare suppliers of DME, prosthetics, orthotics, and supplies. Supplier compliance with the quality standards will be determined by one or more designated independent accreditation organizations. In addition to the contact named above, Sheila K. Avruch, Assistant Director; Kevin Dietz; Cynthia Forbes; Krister Friday; Christine Hodakievic; Daniel Lee; Lisa Rogers; John Ryan; and Craig Winslow made key contributions to this report. Medicare: CMS’s Program Safeguards Did Not Deter Growth in Spending for Power Wheelchairs. GAO-05-43. Washington, D.C.: November 17, 2004. Medicare: Past Experience Can Guide Future Competitive Bidding for Medical Equipment and Supplies. GAO-04-765. Washington, D.C.: September 7, 2004. Medicare: CMS Did Not Control Rising Power Wheelchair Spending. GAO-04-716T. Washington, D.C.: April 28, 2004. Medicare: HCFA to Strengthen Medicare Provider Enrollment Significantly, but Implementation Behind Schedule. GAO-01-114R. Washington, D.C.: November 2, 2000.
In fiscal year 2004, the Centers for Medicare & Medicaid Services (CMS) estimated that Medicare improperly paid $900 million for durable medical equipment, prosthetics, orthotics, and supplies--in part due to fraud by suppliers. To deter such fraud, CMS contracts with the National Supplier Clearinghouse (NSC) to verify that suppliers meet 21 standards before they can bill Medicare. NSC verifies adherence to the standards through on-site inspections and document reviews. Recent prosecutions of fraudulent suppliers suggest that there may be weaknesses in NSC's efforts to screen suppliers or in the standards. In this report, GAO evaluated: 1) NSC's efforts to verify suppliers' compliance with the 21 standards, 2) the adequacy of the standards to screen suppliers, and 3) CMS's oversight of NSC's efforts. NSC's efforts to verify compliance with the 21 standards are insufficient because of weaknesses in two key screening procedures--checking state licensure and conducting on-site inspections. NSC's licensure check is ineffective because it relies on self-reported information about the items suppliers intend to provide to beneficiaries and does not match this against actual billing later. We found a total of 22 suppliers in Florida, Louisiana, and Texas that had each been paid at least $1,000 by Medicare in 2004 for providing oxygen services, but did not have the required state license. Further, more than half of the almost $107 million paid by Medicare for custom-fabricated orthotics and prosthetics in Florida in 2004 went to suppliers that had not had their licenses checked. At least 46 of these suppliers were under investigation for fraud as of April 2005. NSC's on-site inspections also have weaknesses that limit their effectiveness. We estimate that NSC did not conduct required on-site inspections of 605 suppliers. 
Further, when conducting on-site inspections, NSC does not require its inspectors to examine beneficiary files to assess whether suppliers are meeting the standard to maintain proof of delivery or check whether suppliers have a real source of inventory, as required by Medicare. Medicare's 21 standards are currently too weak to be used effectively to screen medical equipment suppliers. Although Medicare paid suppliers about $8.8 billion in fiscal year 2004, the program's 21 standards do not include measures related to supplier integrity and capability analogous to those that federal agencies generally apply to prospective contractors or those used by at least two state Medicaid programs for their suppliers. For example, in sworn testimony before the Committee on Finance in April 2004, an individual who pleaded guilty to Medicare fraud described how she was able to open a sham business with $3,000--despite lacking the experience and the financial, technical, and managerial resources to operate a legitimate supply company. If an agency finds a company does not meet federal contracting standards for integrity and capability, the agency may decline to award it a contract. If a contractor performs inadequately, the agency can terminate the contract. Further, agencies may disqualify a contractor from competing for other federal contracts. In addition, a California supplier that is disenrolled from Medicaid for failing to meet state requirements cannot reenroll for 3 years. In contrast, if a Medicare supplier can later demonstrate compliance with the 21 standards, CMS readmits it into the program. CMS's oversight has not been sufficient to determine whether NSC is meeting its responsibilities in screening and enrolling DMEPOS suppliers. For example, CMS was unaware--until we informed the agency--that NSC had not conducted all required on-site inspections for suppliers. 
Moreover, while CMS has established performance goals for NSC related primarily to processing applications, it has not established a method to evaluate NSC's success in identifying noncompliant and fraudulent suppliers and recommending that they be removed from the program.
Since the early 1980s, roughly half of the private-sector work force has participated in either a defined benefit (DB) retirement plan, commonly known as a pension, or a defined contribution (DC) plan, such as a 401(k) plan. In 2006, approximately 79 million—or about half of all workers—worked for an employer or union sponsoring either a DB or DC plan, and about 62 million workers participated in such a plan. Congress created IRAs, in part, to help those individuals not covered by a DB or DC plan save for retirement. Employees of small firms, for example, are unlikely to work for an employer that sponsors either a DB or DC retirement plan. Almost half of all U.S. private sector workers in 2006 were employed by firms with fewer than 100 employees, and only 1 in 4 of these workers worked for an employer sponsoring a retirement plan. Currently, there are several types of IRAs for individuals and for small employers and their employees. Congress created two types of employer-sponsored IRAs with fewer regulatory requirements than DB and DC plans to encourage small employers to offer IRAs to their employees: Savings Incentive Match Plans for Employees (SIMPLE). SIMPLE IRAs, available only to employers with 100 or fewer employees, allow eligible employees to direct a portion of their salary, within limits, to a SIMPLE IRA. Employers sponsoring SIMPLE IRAs must either match employee contributions up to 3 percent of each employee’s compensation or contribute 2 percent of salary to the SIMPLE IRAs of all employees making at least $5,000 for the year. Simplified Employee Pensions (SEP). SEP IRAs allow employers to make tax-deductible contributions to their own and each eligible employee’s traditional IRA at higher contribution limits than other IRAs. SEP IRAs do not permit employee contributions, and annual employer contributions are not mandatory. IRS and Labor share oversight responsibilities for IRAs.
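The two SIMPLE IRA employer contribution options described above can be compared with simple arithmetic. The figures below are purely illustrative (a hypothetical $40,000 salary), and the helper functions are a sketch rather than a model of the full tax rules; for instance, the matching contribution also cannot exceed what the employee actually defers, which the cap in `simple_match` approximates.

```python
def simple_match(salary, employee_deferral_pct, match_cap_pct=0.03):
    """Matching option: employer matches the employee's deferral,
    capped at match_cap_pct (3 percent) of compensation."""
    return salary * min(employee_deferral_pct, match_cap_pct)

def simple_nonelective(salary, rate=0.02):
    """Nonelective option: employer contributes `rate` (2 percent) of
    salary for every eligible employee, whether or not the employee
    defers anything."""
    return salary * rate

salary = 40_000
print(simple_match(salary, 0.05))   # employee defers 5%; match capped at 3% -> 1200.0
print(simple_nonelective(salary))   # 2% regardless of employee deferral -> 800.0
```

The comparison shows why the choice matters to a small employer: the matching option costs nothing for employees who do not participate, while the nonelective option is owed to every eligible employee.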
Labor’s Employee Benefits Security Administration (EBSA) enforces ERISA Title I, which specifies the standards for employer-sponsored retirement plans, including applicable fiduciary reporting and disclosure requirements. EBSA also oversees the fiduciary standards for employer-sponsored IRAs and seeks to ensure that fiduciaries, such as employers, operate their plans in the best interest of plan participants. While Labor does not have direct oversight of payroll-deduction IRA programs, it has provided “safe harbor” guidance to employers, which sets the conditions under which employers may offer payroll-deduction IRA programs without becoming subject to ERISA Title I requirements. For all types of IRAs, IRS enforces ERISA Title II, which provides tax benefits for retirement plan sponsors and participants and details participant eligibility, vesting, and funding requirements. IRS also enforces various tax rules for IRAs, including rules for eligibility, contributions, distributions, and rolling assets into IRAs or converting assets from a traditional IRA into a Roth IRA. (See table 1 for annual contribution limits for IRAs.) Labor and IRS work together to oversee IRA prohibited transactions. In general, prohibited transactions include any improper use of an IRA by the account holder or others. Labor generally has interpretive jurisdiction over prohibited transactions, and IRS has certain enforcement authority. Both ERISA and the Internal Revenue Code contain various statutory exemptions from the prohibited transaction rules, and Labor has authority to grant administrative exemptions and establish exemption procedures. Labor may grant administrative exemptions on a class or individual basis for a wide variety of proposed transactions in a plan. IRS has responsibility for imposing an excise tax on parties that engage in a prohibited transaction. (See fig. 1 for a description of IRS and Labor responsibilities regarding IRAs.)
Most assets flowing into IRAs come not from direct contributions but from transfers, or rollovers, of retirement assets from other retirement plans, including 401(k) plans. These rollovers allow individuals to preserve their retirement savings when they change jobs or retire. As shown in figure 2, from 1998 to 2004, more than 80 percent of funds flowing into IRAs came from rollovers, demonstrating that IRAs play a significantly smaller role in building retirement savings than in preserving them. In addition, IRA accounts with rollover assets were larger than those without rollover assets. For example, the median amount in a traditional IRA with rollover assets in 2007 was $61,000, while the median amount in a traditional IRA without rollover assets was $30,000. Since 1998, IRA assets have comprised the largest portion of the retirement market. As shown in figure 3, IRA assets in 2004 totaled about $3.5 trillion, compared to DC assets of $2.6 trillion and DB assets of $1.9 trillion. More households own traditional IRAs, which were the first IRAs established, than Roth IRAs or employer-sponsored IRAs. In 2007, nearly 33 percent of all households owned traditional IRAs, and about 15 percent owned Roth IRAs. In contrast, about 8 percent of households participated in employer-sponsored IRAs. Ownership of traditional and Roth IRAs is associated with higher education and income levels. In 2004, 59 percent of IRA households were headed by an individual with a college degree, and only about 3 percent were headed by an individual with no high school diploma. Over one-third of these IRA households earned $100,000 or more; about 2 percent earned less than $10,000. Households with IRAs also tend to own their homes. Research shows that higher levels of education and household income correlate with a greater propensity to save. Therefore, it is not surprising that IRA ownership increases as education and income levels increase.
However, despite the association of IRA ownership with higher incomes, data show that lower- and middle-income individuals also own IRAs. A study by the Congressional Budget Office (CBO) found that in 2003, 4 percent of workers earning $20,000 to $40,000 (in 1997 dollars) contributed to traditional IRAs and 3 percent of these workers contributed to Roth IRAs. In the same year, 7 percent of workers earning between $120,000 and $160,000 contributed to a traditional IRA and 8 percent contributed to a Roth IRA. The study also found that 33 percent of individuals earning $20,000 to $40,000 who contributed to a traditional IRA contributed the maximum amount allowed, and 35 percent made maximum contributions to their Roth IRAs. By contrast, 87 percent of individuals earning $120,000 to $160,000 who contributed to a traditional IRA made the maximum contribution, and 61 percent made the maximum contribution to their Roth IRAs. A study by the Investment Company Institute (ICI) that included data on contributions by IRA owners shows that more households with Roth IRAs or employer-sponsored IRAs contribute to their accounts than households with traditional IRAs. For example, more than half of households with Roth, SIMPLE, or Salary Reduction Simplified Employee Pension (SAR-SEP) IRAs contributed to their accounts in 2004, but less than one-third of households with traditional IRAs did so. This, again, may be partly attributed to the emerging role of traditional IRAs as a means to preserve rollover assets rather than build retirement savings. The ICI study also stated that the median household contribution to traditional IRAs was $2,300, compared to the median contribution to Roth IRAs of $3,000. The median contribution to SIMPLE and SAR-SEP IRAs was $5,000. The study noted that this difference may be related to higher contribution limits for employer-sponsored IRAs than for traditional and Roth IRAs.
According to experts and available government data, worker access to employer-sponsored and payroll-deduction IRAs appears limited. To address the issue of low retirement plan sponsorship among small employers, Congress created SEP and SIMPLE employer-sponsored IRAs. Labor issued a regulation under which an employer could maintain a payroll-deduction program for employees to contribute to traditional and Roth IRAs without the program being considered a pension plan under ERISA. Although employer-sponsored IRAs have few reporting requirements to encourage small employers to offer them, and payroll-deduction IRAs have none, worker access to these IRAs appears limited. Increased access to payroll-deduction IRAs could help many workers save for retirement, but several barriers, including costs to employers, may discourage employers from offering these IRAs to their employees. Retirement and savings experts offer several proposals to encourage employers to offer, and employees to participate in, payroll-deduction IRAs. The majority of employers with fewer than 100 employees do not offer an employer-sponsored retirement plan for their employees. In 2006, almost half of all U.S. private sector workers were employed by firms with fewer than 100 employees, and only 1 in 4 of these employees worked for an employer sponsoring a retirement plan. To address the issue of low retirement plan sponsorship among small employers, Congress created SIMPLE and SEP employer-sponsored IRAs with less burdensome reporting requirements than 401(k) plans to encourage their adoption by small employers. In addition, under a regulation issued by Labor, employers may also provide payroll-deduction IRA programs, which allow employees to contribute to traditional or Roth IRAs through payroll deductions, without the employer being considered a pension plan sponsor under ERISA or becoming subject to various ERISA fiduciary and reporting requirements.
In order to encourage their adoption, employer-sponsored and payroll-deduction IRAs offer a variety of features designed to appeal to small employers (see table 2). Data on the number of employers offering employer-sponsored IRAs and payroll-deduction IRAs are limited. However, based on available data, employee access to SIMPLE and SEP IRAs appears limited. Under ERISA Title I, there is no reporting requirement for SIMPLE IRAs, and there is an alternative method available for reporting employer-sponsored SEP IRAs. Payroll-deduction IRA programs are not subject to ERISA requirements for employer-sponsored retirement plans and have no reporting requirements. Because there are very limited reporting requirements for employer-sponsored IRAs and none for payroll-deduction IRAs, information on employers who offer these IRAs is limited, and we were unable to determine how many employers actually do so. For example, the Bureau of Labor Statistics provides some data on the percentage of employees participating in employer-sponsored IRAs, but no data on the percentage of employers sponsoring them. The Bureau of Labor Statistics reported that 8 percent of private sector workers in firms with fewer than 100 employees participated in a SIMPLE IRA in 2005, and 2 percent of these workers participated in a SEP IRA. An IRS evaluation of employer-filed W-2 forms estimated that 190,000 employers sponsored SIMPLE IRAs in 2004. IRS did not provide an estimate of the number of employers sponsoring SEP IRAs, and we were unable to determine how many employers make these IRAs available to employees. Few employers appear to offer their employees the opportunity to contribute to IRAs through payroll deductions, but data are insufficient to make this determination. Through payroll-deduction IRA programs, employees may contribute to traditional or Roth IRAs by having their employer withhold an amount determined by the employee and forward it directly to the employee’s IRA.
Although any employer can provide payroll-deduction IRAs to its employees, regardless of whether it offers another retirement plan, retirement and savings experts told us that very few employers do so. Because employers are not required to report this activity to the federal government, neither Labor nor IRS is able to determine how many employers offer payroll-deduction IRAs. According to experts and economics literature that we reviewed, individuals are more likely to save for retirement through payroll deductions than they are without them. Both SIMPLE IRAs and payroll-deduction IRA programs allow workers to contribute to their retirement through payroll deductions. Payroll deductions are a key feature in 401(k) and other DC plans. Economics literature that we reviewed identifies payroll deduction as a key factor in the success of 401(k) plans, and participation in these plans is much higher than in IRAs, which do not typically use payroll deduction. The Congressional Budget Office reported that 29 percent of all workers contributed to a DC plan in 2003—where payroll deductions are the norm—while only 7 percent of all workers contributed to an IRA. Saving for retirement in the workplace through payroll deductions helps workers save by providing a “commitment device” that makes automatic contributions to retirement savings before wages are spent. Such a commitment device helps some workers overcome a common tendency to procrastinate or to take no action to save, given the choices associated with investing or selecting a retirement savings vehicle. Payroll deductions allow workers to contribute to retirement savings automatically before wages are spent, relieving them of making ongoing decisions to save. According to Labor’s guidance on payroll-deduction IRAs and several experts we interviewed, individuals are more likely to save in IRAs through payroll deductions than they are without these automatic deposits.
Payroll-deduction IRA programs could provide a retirement savings opportunity at work for the millions of workers without an employer-sponsored retirement plan. In theory, all workers under age 70½ who lack an employer-sponsored retirement plan could be eligible to contribute to a traditional IRA through payroll deduction, should their employer offer such a program. Further, based on the contribution rules for traditional and Roth IRAs, many of these individuals would be eligible to claim a tax deduction for their traditional IRA contributions, and most low- and middle-income workers would be eligible to contribute to Roth IRAs. Experts told us payroll-deduction IRAs are the easiest way for small employers to offer their employees a retirement savings vehicle. Payroll-deduction IRAs have fewer requirements for employee communication than SIMPLE and SEP IRAs, and employers are not subject to ERISA fiduciary responsibilities as long as they meet the conditions in Labor’s regulation and guidance for managing these plans. Employer-sponsored IRAs may also help employees of small firms save for their retirement. For example, the higher contribution limits for SIMPLE and SEP IRAs offer greater savings benefits than payroll-deduction IRAs. In 2007, individuals under age 50 were able to contribute up to $10,500 to SIMPLE IRAs—more than twice the amount allowed in 2007 for payroll-deduction IRAs. Since SIMPLE IRAs require employers to either match the contributions of participating employees or make nonelective contributions to all employee accounts, employees are able to save significantly more per year in SIMPLE IRAs than they are in payroll-deduction IRAs. As we previously reported, several factors may discourage employers from establishing employer-sponsored SIMPLE and SEP IRAs. For example, small business groups told us that the costs of managing SIMPLE and SEP IRAs may be prohibitive for small employers.
Experts also pointed out that certain contribution requirements for these plans may, in some cases, limit employer sponsorship. For example, because SIMPLE IRAs require employers to make contributions to employee accounts, some small firms may be unable to commit to these IRAs. Small business groups and IRA providers told us that small business revenues are inconsistent and may fluctuate greatly from year to year, making required contributions difficult for some firms. In addition, employers offering SIMPLE IRAs must determine before the beginning of the calendar year whether they will match employee contributions or make nonelective contributions to all employees’ accounts. According to IRA providers, this requirement may discourage some small employers from offering these IRAs; if employers had the flexibility to make additional contributions to employee accounts at the end of the year, they might be encouraged to contribute more. With regard to SEP IRAs, two experts said small firms may be discouraged from offering these plans because of the requirement that employers set up a SEP IRA for all employees who performed service for the company in 3 of the past 5 years and had more than $500 in compensation for 2007. These experts stated that small firms are likely to hire either seasonal employees or interns who may earn more than $500, and these employers may have difficulty finding an IRA provider willing to open an IRA small enough for these temporary or low-earning participants. We also found that several barriers may discourage small employers even from offering payroll-deduction IRAs, including: (1) costs to employers for managing payroll deductions, (2) a perceived lack of flexibility to promote payroll-deduction IRAs to employees, (3) a lack of incentives for employers, and (4) a lack of awareness about how these IRAs work. Costs to employers.
Additional administrative costs associated with setting up and managing payroll-deduction IRAs may be a barrier for small employers, particularly those without electronic payroll processing. Small business groups told us that costs are influenced by the number of employees participating in the program and whether an employer has a payroll processing system in place for automatic deductions and direct deposits to employee accounts. Several experts told us many small employers lack electronic, or automatic, payroll systems, and these employers would face higher management costs for offering payroll-deduction IRAs. According to Labor, costs to employers are also significantly influenced by the number of IRA providers to which an employer must remit contributions on behalf of employees. Although experts reported that payroll-deduction IRAs impose costs on employers, we found varied opinions on the significance of those costs. Experts advocating for expanded payroll-deduction IRAs reported that most employers would incur little or no cost, since they already make payroll deductions for Social Security and Medicare, as well as federal, state, and local taxes. According to these experts, payroll-deduction IRAs function like existing payroll tax withholdings, and adding another deduction would not be a substantial burden. However, other experts and one report we reviewed indicated that employer costs may be significant, particularly for employers without electronic payrolls. The report did not estimate actual costs to employers on a per account basis. In our review, we were unable to identify reliable government data on actual costs to small employers. Flexibility to promote payroll-deduction IRAs. According to IRA providers, some employers are hesitant to offer a payroll-deduction IRA program because they believe Labor’s guidance limits their ability to effectively publicize the program to employees without risking becoming subject to ERISA requirements.
IRA providers told us employers need greater flexibility in Labor’s guidance on payroll-deduction IRAs if they are to encourage employees to save for retirement. However, Labor told us that it has received no input from IRA providers as to what that flexibility would be, and Labor officials note that Interpretive Bulletin 99-1 specifically provides for flexibility. Lack of savings incentives for small employers. Small business member organizations and IRA providers said contribution limits for payroll-deduction IRAs do not offer adequate savings incentives to justify the effort of offering these IRAs. Because the contribution limits for these IRAs are significantly lower than those that apply to SIMPLE and SEP IRAs, employers seeking to provide a retirement plan to their employees would be more likely to choose other options. Those options also allow business owners to contribute significantly more to their own retirement accounts than payroll-deduction IRAs. Lack of awareness. Representatives from small business groups said many small employers are unaware that payroll-deduction IRAs are available or that employer contributions are not required. However, Labor has produced educational materials describing the payroll-deduction and employer-sponsored IRA options available to employers and employees. Retirement and savings experts told us that several legislative proposals could encourage employers to offer, and employees to participate in, IRAs. Several bills have been introduced in Congress to expand worker access to payroll-deduction IRAs. Employer incentives to offer IRAs. Several retirement and savings experts said additional incentives should be in place to increase employer sponsorship of IRAs. For example, experts suggested that tax credits should be made available to defray small employers’ start-up costs for payroll-deduction IRAs, particularly for those without electronic or automatic payroll systems.
These credits should be lower than the credits available to employers for starting SIMPLE, SEP, and 401(k) plans to avoid competition with those plans, these experts said. IRA providers and small business groups said increasing contribution limits for SIMPLE IRAs to levels closer to those for 401(k) plans would encourage more employers to offer these plans. Other experts said doing so could provide incentives to employers already offering 401(k) plans to switch to SIMPLE IRAs, which have fewer reporting requirements. Employee incentives to participate in IRAs. Experts offered several proposals to encourage workers to participate in IRAs, including: (1) expanding existing tax credits for moderate- and low-income workers, (2) offering automatic enrollment in payroll-deduction IRAs, and (3) increasing public awareness about the importance of saving for retirement and how to do so. Several experts said expanding the scope of the Retirement Savings Contribution Credit, commonly known as the “saver’s credit,” could encourage IRA participation among workers who are not covered by an employer-sponsored retirement plan. They said expanding the saver’s credit to include more middle-income earners and making the credit refundable—available to tax filers even if they do not owe income tax—could encourage more moderate- and low-income individuals to participate in IRAs. However, an expanded and refundable tax credit would have revenue implications for the federal government. Other experts told us that automatically enrolling workers into payroll-deduction and SIMPLE IRAs could increase employee participation; however, small business groups and IRA providers said that mandatory automatic enrollment could be burdensome to small employers. In addition, given the lack of available income for some, several experts told us that low-income workers may opt out of automatic enrollment programs or be more inclined to make early withdrawals.
Experts also said increasing public awareness of the importance of saving for retirement and educating individuals about how to do so could increase IRA participation. Earlier this month, we reported that changes at IRS and Labor could encourage employers to offer IRAs and improve IRA information and oversight. We found that regulators lack information about employer-sponsored and payroll-deduction IRAs that could help determine whether these vehicles help workers without employer-sponsored pension plans build retirement savings. For example, IRS collects information on employer-sponsored IRAs but does not share the information collected with Labor, which has oversight responsibility for employer-sponsored IRAs. We also found that certain oversight vulnerabilities need to be addressed. Currently, Labor has no process in place to monitor employer-sponsored IRAs and has no jurisdiction over payroll-deduction IRAs. As a result of our findings, we made recommendations to Labor and IRS to improve IRA information and oversight and suggested that Congress may wish to consider whether payroll-deduction IRAs should have some direct oversight. Because employer-sponsored IRAs have few employer reporting requirements and payroll-deduction IRAs have none, regulators lack information on these IRAs. Under Title I of ERISA, there is no reporting requirement for SIMPLE IRAs, and there is an alternative method available for reporting employer-sponsored SEP IRAs. Employers who offer payroll-deduction IRAs have no reporting requirements, and consequently, there is no reporting mechanism that captures how many employers offer payroll-deduction IRAs. Although IRS receives information reports for all traditional and Roth IRAs, those data do not show how many were for employees using payroll deductions.
In our discussions with Labor and IRS officials, they explained that the limited reporting requirements for employer-sponsored IRAs were put in place to try to encourage small employers to offer their employees retirement plan coverage by reducing employers’ administrative and financial burdens. Although the reporting requirements for employer-sponsored IRAs are limited, IRS does collect some information on these IRAs through several “information” forms provided by financial institutions and employers. These forms provide information on salary-reduction contributions to employer-sponsored IRAs, as well as information on IRA contributions, fair market value, and distributions. For example, information on retirement plans is reported annually by employers and others to IRS on its Form W-2. The Form W-2 contains details on the type of plan offered by the employer, including employer-sponsored IRAs, and the amounts deducted from wages for contributions to these plans. According to agency officials, IRS cannot share the information it receives on employer-sponsored IRAs with Labor because it is confidential tax information. Labor also does not receive relevant information from employers, such as annual financial reports, as it does from private pension plan sponsors. For example, pension plan sponsors must file Form 5500 reports with Labor on an annual basis, which provide Labor with valuable information about the financial health and operation of private pension plans. Although Labor’s Bureau of Labor Statistics surveys employee benefit plans in private establishments through its National Compensation Survey, gathering information on access, participation, and take-up rates for DB and DC plans, the survey does not collect information on the number of employers sponsoring employer-sponsored IRAs. Consequently, Labor does not receive important information about employers who have established employer-sponsored IRAs, over which it has oversight responsibilities.
Ensuring that regulators obtain information about employer-sponsored and payroll-deduction IRAs is one way to help them and others determine the status of these IRAs and whether those individuals who lack employer-sponsored pension plans are able to build retirement savings through employer-sponsored and payroll-deduction IRAs. However, key information on IRAs is currently not reported, such as information that identifies employers offering payroll-deduction IRAs and the distribution by employer of the number of employees that contribute to payroll-deduction IRAs. Experts that we interviewed said that, without such information, they are unable to determine how many employers and employees participate in payroll-deduction IRAs and the extent to which these IRAs have contributed to the retirement savings of participants. In addition, the limited reporting requirements prevent information from being obtained about the universe of employers that offer employer-sponsored and payroll-deduction IRAs, which affects Labor's ability to monitor employer-sponsored IRAs. This information also can be useful when determining policy options to increase IRA participation among uncovered workers because it provides a strong foundation to assess the extent to which these IRAs are being utilized. Although IRS does publish some of the information it receives on IRAs through its Statistics of Income program, IRS does not produce IRA reports on a consistent basis. IRS officials told us that they are currently facing several challenges that affect their ability to publish IRA information more regularly. First, IRS relies, in part, on information returns to collect data on IRAs; such returns are not due until the year after the tax return is filed. IRS officials said that these returns have numerous errors, making it difficult and time-consuming for IRS to edit them for statistical analysis. 
They also said that the IRA rules, and changes to those rules, are difficult for some taxpayers, employers, and trustees to understand, which contributes to filing errors. Also, in the past, one particular IRS employee, who has recently retired, took the lead in developing a statistical analysis on IRAs. Since IRS does not have a process in place to train another employee to take over this role, a knowledge gap was created that IRS is trying to fill. IRS officials told us that they recognize this problem and are in the early stages of determining ways to correct it. In addition, IRS officials told us they had problems with the IRA data for tax year 2003, which prevented them from issuing a report for that year. The result has been that IRS has published IRA data for tax years 2000, 2001, 2002, and 2004, but none for tax year 2003. Labor officials and retirement and savings experts told us that without the consistent reporting of IRA information by IRS, they use studies by financial institutions and industry associations for research purposes, which include assessing the current state of IRAs and future trends. These experts said that although these studies are helpful, some may double-count individuals because one person may have more than one IRA at different financial institutions. They also said that more consistent reporting of IRA information could help them ensure that their analyses reflect current and accurate information about retirement assets, such as the fair market value of IRAs. Since IRS is the only agency that has data on all IRA participants, consistent reporting of these data could give policymakers and others a more comprehensive view of the IRA landscape. 
Given the limited reporting requirements for employer-sponsored IRAs and the absence of requirements for payroll-deduction IRAs, as well as Labor's role in overseeing employer-sponsored IRAs, a minimum level of oversight is important to ensure that employers are acting in accordance with the law and within Labor's guidance on payroll-deduction IRAs. Yet, Labor officials said that they are unable to monitor (1) whether all employers are in compliance with the prohibited transaction rules and fiduciary standards, such as by making timely and complete employer-sponsored IRA contributions or by not engaging in self-dealing; and (2) whether all employers who offer a payroll-deduction IRA are meeting the conditions of Labor's guidance. Employer-sponsored IRAs. Labor officials said that they do not have a process for actively seeking out and determining whether employer-sponsored IRAs are engaging in prohibited transactions or not abiding by their fiduciary responsibilities, such as by having delinquent or unremitted employer-sponsored IRA contributions. Instead, as in the case of Labor's oversight of pension plans, Labor primarily relies on participant complaints as sources of investigative leads to detect employers that are not making the required contributions to their employer-sponsored IRA. Payroll-deduction IRAs. Payroll-deduction IRAs are not under Labor's jurisdiction; however, Labor does provide guidance regarding the circumstances under which an employer may provide a payroll-deduction IRA program without being subject to the Title I requirements of ERISA. As long as employers meet the conditions in Labor's regulation and guidance, employers are not subject to the fiduciary requirements in ERISA Title I that apply to employer-sponsored retirement plans, such as 401(k) plans. IRS also does not have direct oversight over payroll-deduction IRA programs. 
IRS oversees the rules associated with the traditional and Roth IRAs that payroll-deduction programs may fund through employee contributions. However, IRS does not have oversight over employer management of these programs. Labor officials told us that they are not aware of employers improperly relying on the safe harbor regarding payroll-deduction IRAs. However, without a process to monitor payroll-deduction IRAs, Labor cannot be certain of the extent or nature of certain employer activities that may fall outside of the guidance provided by Labor. In order to improve oversight and information available on IRAs, we recently made several recommendations to Congress, Labor, and IRS, which are summarized in table 3. We reported that neither IRS nor Labor has direct oversight of payroll-deduction IRAs and that Congress may wish to consider whether payroll-deduction IRAs should have some direct oversight. Although Labor provides guidance regarding the circumstances under which employers may offer payroll-deduction programs without being subject to the Title I requirements of ERISA, Labor does not have jurisdiction to monitor whether employers are managing such programs within the bounds of Labor's safe harbor. Similarly, IRS has responsibility over tax rules for establishing and maintaining traditional and Roth IRAs that may be funded through employee contributions from payroll-deduction programs; however, IRS also does not have authority to monitor employers offering these programs. We have reported that without direct oversight of payroll-deduction IRAs, employees may lack confidence that payroll-deduction IRAs will provide them with adequate protections to participate in such programs, which is particularly important given the increasing role that IRAs have in retirement savings. As such, we have suggested that Congress consider whether payroll-deduction IRAs should have some direct oversight. 
We have also reported that it is important for Labor to have an accurate accounting of the costs to employers for managing payroll-deduction IRAs. In our review, we were unable to determine the actual costs to employers for managing a payroll-deduction IRA program. Some experts reported that such costs were significant, while others reported that they were minimal. Further, under Labor’s guidance on payroll-deduction IRAs, employers may receive reasonable compensation for the cost of operating payroll-deduction IRA programs as long as such compensation does not represent a profit to employers. However, because the information on costs of managing such programs is lacking, Labor may be unable to readily determine if employers are receiving excessive compensation and if such programs fall outside the safe harbor and may be considered to have become ERISA Title I programs. Furthermore, without accurate cost estimates and a determination of what constitutes “reasonable compensation” to employers, employers may be reluctant to seek compensation from IRA service providers to defray the costs of operating a payroll-deduction IRA program. Currently, IRAs play a major role in preserving retirement assets but a very small role in creating them. Although studies show that individuals find it difficult to save for retirement on their own, millions of U.S. workers have no retirement savings plan through their employer. Employer-sponsored and payroll-deduction IRAs afford an easier way for workers, particularly those who work for small employers, to save for retirement. They also offer employers less burdensome reporting and legal responsibilities than defined benefit pension plans and defined contribution plans, such as 401(k) plans. Yet, barriers exist, such as administrative costs, that may discourage employers from offering payroll-deduction IRAs. 
As federal agencies begin to determine the true cost of establishing payroll-deduction IRAs, employers will have a better understanding of the costs and will be in a better position to evaluate whether they will be able to offer payroll-deduction IRAs to their employees. Encouraging employers to offer IRAs to their employees can be much more productive if Congress and regulators ensure that there is adequate information on employer-sponsored IRAs and payroll-deduction IRAs. Although the limited reporting requirements for employer-sponsored IRAs and the absence of reporting requirements for payroll-deduction IRAs were meant to encourage small employers to offer retirement savings vehicles to employees, there is also a need for agencies that are responsible for overseeing retirement savings vehicles to have the information necessary to do so. Providing complete and consistent data on IRAs would help ensure that regulators have the information they need to make informed decisions about how to increase coverage and facilitate retirement savings. In addition, ensuring that Labor has a process in place to monitor employer-sponsored IRAs will help ensure that there is a structure in place to help protect individuals' retirement savings if they choose employer-sponsored IRAs. If current oversight vulnerabilities are not addressed, future problems could emerge as more employers and workers participate in employer-sponsored IRAs. Steps must also be taken to improve oversight of payroll-deduction IRAs and determine whether direct oversight is needed. Without direct oversight, employees may lack confidence that payroll-deduction IRAs will provide them with adequate protections to participate in these programs, which is particularly important given the current focus in Congress on expanding payroll-deduction IRAs. However, any direct oversight of payroll-deduction IRAs should be done in a way that does not pose an undue burden on employers or their employees. 
We are continuing our work on IRAs, and are beginning to examine the fees that are charged IRA participants. We are pleased that the Committee on Ways and Means and this subcommittee are interested in retirement savings, particularly IRAs, and look forward to continuing our work with you. Mr. Chairman and Members of the subcommittee, this completes my prepared statement and I would be happy to respond to any questions the subcommittee may have at this time. For further information regarding this testimony, please contact Barbara D. Bovbjerg, Director, Education, Workforce, and Income Security Issues at (202) 512-7215 or bovbjergb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Tamara Cross (Assistant Director), Matt Barranca, Susan Pachikara, Raun Lazier, Joseph Applebaum, Susan Aschoff, Doreen Feldman, Edward Nannenhorn, MaryLynn Sergent, Roger Thomas, Walter Vance, and Jennifer Wong. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Congress created individual retirement accounts (IRAs) with two goals: (1) to provide a retirement savings vehicle for workers without employer-sponsored retirement plans, and (2) to preserve individuals' savings in employer-sponsored retirement plans when they change jobs or retire. Questions remain about IRAs' effectiveness as a vehicle to facilitate new, or additional, retirement savings. GAO was asked to report on (1) the role of IRAs in retirement savings, (2) the prevalence of employer-sponsored and payroll-deduction IRAs and barriers discouraging employers from offering these IRAs, and (3) changes that are needed to improve IRA information and oversight. GAO reviewed published reports from government and financial industry sources and interviewed retirement and savings experts, small business representatives, IRA providers, and federal agency officials. Although Congress created IRAs to allow individuals to build and preserve their retirement savings, IRAs are primarily used to preserve savings through rollovers rather than build savings through contributions. Over 80 percent of assets that flow into IRAs come from assets rolled over, or transferred, from other accounts and not from direct contributions. Assets in IRAs now exceed assets in the most common employer-sponsored retirement plans: defined contribution plans, including 401(k) plans, and defined benefit, or pension plans. Payroll-deduction IRA programs, which allow employees to contribute to IRAs through deductions from their paychecks, and employer-sponsored IRAs, in which an employer establishes and contributes to IRAs for employees, were established to provide more options for retirement savings in the workplace. Experts GAO interviewed said that several factors may discourage employers from offering these IRAs to employees, including administrative costs and concerns about employer fiduciary responsibilities. 
Information is lacking on how many employers offer employer-sponsored and payroll-deduction IRAs and the actual costs to employers for administering payroll-deduction IRAs. Earlier this month, GAO reported on the role that federal agencies can have in helping employers provide IRAs to employees and in improving oversight of these savings vehicles. GAO made several recommendations to the Department of Labor (Labor) and the Internal Revenue Service to provide better information and oversight, but in the course of the review, GAO found that Labor does not have jurisdiction over payroll-deduction IRAs. Consequently, GAO also suggested that Congress may wish to consider whether payroll-deduction IRAs should have some direct oversight. A clear oversight structure could be critical if payroll-deduction IRAs become a more important means to provide a retirement savings vehicle for workers who lack an employer-sponsored retirement plan.
The Army has been faced with a significant challenge to meet the facility needs associated with several recent initiatives, such as the transformation of the Army’s force structure, the permanent relocation of thousands of overseas military personnel back to the United States, the implementation of Base Realignment and Closure actions, and the planned increase in the Army’s active-duty end strength. As shown in figure 1, the Army estimated that taken together these initiatives resulted in a threefold increase in the Army’s military construction program with appropriated funds increasing from about $3.4 billion in fiscal year 2005 to a peak of about $10.7 billion in fiscal year 2009 before beginning to decline in fiscal year 2010. To meet the challenges associated with the large increase in its military construction program and ensure that required new facilities would be completed in time to meet planned movements of organizations and personnel, the Army concluded that it could not continue to rely on its traditional military facility acquisition and construction practices. The Army’s solution was the adoption of a new strategy in 2006 that the Army termed military construction transformation. The strategy included numerous changes to the Army’s traditional practices that were designed to reduce facility acquisition costs and construction timelines. Included among the changes were the following: The development of clear requirements that need to be met in 43 different types of Army facilities and the creation of standard designs for 24 common facility types, such as headquarters buildings, company operations and tactical equipment maintenance facilities, barracks, dining facilities, and child care centers. A transition from “design-bid-build” project delivery, where a project’s design and construction are normally awarded via separate contracts, to “design-build” project delivery, where a project’s design and construction are awarded to a single contractor. 
By using one contractor and overlapping the design and construction phases, the design-build approach attempts to reduce project risk and construction timelines. The development of a standard solicitation approach for most common-type facilities that used performance-based criteria focused on what the Army needed rather than on detailed, prescriptive criteria that focused on how the Army’s requirements should be met. Under the approach, the Army revealed to potential bidders the available funding for the project and tasked project bidders to provide an innovative proposal that meets the performance-based criteria while maximizing quality, sustainability, and energy conservation. Army officials stated that its new standard solicitation approach encouraged potential bidders to develop design solutions that considered the use of all types of construction materials and methods allowed by DOD building guidance. This included the use of wood materials and modular building methods in addition to the use of steel, concrete, and masonry materials and on-site building methods traditionally used by the Army, the Navy, and the Air Force for permanent facilities, such as administrative buildings and barracks. As a result, under its military construction transformation strategy, the Army expanded the use of wood materials and modular building methods for some permanent facilities. Appendix II contains further details on the various categories of construction materials and methods allowed by DOD guidance. Because the Army believed that the changes it made to its facility acquisition and building practices under its transformation strategy would result in lower construction costs and shorter building timelines, the Army established goals to reduce its military construction costs by 15 percent and facility construction timelines by 30 percent beginning in fiscal year 2007. 
The Army planned to implement the cost reduction goal by having project planners reduce the estimated cost of planned facilities by 15 percent, requesting funding from the Congress for the reduced amount, and then attempting to award and complete the project within the approved funding amount. Thus, the goal was not directly related to actual facility costs but rather to estimated facility costs. While continuing to apply the strategy to its military construction program, the Army discontinued these numerical goals in fiscal year 2010, stating that most cost and timeline reduction benefits from its strategy would have been obtained by the end of fiscal year 2009. As required by Section 2859 of Title 10, DOD has developed and implemented antiterrorism construction standards designed to reduce facility vulnerability to terrorist attack and improve the security of facility occupants. The standards include 22 mandatory standards, such as requiring open areas around new facilities to keep explosives at a distance from the facilities, and 17 recommended but optional measures, such as avoiding exterior hallway configurations for inhabited facilities. Appendix III contains further details on the standards and measures. For decades, the federal government has attempted to improve energy efficiency and energy and water conservation at federal facilities. Over the past few years, several laws, executive orders, and other agreements added new energy efficiency and energy and water conservation requirements for federal facilities. In particular, in January 2006, DOD joined 16 other federal agencies in signing a memorandum of understanding that committed the agency to leadership in designing, constructing, and operating high-performance and sustainable buildings. 
The main goals of sustainable design are to avoid resource depletion of energy, water, and raw materials; prevent environmental degradation caused by facilities and infrastructure; and create facilities that are livable, comfortable, safe, and productive. To help measure the sustainability of new military buildings, DOD uses the U.S. Green Building Council’s Leadership in Energy and Environmental Design Green Building Rating System. The system defines sustainable features for buildings and includes a set of performance standards that can be used to certify the design and construction of buildings. The standards are categorized under five major topics—sustainable sites, water efficiency, energy and atmosphere, materials and resources, and indoor environmental quality. By meeting the standards during facility design and construction, builders can earn credits and become certified in accordance with an established four-level scale—certified, silver, gold, and platinum. For fiscal year 2009, DOD set a goal that at least 70 percent of military construction projects would be silver-level certifiable, which is the second level on the four-level scale with platinum being the highest rating. Appendix IV contains additional details on DOD’s sustainable design goals. The Office of the Deputy Under Secretary of Defense for Installations and Environment has responsibility for DOD’s installations and facilities. The office is responsible for establishing policy and guidance for DOD’s military construction program and monitoring the execution of the services’ military construction projects. The United States Army Corps of Engineers and the Naval Facilities Engineering Command have primary responsibility for planning and executing military construction projects for the Army and the Navy, respectively. 
Air Force officials stated that the Air Force Center for Engineering and the Environment has primary responsibility for planning and overseeing the construction of Air Force military construction projects, although the Army Corps of Engineers or the Naval Facilities Engineering Command normally executes the individual projects for the Air Force and DOD guidance provides these organizations with a role in design and construction. Since 1997, we have identified management of DOD support infrastructure as a high-risk area because infrastructure costs have affected the department's ability to devote funds to other more critical programs and needs. In a January 2009 update to our high-risk series, we noted that although DOD has made progress in managing its support infrastructure in recent years, a number of challenges remain in managing its portfolio of facilities and in reducing unneeded infrastructure while providing facilities needed to support several simultaneous force structure initiatives. Further, we noted that because of these issues, DOD's management of support infrastructure remains a high-risk area. We have issued several reports over the past few years that highlighted aspects of DOD's military construction program and challenges in managing the program. For example, in a 2003 report, we found that opportunities existed to reduce the construction costs of government-owned barracks through greater use of residential construction practices, which included the use of wood materials. However, we also found that questions remained concerning the durability of wood-frame barracks and the ability of wood-frame barracks to meet all antiterrorism force protection requirements. We recommended that engineering studies be undertaken to resolve these questions. DOD concurred with our recommendation and subsequently the Army determined that wood-frame barracks could be built in a manner that met all antiterrorism construction standards. 
However, DOD did not undertake studies on the durability of wood-frame barracks. In a 2004 report, we found that while DOD had taken a number of steps to enhance the management of the military construction program, opportunities existed for further improvements. Among other things, we recommended that DOD complete management tools for standardizing military construction practices and costs. DOD agreed and subsequently took steps to provide a more consistent approach to managing facilities and planning construction projects and costs. Further, in a September 2007 report, we discussed the complex implementation challenges faced by the Army to meet the infrastructure needs associated with the growth of personnel assigned to many installations as a result of base realignment and closure, overseas force rebasing, and force modularity actions. Also, in October 2009, we issued a report that discussed agencies' progress toward implementing sustainable design and high-performance federal building requirements found in the Energy Independence and Security Act of 2007. This report also addressed the key challenges agencies may encounter when implementing federal building requirements for reducing energy use and managing storm water runoff. Further, in a January 2009 testimony before the House of Representatives' Committee on Transportation and Infrastructure, we noted that investment in infrastructure could reduce energy and operations and maintenance costs and address important energy and water conservation measures as well as other measures outlined within the Energy Independence and Security Act of 2007. A list of these reports can be found at the end of this report in the Related GAO Products section. 
Because the Army did not measure the achievement of its goals to reduce military construction costs and timelines, the Army did not know to what extent the goals were met or whether its military construction transformation strategy resulted in actual reductions in facility costs. Our review of selected project information showed that the Army did reduce the estimated cost of some facility construction projects and shortened building timelines during fiscal years 2007 through 2009, but it did not meet its overall stated goals. We also found that the Army did not consistently apply the cost reduction goal to all facility projects during fiscal years 2007 through 2009. Although the Army discontinued these numerical goals in 2010, Army officials believed its efforts to transform its military construction acquisition and building practices were successful in dampening the escalation of Army facilities' costs and would continue to help ensure cost-effective and timely facilities in future years. When the Army set goals to reduce construction costs and building timelines, it did not establish a framework for monitoring the achievement of these goals. Effective management practices call not only for setting program goals but also for monitoring goal achievement so that results can be measured and adjustments can be made to programs, if needed, to better achieve the goals. According to internal control standards for federal agencies, activities need to be established to monitor performance measures and indicators and managers need to compare actual performance to planned or expected results so that analyses of the relationships can be made and appropriate actions taken. During our review, senior Army headquarters officials acknowledged that a framework to measure goal achievement should have been established when the cost and timeline goals were instituted. 
The officials also stated that the only explanation for not monitoring the goals was that they were so involved in implementing the many changes adopted under the Army’s military construction transformation strategy that no one took the time to monitor and track the results being achieved from the changes. During our review, we found that the Army did not subject all Army facility projects to its 15 percent cost reduction goal. According to Army officials, the Army planned to implement the cost goal by having project planners reduce the estimated cost of planned facilities by 15 percent, requesting funding from the Congress for the reduced amount, and then attempting to award and complete the project within the approved funding amount. Thus, the cost goal was not directly related to actual facility costs but rather to estimated facility costs. However, all facility projects were not subjected to the reduction in estimated costs, as the following examples illustrate: For fiscal year 2007, Army officials stated that the 15 percent cost goal only applied to military construction facility projects that were budgeted for under the base realignment and closure program. Reductions were not required in the estimated costs of facility projects budgeted under the Army’s regular military construction program. According to Army officials, reduced funding was not requested for the regular military construction program projects because the project estimates for the regular program were already complete before the reduction goal was announced, and the Army did not have sufficient time to recalculate the project estimates at the reduced amount before the budget request had to be submitted. For fiscal year 2008, Army officials stated that all Army facility cost estimates were subject to the 15 percent cost reduction goal, regardless of the funding source or type of facility. 
However, while all fiscal year 2008 projects were subject to the goal, Army officials stated that the 15 percent cost reduction in estimated costs was mandatory only for brigade, battalion, and company headquarters buildings, barracks, and dining facilities. For other types of facilities, if project planners believed that a 15 percent cost reduction could not be achieved when construction bids were ultimately solicited, the planners could submit a justification stating the reasons that a reduction was not made to the facility’s estimated cost. For fiscal year 2009, Army officials stated that the 15 percent reduction goal was applied only to five specific types of facilities—brigade, battalion, and company headquarters buildings, barracks, and dining facilities. Cost estimates for all other types of facilities were not subjected to the goal. According to Army officials, general cost increases in the construction industry indicated that a 15 percent cost reduction could not be achieved for most fiscal year 2009 facilities. However, because of the changes incorporated under the Army’s military construction transformation strategy, the officials believed that reductions could be achieved for the five specified facility types. Because the Army had not monitored and thus did not know to what extent it had met its cost goal, we performed an analysis and found that, while the Army reduced the estimated cost and met its goal on some facility projects, it did not meet the goal on other projects. Specifically, we reviewed the construction cost estimates for a non-probability sample of 75 facility projects that were in the categories subject to the goal to determine whether a 15 percent reduction was taken in the estimated cost of the facilities, as reported in each facility’s project justification. 
The 75 facilities included 15 fiscal year 2007 facilities funded under the base realignment and closure program, 30 projects from fiscal year 2008, and 30 projects from fiscal year 2009 for the five facility types subject to the goal. As shown in table 1, we found that the Army met its goal in 31 of the facilities (41 percent) and did not meet its goal in 44 of the facilities (59 percent). However, some reduction, but less than 15 percent, was made in the estimated cost of 24 of the 44 facilities that did not meet the goal. Although the Army had information on the actual costs of completed military construction projects, the Army did not routinely document the actual costs of the individual facilities included in the projects. For this reason, we could not determine whether any of these facilities resulted in actual savings compared to cost estimates based on DOD cost estimating guidance. The following examples illustrate the achievement of the Army’s cost goal in selected projects we reviewed: A fiscal year 2008 Army military construction project at Schofield Barracks, Hawaii, included a barracks. According to DOD military construction cost estimating guidance for that year, the project planners should have estimated $24.7 million for the cost of this barracks. However, according to the project’s justification, the barracks’ estimated cost was $20.9 million, which was the amount requested for funding. Because the barracks’ estimated cost was about $3.8 million, or about 15 percent, less than the amount based on DOD guidance, the Army achieved its goal in this case. A fiscal year 2009 Army military construction project at Fort Lee, Virginia, included a dining facility. According to DOD military construction cost estimating guidance for that year, the project planners should have estimated $5.8 million for the cost of this facility. 
However, according to the project’s justification, the dining facility’s estimated cost was $5.4 million, which was the amount requested for funding. In this case, the facility’s estimated cost was $400,000 (7 percent) less than the amount based on DOD guidance. Thus, the Army achieved some reduction in the estimated cost of this facility but did not meet the 15 percent goal. A fiscal year 2009 Army military construction project at Fort Stewart, Georgia, included a barracks. According to DOD military construction cost estimating guidance for that year, the project planners should have estimated $82.0 million for the cost of this facility. However, according to the project’s justification, the barracks’ estimated cost was $86.4 million, which was the amount requested for funding. In this case, the barracks’ estimated cost was about $4.4 million (5 percent) greater than the amount based on DOD guidance. Thus, the Army did not meet the 15 percent goal and actually requested more funding than it would have requested based on DOD guidance. Army officials stated that the cost goal was not met in some projects because the projects’ planners believed that a 15 percent cost reduction could not realistically be achieved when bids for the project were solicited because of local construction market conditions. In addition, the officials stated that, although the 15 percent goal might not have been achieved for all projects, they believed that the Army’s efforts to transform its military construction acquisition and building practices were successful in dampening the escalation of Army facility costs. Because the Army had not monitored and thus did not know to what extent it had met its 30 percent building timeline reduction goal, we performed an analysis to assess goal accomplishment and found that, while the Army shortened some building timelines, the overall goal was not achieved. 
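The goal determinations in the examples above reduce to a simple percentage comparison between the estimate based on DOD cost estimating guidance and the amount requested for funding. The sketch below is illustrative only: the dollar figures come from the examples discussed in this report, but the function itself is ours, not an Army or GAO tool.

```python
def cost_goal_result(guidance_estimate, requested, goal_pct=15.0):
    """Return the rounded percent reduction from the DOD-guidance estimate
    and whether the Army's 15 percent cost reduction goal was met."""
    reduction_pct = (guidance_estimate - requested) / guidance_estimate * 100
    return round(reduction_pct), reduction_pct >= goal_pct

# Figures (in millions of dollars) from the projects discussed above.
print(cost_goal_result(24.7, 20.9))  # Schofield Barracks barracks: (15, True)
print(cost_goal_result(5.8, 5.4))    # Fort Lee dining facility: (7, False)
print(cost_goal_result(82.0, 86.4))  # Fort Stewart barracks: (-5, False)
```

A negative result, as in the Fort Stewart case, indicates that the requested amount exceeded the guidance-based estimate.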
Specifically, our analysis compared the actual average lapsed time between key building milestones for all completed projects approved during fiscal years 2007 through 2009 with the average lapsed times for the same milestones for completed projects approved in fiscal years 2004 through 2006—the 3 years before the implementation of the Army’s military construction transformation strategy. To illustrate, one key Army building timeline measure is the lapsed time between the date that a project’s design begins and the date that the project is ready for occupancy. As shown in table 2, we found that the Army’s average lapsed time for this timeline measure was reduced by about 11 percent during fiscal years 2007 through 2009—an improvement, but less than the Army’s 30 percent goal. Another key Army building timeline measure is the lapsed time between the date that the Army notifies the building contractor to begin construction and the date that the project is ready for occupancy. As shown in table 3, we found that the Army’s average lapsed time for this timeline measure was reduced by about 5 percent during fiscal years 2007 through 2009—also an improvement, but also less than the Army’s 30 percent goal. Army officials stated that they were pleased that average building timelines had been reduced even if the 30 percent goal was not achieved. During our review, Army officials stated that the Army decided to discontinue its construction cost and timeline reduction goals beginning in fiscal year 2010. The officials stated that, although the Army did not know to what extent cost and timeline reductions had been achieved, they believed that most of the cost and timeline reduction benefits from the Army’s military construction transformation strategy had been obtained by the end of fiscal year 2009. 
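The timeline analysis described above is a percentage change between two period averages. A minimal sketch, using hypothetical lapsed-time values in days because the report presents only the resulting percentages (about 11 percent and 5 percent):

```python
def avg_timeline_reduction_pct(baseline_days, current_days):
    """Percent reduction in average lapsed time between a baseline period
    (e.g., FY 2004-2006) and a comparison period (e.g., FY 2007-2009)."""
    baseline_avg = sum(baseline_days) / len(baseline_days)
    current_avg = sum(current_days) / len(current_days)
    return (baseline_avg - current_avg) / baseline_avg * 100

# Hypothetical per-project lapsed times (design start to occupancy), in days.
baseline = [900, 950, 1000]   # illustrative FY 2004-2006 projects
current = [840, 845, 850]     # illustrative FY 2007-2009 projects
print(round(avg_timeline_reduction_pct(baseline, current), 1))  # 11.1
```

Against a 30 percent goal, a result in this range would represent an improvement that nonetheless falls well short of the target, as the report found.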
The officials also stated that, although the specific cost and timeline goals were discontinued, the numerous changes made to the Army’s facility acquisition and construction processes under the military construction transformation strategy would help ensure the continued delivery of cost-effective and timely facilities in the future. DOD guidance allows the use of various building materials and methods and the Army appears to have achieved some savings in initial construction costs by expanding the use of wood materials and modular construction methods for some permanent facilities. However, DOD has not determined whether the use of these materials and methods also will result in savings over the long term compared to the traditional use of steel, concrete, and masonry materials and on-site building methods. Over the past several years, DOD has taken several steps to bring uniformity among the military services in the criteria, standards, and codes used to design and construct military facilities. This has included the development of DOD’s unified facilities criteria and unified facilities guide specification system of guidance for the design, construction, sustainment, restoration, and modernization of all DOD facilities. For example, in 2007, DOD issued guidance—the Unified Facilities Criteria 1-200-01, “General Building Requirements”—which applies to the design and construction of all new and renovated facilities throughout DOD. The guidance states that the 2006 International Building Code, with some other modifications and exceptions, is the building code for DOD. Among other things, the International Building Code defines several allowable types of construction based, in part, on the materials used in the construction and the materials’ potential to be a fuel source in case of a fire. 
For example, type I and type II construction use materials such as steel, concrete, and masonry that, in accordance with applicable testing standards, are classified as noncombustible. Type V construction allows the use of various materials, including combustible materials, and typically includes facilities built with wood framing. Although the code allows the use of many construction materials, the military services have traditionally used types I and II construction consisting of steel, concrete, and masonry when building permanent common facilities, such as administrative buildings, barracks, and dining facilities. Appendix II contains further details on DOD’s building materials and methods, including descriptions of types III and IV construction. During our review, we identified little quantitative information that compared the relative merits and economic impacts from the use of wood materials and modular construction methods with steel, concrete, and masonry materials and on-site construction methods. The Army’s decision to expand its consideration and use of wood materials and modular construction for some permanent facilities was primarily based on the Army’s desire to reduce military construction costs and building timelines in view of the significant increase in the Army’s construction requirements beginning in fiscal year 2006. According to Army officials, the Army believed that the increased use of wood framing and modular construction would reduce initial construction costs and building timelines for new facilities, result in facilities that met the Army’s needs, and also result in lower facility life-cycle costs. However, the Army did not have substantial quantitative information or analyses to support its view on lower life-cycle costs. For example, according to Army officials, the Army had performed only two analyses that compared the life-cycle costs of permanent facilities built with alternative construction materials and building methods. 
One analysis compared the life-cycle cost of a barracks built with wood materials with the life-cycle costs of a similar barracks built with steel, concrete, and masonry. Although this analysis estimated that the barracks constructed with wood would have lower life-cycle costs, the analysis was not based on actual costs. Instead, the analysis used cost estimates which might or might not provide a reliable prediction of actual costs over the long term. In addition, our review of the analysis found other flaws and data errors, such as understating the square footage of one of the projects by 39 percent, which affected the outcome of the analysis and cast further doubt on the reliability of the analysis. The other Army analysis assessed life-cycle costs for several types of construction materials and methods. However, it also was not based on actual costs but rather on estimates obtained in planning documents. The Navy and the Air Force generally disagreed with the Army’s views on the benefits from expanded use of wood materials and modular building methods. Senior officials with the Naval Facilities Engineering Command and the Air Force Center for Engineering and the Environment stated that they believed that use of wood materials and modular methods instead of steel, concrete, and masonry would result in facilities with shorter service lives and higher, not lower, life-cycle costs. To illustrate, the officials noted that features sometimes used in wood-frame construction could result in higher maintenance costs. For example, a wood-frame building finished with a shingle roof might have higher maintenance costs over the long term compared to a building finished with a steel roof because the shingles would have to be replaced periodically over the life of the building. 
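One common way to frame such comparisons is a discounted life-cycle cost calculation that combines initial construction cost with recurring maintenance and periodic replacements, such as the shingle roof noted above. The sketch below is illustrative only: the discount rate, cost figures, and replacement cycle are our assumptions, not data from the Army, Navy, or Air Force analyses discussed in this report.

```python
def life_cycle_cost(initial, annual_maint, replacement_cost=0.0,
                    replacement_interval=None, years=40, rate=0.03):
    """Net present value of owning a facility: initial construction cost
    plus discounted annual maintenance and periodic component replacements."""
    lcc = initial
    for year in range(1, years + 1):
        discount = (1 + rate) ** year
        lcc += annual_maint / discount
        if replacement_interval and year % replacement_interval == 0:
            lcc += replacement_cost / discount
    return lcc

# Illustrative comparison (all figures hypothetical, in dollars):
# a wood-frame barracks with a shingle roof replaced every 20 years, versus
# a masonry barracks with a steel roof and no mid-life replacement.
wood = life_cycle_cost(20_900_000, 250_000, 400_000, 20)
masonry = life_cycle_cost(24_700_000, 260_000)
print(f"wood: ${wood:,.0f}  masonry: ${masonry:,.0f}")
```

As the report notes, the reliability of any such comparison depends on whether the inputs are actual costs or planning estimates; with estimated inputs, the result is only as good as the estimates themselves.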
Although their views differed from the Army’s, Navy and Air Force officials stated that they had little quantitative support for their views and had performed no analyses that compared the long-term costs of facilities built with wood materials versus steel, concrete, and masonry materials. During our visits to private organizations that represented the interests of wood, modular building, and concrete and masonry industries, we found various views and opinions on the long-term merits and economic benefits from the use of alternative construction materials and building methods. However, we did not find documented analyses comparing the actual life-cycle costs of facilities constructed with alternative materials and methods. To gain some insight into the economic merits of the Army’s increased use of wood materials and modular construction, we reviewed available information related to initial facility construction costs depending on the materials and methods used to construct new buildings. We found evidence that the use of wood-frame construction can result in lower initial building costs. For example, we found that the Army apparently had achieved construction cost savings by using wood-frame construction in several barracks projects that were initially designed to be built with steel, concrete, and masonry. To illustrate, according to Army officials, a fiscal year 2006 project at Fort Carson to construct a barracks and company operations facility was estimated to cost about $35 million based on actual contract bids and the use of steel, concrete, and masonry construction. After switching the barracks’ design to wood-frame construction and resoliciting the project, the officials stated that the project was subsequently awarded for about $24 million, a savings of about $11 million, or 31 percent in estimated costs (see fig. 2). 
Similarly, a fiscal year 2001 barracks project at Fort Meade, Maryland, called for the construction of eight three-story barracks buildings with a total of 576 private sleeping rooms. On the basis of the project’s initial design using steel, concrete, and masonry, the Army estimated that the project would cost about $48 million, which was more than the amount approved for the project. In an effort to reduce the cost, the project was redesigned to specify the use of wood materials and residential construction practices. Subsequently, the project was constructed at a cost of about $39 million, or about $9 million (19 percent) less than the original estimated cost (see fig. 3). Sources outside of DOD also have noted that the use of wood-frame construction can result in lower initial building costs. For example, an August 2009 building valuation guide published by the International Code Council reported that the use of residential building methods, including wood-frame construction, for several types of facilities resulted in a 19 percent to 25 percent construction cost savings compared to the use of commercial construction methods, including the use of steel, concrete, and masonry materials. Also, a 2005 study collected information from cities across the United States to develop a construction cost model to accurately evaluate the relative construction costs of a multifamily building constructed using five different construction materials. Information collected during the study showed that the use of wood-frame construction could result in an average 6 percent to 7 percent construction cost savings compared to the use of masonry construction. 
Although we found little quantitative information on the long-term economic merits from the use of alternative building materials and methods, we found some evidence suggesting that facilities built with wood-frame materials might have long-term costs equal to or lower than those of similar facilities built with steel, concrete, and masonry materials. For example, we reviewed the annual maintenance costs associated with two wood-frame barracks projects constructed in 2003 and 2006 at Fort Meade and Fort Detrick, Maryland, respectively. These facilities are the Army’s initial two modern, permanent barracks constructed with wood frame. During fiscal years 2007 and 2008, the annual maintenance costs of the wood-frame barracks on a square-foot basis were significantly less than the annual maintenance costs of other barracks at each installation constructed with steel, concrete, and masonry methods. However, the wood-frame barracks were newer by several years compared to the concrete and masonry barracks, which could account for the difference in maintenance costs. Still, local officials responsible for barracks maintenance at each installation stated that based on experience to date they believed that even in the long term the annual maintenance costs of the wood-frame barracks would be no greater than the annual maintenance costs of the installations’ concrete and masonry barracks. As another illustration, we visited two privatized housing projects for unmarried servicemembers where service officials stated that private developers were responsible for constructing, owning, operating, and maintaining the housing for 50 years in one case and 46 years in the other. During each visit, the developers stated that wood-frame construction was being used because the developers believed that, based on their internal long-term cost analyses, this type of construction would result in the most economical project over the long term. 
For example, the Navy partnered with a developer to build a pilot privatized housing project for unaccompanied personnel in the Norfolk, Virginia, area. The project includes the construction of 755 rooms in a six-story midrise building and 435 rooms in 87 separate housing units. The developer stated that the midrise building used noncombustible materials, such as concrete, and the 87 separate housing units used wood-frame materials. The developer stated that the type of construction used for each type of building was based on the most cost-effective type of construction, considering life-cycle costs, to provide the lowest total cost over a 50-year period. Further, the developer also stated that, because the exterior surfaces and interior finishes for both the midrise building and separate housing units were very similar, no difference in operation and maintenance costs was anticipated with regard to the different types of construction (see fig. 4). In the other project visited, the Army had partnered with a developer to build, own, and operate a privatized housing project for senior unmarried servicemembers at Fort Bragg, North Carolina, for 46 years. The project includes 13 apartment-style buildings with a mix of 312 one- and two-bedroom apartments. The developer stated that wood-frame construction was used in the project because, compared to the use of noncombustible materials and building methods, wood-frame construction resulted in lower initial construction costs and, based on the developer’s long-term analyses, was expected to also result in lower life-cycle costs (see fig. 5). Determining the relative merits and economic impacts of alternative building materials and methods over the long term requires the consideration of possible differences in facility service life and durability resulting from the use of different building materials and methods. 
Although we found no DOD studies or definitive analyses assessing possible service life and durability differences and any associated impact on life-cycle costs, we discussed opinions on the issue with service headquarters officials and local officials at five Army installations we visited. Army, Navy, and Air Force headquarters officials expressed the opinion that steel, concrete, and masonry facilities generally had longer service lives and were more durable than wood-frame facilities. However, we found that the services had different opinions on the importance of durability. For example, although Army officials agreed with the opinion of Navy and Air Force officials that the use of steel, concrete, and masonry generally resulted in more durable facilities, the Army’s opinion differed from the other services’ opinions on whether greater durability also meant that such facilities were more desirable. Army officials stated that because missions, requirements, and standards change over time, facilities constructed today will be outdated in 20 to 25 years and will require major renovation or possibly conversion to other uses to meet needs in the far outyears. Thus, Army officials stated that considering facility use beyond 25 years is not productive and facilities built with wood-frame materials and modular building methods will meet the Army’s needs even if they do not last as long as facilities constructed with steel, concrete, and masonry. Officials at the Army installations we visited had various opinions on the expected service life and durability of facilities constructed with wood materials and modular building methods. Officials at Fort Meade and Fort Detrick, for example, stated that they were satisfied with the durability of wood-frame barracks constructed on-site at their installations and would not seek to use steel, concrete, and masonry even if they had the opportunity to rebuild the facilities. 
With respect to wood modular construction, we found the following concerns expressed by officials at Fort Bliss and Fort Carson: Fort Bliss officials noted that because modular units were constructed off-site and then transported in some cases over 1,000 miles to the installation for assembly, the vibrations experienced during transportation might affect the units’ structures and result in durability issues. The modular industry, however, contends that modular units are constructed to withstand such transportation stresses. Fort Carson officials expressed concern that temperature changes would cause the expansion and contraction of the joints where modular units were joined, which might adversely affect durability in the long term. Fort Bliss and Fort Carson officials expressed concerns that settling of the different sections of modular facilities might show stress where they join together, resulting in additional maintenance requirements in the long term. Officials at Fort Bliss and Fort Carson also said that reconfiguring modular-built facilities for other uses, if needed in the future, might be more difficult compared to wood-frame facilities built on-site, and thus result in a shorter facility service life. Figure 6 shows a Fort Bliss barracks under construction using modular building methods. Fort Bliss officials added that, although they had some concerns about the durability of modular construction, the use of modular construction methods resulted in faster building timelines compared to steel, concrete, and masonry construction, which would help ensure the timely completion of facilities needed to accommodate the large number of soldiers reporting to the installation over the next few years. Although officials at some installations we visited expressed concerns over the durability of facilities built with modular building methods, other sources have reported information that supports the durability of modular facilities. 
For example, after Hurricane Andrew hit Florida in 1992, a team from the Federal Emergency Management Agency conducted a study of various building types and how well they weathered the storm. On the basis of its observations, the team concluded that, in general, both masonry buildings and wood-framed modular buildings performed relatively well. Although there are areas of conflict when designing facilities that meet both antiterrorism construction standards and sustainable design goals, military service officials stated that the conflicts are considered to be manageable and not a significant obstacle to the design and construction of new facilities. Service officials noted, however, that achieving higher levels of sustainability in future construction projects while still meeting the antiterrorism standards would further increase initial facility costs and create additional design challenges. DOD has recognized that areas of conflict exist between DOD’s antiterrorism building standards and sustainable design goals and has developed approaches to help deal with the conflicts. To illustrate, military service officials noted that the antiterrorism mandatory building standard to provide standoff distances around new facilities to keep potential explosives at a distance reduces development density and conflicts with a sustainable design goal to increase development density. Similarly, some officials stated that sustainable design goals related to greater use of windows to increase natural lighting conflict with the recommended antiterrorism building measure related to minimizing hazards from flying glass fragments from windows. To help deal with such conflicts, a facility planning tool was developed that identifies and addresses the potential conflicts from integrating required antiterrorism standards with sustainable design goals. The tool uses a color-coded matrix to identify the relationship between the antiterrorism standards and sustainable design goals. 
Conflicting or possibly conflicting relationships are coded red and yellow, respectively, and the tool provides additional information to aid project designers in dealing with these areas. The services do not consider the conflicts between antiterrorism building standards and sustainable goals to be a significant obstacle when designing and building new military facilities. Service officials stated that with use of the facility planning tool and a comprehensive design approach, project designers are able to develop successful building solutions that ensure both secure and high-performance facilities. In particular, officials in each military service stated that the services had set a goal that beginning in fiscal year 2009 all new major military construction buildings would be designed and constructed to be silver-level certifiable under the U.S. Green Building Council’s Leadership in Energy and Environmental Design Green Building Rating System. This 100 percent goal was higher than the DOD-wide goal for fiscal year 2009, which called for 70 percent of new buildings to be silver-level certifiable. Further, service officials stated that in some cases military buildings have been constructed that met the rating system’s next higher sustainable design level—the gold level—while still complying with all antiterrorism standards. However, service officials also noted that achieving higher levels of sustainability while still meeting all antiterrorism standards increases initial facility costs and creates additional design challenges. To obtain additional details on how the services were dealing with the conflicts between the standards and the goals, we followed up with the project planners responsible for 90 military construction projects from a non-probability sample of Army, Navy, and Air Force projects approved during fiscal years 2007 through 2009. 
According to the planners, 80 of the 90 projects (89 percent) required no special steps or workarounds to meet both antiterrorism standards and sustainable design goals. For the projects where special steps or workarounds were needed, most issues related to facility windows and the required building standoff distances. For example, the planners of a fiscal year 2007 child development center at Fort Lewis, Washington, reported that special steps or workarounds were needed to simultaneously meet antiterrorism standards and sustainable goals. According to the planners, both the child care program and sustainable design goals encouraged large window areas on the exterior of the building for daylighting and child-height window views on both the building’s exterior and interior. However, the antiterrorism standards and recommendations encourage reduced window sizes with specific window glazing techniques to minimize hazards from flying glass fragments and the use of reflective glazing to prevent views of a building’s interior. The planners stated that an acceptable design solution was developed, but the result significantly increased the cost of the facility’s windows. Although the project planners stated that 80 of the 90 projects in our sample required no special steps or workarounds to meet both antiterrorism standards and sustainable design goals, the planners also reported that in some cases meeting both the standards and goals resulted in additional land use, community decentralization, or installation development sprawl. Specifically, project planners reported that, primarily because of the required standoff distances around new facilities, 18 (20 percent) of the 90 projects we reviewed resulted in additional land use, community decentralization, or installation development sprawl. 
For example, planners of a fiscal year 2008 instruction building at Fort Huachuca, Arizona, reported that because of the antiterrorism standoff distance standard, the building site was approximately 50 percent larger than required if there were no standoff requirements. Similarly, project planners of a fiscal year 2009 unit maintenance facilities project at Fort Campbell, Kentucky, stated that complying with the antiterrorism standoff distance standard resulted in additional land use, including the construction of an additional parking lot situated across the street from the facilities. According to service officials, incorporating antiterrorism standards in new facilities typically adds about 1 to 5 percent to construction costs and incorporating sustainable design building features typically adds about 2 percent to construction costs. The officials noted, however, that each project is unique and the estimated cost to incorporate antiterrorism standards and sustainable design features can vary significantly among military construction projects. To obtain additional details on the costs of incorporating antiterrorism standards and sustainable design features in new facilities, we reviewed information contained in the project justifications for the 90 military construction projects included in our non-probability sample of Army, Navy, and Air Force projects approved during fiscal years 2007 through 2009. The review showed that the average estimated cost to incorporate antiterrorism standards in the projects was about 2.0 percent of a project’s total cost with the range varying from 0.3 percent to 6.6 percent. The review also showed that the average estimated cost of the sustainable design features was about 1.6 percent of a project’s total cost with the range varying from 0.7 percent to 2.6 percent. 
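The averages and ranges cited above can be derived directly from the per-project percentages recorded in the project justifications. A minimal sketch with hypothetical per-project values (only the reported average of about 2.0 percent and the 0.3 to 6.6 percent range for antiterrorism costs come from this review; the individual project values below are invented for illustration):

```python
def summarize_pct(costs_pct):
    """Average, minimum, and maximum of per-project cost-share percentages."""
    avg = sum(costs_pct) / len(costs_pct)
    return round(avg, 1), min(costs_pct), max(costs_pct)

# Hypothetical antiterrorism cost shares (percent of total project cost)
antiterrorism = [0.3, 0.8, 1.2, 1.5, 1.6, 6.6]
print(summarize_pct(antiterrorism))  # (2.0, 0.3, 6.6)
```

The same summary applied to per-project sustainable design cost shares would yield the second set of figures reported above.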
According to the project planners, the actual costs of incorporating antiterrorism standards and sustainable design features in new projects were not available because contractors normally do not separately identify these costs in their bids responding to solicitations for project construction. Although the Army appears to have achieved some savings in initial construction costs by expanding the use of wood materials for some permanent facilities, the military services had little quantitative information on whether the use of wood materials and modular building methods will also result in lower long-term costs compared to the traditional use of steel, concrete, and masonry materials and on-site building methods. Determining the relative merits and economic impacts from the use of alternative construction materials and methods is an important issue for the military services to resolve to help ensure that DOD’s military construction program meets requirements at the lowest cost over the long term. Unless the services perform additional study and analysis to determine the relative merits and long-term economic impacts from the use of alternative construction materials and methods, DOD will not know whether the use of wood materials and modular building methods will result in the most economical long-term building solution or whether DOD’s unified facilities criteria, or other military construction program guidance, needs to be changed so that new facilities are constructed with materials and methods that meet requirements at the lowest cost over the long term. 
To address unanswered questions about the merits and long-term costs from the use of alternative construction materials and methods for new common facilities, such as administrative buildings and barracks, we recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense (Installations and Environment) to commission a tri-service panel that would be responsible for determining and comparing the estimated life-cycle costs of facilities built with alternative construction materials and methods, including a mix of wood and steel, concrete, and masonry construction materials and on-site and modular construction methods. We also recommend that the Deputy Under Secretary of Defense (Installations and Environment) use the results from the tri-service panel’s determinations to revise DOD’s unified facilities criteria or other appropriate military construction guidance, as deemed appropriate, to ensure that new facilities are constructed with the materials and methods that meet requirements at the lowest cost over the long term. Officials from the Office of the Deputy Under Secretary of Defense (Installations and Environment) provided oral comments on a draft of this report. In the comments, DOD stated that it agreed with our recommendation to commission a tri-service panel that would be responsible for determining and comparing the estimated life-cycle costs of facilities built with alternative construction materials and methods. DOD stated that the department needed to better understand the life-cycle cost implications of different building materials and methods and to use this knowledge in evaluating and comparing total life-cycle cost alternatives. In view of the questions raised during the course of our review, DOD stated that it had already initiated a tri-service panel to develop a template that will objectively evaluate the relative life-cycle costs between competing construction proposals in the facilities acquisition process. 
When complete, the template is expected to allow prospective project designers to propose alternative construction materials and methods, among other design considerations, to achieve lower life-cycle costs or best overall value. DOD stated that this approach would recognize that the department cannot be solely responsible for determining the life-cycle cost implications of each possible alternative and needs to consider the best available industry knowledge, expertise, and innovation for any particular facility requirement. Nonetheless, DOD stated that it expects to monitor the performance of alternative materials and methods to better inform this process over time. We believe that DOD's actions, once implemented, will address the intent of the recommendation. DOD stated that it partially agreed with our recommendation that the department use the results of the tri-service panel's determinations to revise DOD's unified facilities criteria or other appropriate military construction guidance, as deemed appropriate, to ensure that new facilities are constructed with the materials and methods that meet requirements at the lowest cost over the long term. DOD stated that it agreed with the general concept that lessons learned should be incorporated into facilities criteria and specifications to the extent practical. However, DOD also stated that in some cases, such as to minimize adverse environmental impacts, facilities might be built with materials or methods that do not result in the lowest cost but in the best value for the department. In short, DOD stated that the use of the lowest-cost materials and methods should be an important consideration in facilities acquisition, but not the overriding goal. Our recommendation was not intended to restrict DOD in its efforts to achieve the best value, but rather to ensure adequate consideration of the long-term merits and economic impacts from building alternatives. 
We continue to believe that when all costs are considered over the long term, including environmental costs, the best value to DOD will normally be the construction alternative with the lowest life-cycle cost. Further, as stated in our recommendation, when revising its construction guidance based on the tri-service panel’s determinations, we believe that DOD should only make revisions that it deems to be appropriate. As a result, we believe DOD’s plan to incorporate the tri-service panel’s findings into its guidance will address the intent of the recommendation. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on the information discussed in this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. To assess the Army’s measurement and achievement of its military construction cost and timeline reduction goals, we interviewed Army headquarters and U.S. Army Corps of Engineers officials and reviewed applicable documentation concerning the Army’s military construction transformation strategy and the associated establishment and implementation of the Army’s goals to reduce construction costs and building timelines. We also reviewed guidance for internal controls and effective management practices that call for the monitoring of performance goals and discussed with Army officials the reasons that the Army did not establish a framework to monitor the achievement of its construction cost and building timeline reduction goals. 
To obtain some insight into the Army's accomplishment of its cost goal, we reviewed the construction cost estimates for a non-probability sample of 75 facility projects to determine whether a 15 percent reduction was taken in the estimated cost of the facilities, as called for in the Army's plan for implementing the goal. We selected projects for review from a list of all Army military construction projects approved during fiscal years 2007 through 2009. The projects selected represented a range of facility types and geographic locations and were in the categories of facilities subject to the cost reduction goal. More specifically, the 75 facilities included 15 fiscal year 2007 facilities funded under the base realignment and closure program, 30 projects from fiscal year 2008, and 30 projects from fiscal year 2009 for five facility types subject to the goal. The construction cost estimates were included in the project justifications submitted to the Congress as part of the Army's funding request. To obtain some insight into the Army's accomplishment of its building timeline goal, we used actual Army project timeline information to compare the average elapsed time between key building milestones for all completed projects approved during fiscal years 2007 through 2009 with the elapsed times for the same milestones for completed projects approved in fiscal years 2004 through 2006—the 3 years before the implementation of the Army's military construction transformation strategy. Although we did not independently validate the Army's building timeline data, we discussed with the officials steps they had taken to ensure reasonable accuracy of the data. As such, we determined the data to be sufficiently reliable for the purposes of this report. 
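The timeline comparison described above amounts to computing the average milestone-to-milestone duration for each cohort of completed projects and the percent change between cohorts. A minimal sketch, using hypothetical durations (the project data below are assumed for illustration, not the Army's actual figures):

```python
from statistics import mean

def percent_reduction(baseline_days, current_days):
    """Percent reduction in the average milestone-to-milestone duration
    between a baseline project cohort and a later cohort."""
    before, after = mean(baseline_days), mean(current_days)
    return 100 * (before - after) / before

# Hypothetical design-start-to-ready-for-occupancy durations, in days
fy2004_06 = [900, 1000, 1100]  # pre-transformation cohort
fy2007_09 = [820, 880, 970]    # post-transformation cohort
print(f"{percent_reduction(fy2004_06, fy2007_09):.0f}% reduction")
```

With these assumed inputs the computation yields an 11 percent reduction, which would fall short of a 30 percent goal.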
To evaluate the merits and economic impacts from the Army’s expanded use of wood materials and modular building methods for permanent facilities, we interviewed Office of the Secretary of Defense, Army, Navy, and Air Force officials and reviewed related documentation, policies, and construction guidance on the use of construction materials and building methods for military facilities. We also discussed how various construction materials and building methods could affect initial construction costs, long-term costs, service life, and durability of new military facilities and reviewed available documentation on the issue from the U.S. Army Corps of Engineers, the Naval Facilities Engineering Command, the Air Force Center for Engineering and the Environment, and from representatives of three industry groups—the American Wood Council, the Modular Building Institute, and the National Concrete and Masonry Association. To observe the use of alternative construction materials and methods and discuss the issue with local military officials, we visited five Army installations—Fort Bliss, Texas; Fort Bragg, North Carolina; Fort Carson, Colorado; Fort Detrick, Maryland; and Fort Meade, Maryland—where wood materials or modular building methods had been used to construct permanent facilities. During the visits, we obtained opinions and reviewed available information on the relative merits and economic impacts from using alternative construction materials and building methods. We also met with the developers of two military privatized unaccompanied personnel housing projects to discuss the reasons that the building materials and methods used in the projects were chosen. One privatized project was associated with the Navy and was located in the Norfolk, Virginia, area and the other project was associated with the Army and was located at Fort Bragg, North Carolina. 
To review potential conflicts between antiterrorism construction standards and sustainable design goals and the costs to incorporate the standards and goals in new facilities, we reviewed applicable Department of Defense (DOD) policies, guidance, goals, and costs related to incorporating antiterrorism construction standards and sustainable design goals in new military facilities. We also interviewed officials at the Office of the Secretary of Defense, the U.S. Army Corps of Engineers, the Naval Facilities Engineering Command, and the Air Force Center for Engineering and the Environment concerning how potential conflicts between the standards and the goals are identified and addressed and how incorporating the standards and goals affects the cost of new facilities. To obtain additional details on how the services were dealing with potential conflicts between the standards and the goals, we followed up with the project planners responsible for 90 military construction projects selected as a non-probability sample of Army, Navy, and Air Force projects approved during fiscal years 2007 through 2009. We selected projects for review from a list of all Army, Navy, and Air Force military construction projects approved during those fiscal years and chose projects to represent a range of facility types and geographic locations, including 10 Army, 10 Navy, and 10 Air Force projects approved in each of the three fiscal years—for a total of 30 projects approved in each fiscal year. During the follow-up, we asked the project planners whether the projects required any special steps or workarounds to meet both antiterrorism standards and sustainable design goals and whether the projects resulted in additional land use, community decentralization, or installation development sprawl. We did not independently verify the information provided by the project planners. 
In addition, to obtain additional details on the costs of incorporating antiterrorism standards and sustainable design features in new facilities, we reviewed information contained in the project justification of each of the 90 projects. The justifications included the estimated cost to incorporate antiterrorism standards in the project and, for fiscal year 2009 projects, the justifications also included the estimated cost to incorporate sustainable design goals. We conducted this performance audit from March 2009 to February 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In 2007, DOD issued guidance—the Unified Facilities Criteria 1-200-01, General Building Requirements—that applies to the design and construction of all new and renovated facilities throughout DOD. The guidance adopted the 2006 International Building Code, with some modifications and exceptions, as the building code for DOD. The International Building Code defines allowable types of construction based on factors such as the size, configuration, and planned facility use and categorizes planned buildings into five construction types. The construction type classifications are based on the fire-resistive capabilities of the predominant materials used in the construction progressing from type I, the most fire-resistive, to type V, the least fire-resistive. More specifically, types I and II construction incorporate materials such as steel, concrete, and masonry which, in accordance with applicable testing standards, are classified as noncombustible. 
Types III and V construction incorporate the use of any material permitted by the code, including combustible materials such as wood products and plastics. Type IV construction is related to the use of heavy timber. Table 4 illustrates the materials that are allowed to be used in the building elements—i.e., the structural frame, bearing walls, nonbearing walls, floor, and roof—of a facility built according to each type of construction. In each of the construction types, the intended level of fire protection is achieved by assembling building elements to meet fire-resistance ratings established by the International Building Code. In a type I steel-frame building, for example, spray-applied fire-resistive material can be used to enable the structural frame to achieve the 3-hour fire-resistance rating required by the code, and in a type V wood-frame building, covering exposed wood with drywall allows the affected building elements to achieve the 1-hour fire-resistance rating required by the code. In addition to the fire protection provided by the assembly of building elements, the code establishes requirements for use of automatic fire sprinkler systems based on factors including the planned use and size of a facility and the planned number of occupants. The International Building Code also serves to limit building size based on the level of fire protection provided by its construction. Because type I construction is the most fire-resistive of the construction types, the code places minimal limits on the dimensions of type I buildings. To account for the comparatively lower level of fire protection provided by type II through type V construction types, the code establishes limits on building dimensions. For example, a type V barracks building that is protected with an automatic sprinkler system is limited under the code to a maximum height of 4 stories, or 60 feet, with each story having a maximum floor area of 36,000 square feet. 
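The height and area limits cited above lend themselves to a simple rule check. The sketch below encodes only the type V barracks figures given in the text; the function name and interface are illustrative, not part of any building-code tool, and a real code analysis involves many factors this sketch omits.

```python
def check_type_v_barracks(stories, height_ft, floor_area_sqft, sprinklered):
    """Check a proposed type V barracks against the limits cited in the
    text: with an automatic sprinkler system, a maximum of 4 stories or
    60 feet, and 36,000 sq ft of floor area per story.
    Returns a list of violations; an empty list means the plan fits."""
    violations = []
    if not sprinklered:
        violations.append("cited limits assume an automatic sprinkler system")
    if stories > 4:
        violations.append(f"{stories} stories exceeds the 4-story limit")
    if height_ft > 60:
        violations.append(f"{height_ft} ft exceeds the 60-foot height limit")
    if floor_area_sqft > 36_000:
        violations.append(f"{floor_area_sqft} sq ft/story exceeds 36,000")
    return violations

print(check_type_v_barracks(4, 58, 34_000, True))   # within the cited limits
print(check_type_v_barracks(5, 65, 40_000, True))   # exceeds all three limits
```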
DOD has traditionally built permanent buildings using on-site construction where materials are delivered to the construction site and the materials are then assembled into a finished facility. However, as part of its military construction transformation strategy, the Army has allowed, among other alternative construction techniques, the use of modular construction. In this method of construction, building sections are fabricated off-site in a factory environment, transported to the construction site, and then connected to other building sections to assemble the facility. Although some on-site construction is normally needed to complete the facility, the Modular Building Institute reports that in a typical modular construction project between 80 and 95 percent of the total construction is completed at an off-site factory. Because the off-site construction can proceed under controlled conditions at the same time that on-site foundation and other work is being completed, modular construction projects can potentially be completed with less material waste and in less time compared to projects built with on-site construction methods. DOD's minimum antiterrorism construction standards are contained in DOD's Unified Facilities Criteria 4-010-01, DOD Minimum Antiterrorism Standards for Buildings. The standards include 22 mandatory standards and 17 recommended, but not required, measures designed to mitigate antiterrorism vulnerabilities and terrorist threats in inhabited buildings. Mandatory standards 1 through 5 are considered site planning standards. These standards note that operational, logistic, and security requirements must be integrated into the overall design of buildings, equipment, landscaping, parking, roads, and other features and that the most cost-effective solution for mitigating explosive effects on buildings is to keep explosives as far as possible from the buildings. Standards 6 through 9 are considered structural design standards. 
These standards require that additional structural measures be incorporated into building designs to ensure that buildings do not experience progressive collapse or otherwise experience disproportionate damage even if required standoff distances can be achieved. Standards 10 through 15 are considered architectural design standards. These standards cover many aspects of building layout that must be incorporated into designs to improve overall protection of personnel inside buildings. Standards 16 through 22 are considered electrical and mechanical design standards. These standards address limiting damage to critical infrastructure; protecting building occupants against chemical, biological, and radiological threats; and notifying building occupants of threats or hazards. Concerning the 17 recommended measures, DOD states that incorporating these measures can enhance site security and building occupants’ safety with little increase in cost and should be considered for all new and existing inhabited buildings. Table 5 provides a brief summary description of each mandatory standard and recommended measure. Sustainable design, or development, generally refers to efforts to design, construct, maintain, and remove facilities in ways that efficiently use energy, water, and materials; improve and protect environments; and provide long-term benefits for occupant health, productivity, and comfort. Sustainable design efforts are generally grouped under six fundamental principles—optimize site potential, optimize energy use, protect and conserve water, use environmentally preferable products and practices, enhance indoor environmental quality, and optimize operational and maintenance practices. Within the building industry, sustainable design is also known by such terms as green, high performance, or environmentally friendly. DOD sustainable design requirements are contained in DOD’s Unified Facilities Criteria 4-030-01, Sustainable Development. 
The document provides instruction, requirements, and references for DOD facility professionals and architect/engineer and construction contractors to apply sustainable development principles and strategies consistently in DOD facilities throughout their life cycle—from planning to programming and securing of funds; to site selection, design, and construction; to documentation and operations and maintenance; and to reuse or deconstruction and removal. The document’s purpose is to help produce and maintain DOD facilities that comply with existing service policies and federal mandates for sustainable design, energy efficiency, and procurement of environmentally preferable materials. Further, the document provides guidance to help reduce the total cost of facility ownership, while minimizing negative impacts on the environment and promoting productivity, health, and comfort of building occupants. To help measure the sustainability of new military buildings, DOD uses the U.S. Green Building Council’s Leadership in Energy and Environmental Design Green Building Rating System. Created in 1998, the rating system represents the Council’s effort to provide a nationally accepted benchmark for the design, construction, and operation of high-performance green buildings. The system also provides for a certification program for new construction projects by identifying a set of prerequisites and credits categorized under several environmental categories. The prerequisites are required tasks in order to be considered for a certification. The credits are tasks, steps, or measures that could be incorporated into a construction project and include a variable number of points—some based on performance levels and some based on addressing distinct measures related to an overarching sustainable concept. 
The U.S. Green Building Council can award a specific certification level to a new building depending on the total number of points achieved in the design and construction of the building. The certification levels for new construction and renovation projects under the 2009 rating system include: certified (40 to 49 points), silver (50 to 59 points), gold (60 to 79 points), and platinum (80 points and above). For fiscal year 2009, DOD set a goal that at least 70 percent of DOD's new buildings would be silver-level certifiable. However, each of the military services set a goal that beginning in fiscal year 2009 all new major military construction buildings would be designed and constructed to be silver-level certifiable. Table 6 below shows by category the prerequisites, credits, and available points under the U.S. Green Building Council's Leadership in Energy and Environmental Design Green Building Rating System. In addition to the contact named above, Terry Dorn, Director; Michael Armes, Assistant Director; Laura Durland, Assistant Director; Grace Coleman; George Depaoli; Tobin McMurdie; Jeanett Reid; and Gary Phillips made significant contributions to this report. Federal Energy Management: Agencies are Taking Steps to Meet High-Performance Federal Building Requirements, but Face Challenges and Need to Clarify Roles and Responsibilities. GAO-10-22. Washington, D.C.: October 30, 2009. Real Property: Infrastructure Investment Presents Opportunities to Address Long-standing Real Property Backlogs and Reduce Energy Consumption. GAO-09-324T. Washington, D.C.: January 22, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009. Defense Infrastructure: Challenges Increase Risks for Providing Timely Infrastructure Support for Army Installations Expecting Substantial Personnel Growth. GAO-07-1007. Washington, D.C.: September 13, 2007. Defense Infrastructure: Long-term Challenges in Managing the Military Construction Program. GAO-04-288. 
Washington, D.C.: February 24, 2004. Military Housing: Opportunities That Should Be Explored to Improve Housing and Reduce Costs for Unmarried Junior Servicemembers. GAO-03-602. Washington, D.C.: June 10, 2003.
To meet the challenges associated with a threefold increase in the Army's military construction program between fiscal years 2005 and 2009, the Army adopted numerous changes, including the expanded use of wood materials and modular building methods, designed to reduce building costs and timelines for new facilities. With the changes, the Army set goals to reduce building costs by 15 percent and timelines by 30 percent. The Army, Navy, and Air Force have also faced challenges associated with incorporating both antiterrorism construction standards and sustainable design ("green") goals into new facilities. GAO was asked to (1) assess the Army's progress in meeting its goals, (2) evaluate the merits of the Army's expanded use of wood materials and modular building methods, and (3) examine potential conflicts between antiterrorism construction standards and sustainable design goals. GAO reviewed relevant documentation, interviewed cognizant service officials, analyzed selected construction project data, and visited five Army installations to review facilities built with alternative materials and methods. The Army set goals to reduce its estimated construction costs by 15 percent and building timelines by 30 percent, but it did not monitor goal achievement and thus did not know to what extent the goals had been met or whether changes made to its military construction program resulted in actual reductions in facility costs. GAO's review of selected project information showed that the Army did reduce the estimated cost of some facility construction projects and shortened building timelines during fiscal years 2007 through 2009, but it did not meet its overall stated goals. For example, GAO found that the average building timeline for one key measurement (design start to ready for occupancy) was reduced by about 11 percent--an improvement, but less than the 30 percent goal. 
The Army discontinued the numerical goals in fiscal year 2010, and Army officials stated that, although the specific goals might not have been achieved, they believed that the Army's efforts were successful in dampening the escalation of Army facilities' costs and would continue to help ensure cost-effective and timely facilities in future years. The Army appears to have achieved some savings in selected construction projects by expanding the use of wood materials and modular construction methods for some of its facilities, but GAO found little quantitative data on whether the use of these materials and methods will result in savings over the long term compared to the traditional use of steel, concrete, and masonry materials and on-site building methods. Without long-term or life-cycle analyses that consider not only initial construction costs but also possible differences in facility service lives and annual operating and maintenance costs between the construction alternatives, it is not clear that the Army's expanded use of wood materials and modular building methods will achieve the Army's intended purpose of reduced facility costs over the long term. The Navy and the Air Force generally disagreed with the Army's view and believed that the use of wood materials and modular construction will result in facilities with shorter service lives and higher life-cycle costs. However, none of the services had the analyses to support its views. Without additional study and analysis, DOD will not know whether military construction program guidance needs to be changed to ensure that facilities are constructed with materials and methods that meet needs at the lowest cost over the long term. Conflicts between antiterrorism building standards and sustainable design goals exist, but military service officials stated that the conflicts are considered to be manageable. 
GAO's review of 90 Army, Navy, and Air Force military construction projects, approved during fiscal years 2007 through 2009, showed that although incorporating the standards and the goals in new facilities added to construction costs, 80 of the projects required no special steps or workarounds to meet both the standards and the goals. However, service officials noted that achieving higher levels of sustainability in future construction projects while still meeting the antiterrorism standards would further increase initial facility costs and create additional design challenges.
Section 861 of the NDAA for FY2008 directed the Secretary of Defense, the Secretary of State, and the USAID Administrator to sign an MOU related to contracting in Iraq and Afghanistan. The law specified a number of issues to be covered in the MOU, including identifying common databases to serve as repositories of information on contract and contractor personnel. The NDAA for FY2008 required the databases to track the following, at a minimum: for each contract that involves work performed in Iraq or Afghanistan, a brief description of the contract, its total value, and whether it was awarded competitively; and for contractor personnel working under contracts in Iraq or Afghanistan, total number employed, total number performing security functions, and total number killed or wounded. In July 2008, DOD, State, and USAID signed an MOU in which they agreed SPOT would be the system of record for the statutorily required contract and personnel information. SPOT is a Web-based system initially developed by the U.S. Army to track detailed information on a limited number of contractor personnel deployed with U.S. forces. The MOU specified that SPOT would include information on DOD, State, and USAID contracts with more than 14 days of performance in Iraq or Afghanistan or valued at more than $100,000, as well as information on the personnel working under those contracts. Each agency further agreed to ensure that data elements related to contractor personnel, such as the number of personnel employed on each contract in Iraq or Afghanistan, are entered into SPOT accurately. Although the law only directs the agencies to track aggregate data, SPOT is currently configured in a manner that tracks individuals by name and records information such as the contracts they are working under, deployment dates, blood type, and next of kin. 
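The contrast drawn above, between the aggregate counts the statute requires and the per-person detail SPOT actually records, can be pictured as a simple data model. The field names and types below are assumptions for illustration, not SPOT's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class SpotPersonnelRecord:
    """Illustrative model of the per-person detail the text says SPOT
    holds; names and types are assumptions, not SPOT's real schema."""
    name: str
    nationality: str                              # "US", "third country", or "local"
    contract_numbers: list[str] = field(default_factory=list)
    deployment_start: Optional[date] = None
    deployment_end: Optional[date] = None
    blood_type: Optional[str] = None
    next_of_kin: Optional[str] = None
    performs_security_functions: bool = False

def aggregate(records):
    """The aggregate personnel counts section 861 itself directs the
    agencies to track (casualty counts omitted for brevity)."""
    return {
        "total_employed": len(records),
        "total_security": sum(r.performs_security_functions for r in records),
    }

recs = [
    SpotPersonnelRecord("A. Example", "US", ["contract-123"],
                        performs_security_functions=True),
    SpotPersonnelRecord("B. Example", "local"),
]
print(aggregate(recs))  # {'total_employed': 2, 'total_security': 1}
```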
The agencies agreed that contract-related information, such as value and extent of competition, is to be imported into SPOT from the Federal Procurement Data System – Next Generation (FPDS-NG), the federal government's system for tracking information on contracting actions. Also, per the MOU, DOD is responsible for all maintenance and upgrades to the system, but the agencies agreed to negotiate funding arrangements for any agency-unique requirements. Since the signing of the July 2008 MOU, the requirements of section 861 have been amended. The Duncan Hunter National Defense Authorization Act for Fiscal Year 2009 specified additional matters to be covered in the agencies' MOU to address criminal offenses committed by or against contractor personnel. Additionally, the NDAA for FY2010 amended the original requirements by redefining "contract in Iraq and Afghanistan" to include grants and cooperative agreements and redefining "contractor" for these purposes to include grantees and cooperative agreement recipients. The NDAA for FY2010 also revised the minimum threshold for tracking contracts, task and delivery orders, grants, and cooperative agreements from 14 days of performance in Iraq or Afghanistan to 30 days. In April 2010, the three agencies signed a new MOU to incorporate these statutory changes. DOD, State, and USAID have phased in their implementation of SPOT, with each developing its own policies and procedures governing the use of SPOT. DOD designated SPOT in January 2007 as its primary system for collecting data on contractor personnel deployed with U.S. forces. At that time, it directed contractor firms to enter by name all U.S., third country, and local nationals working under its contracts in Iraq or Afghanistan into SPOT. 
DOD officials informed us that they have not issued a policy directing that personnel working under assistance instruments be entered into SPOT because the department has made very limited use of these instruments in Iraq or Afghanistan. State issued a policy in March 2008 requiring contractors to enter data on their personnel working in Iraq and Afghanistan into SPOT. An additional directive was issued in January 2009 to expand this requirement to personnel working under assistance instruments in the two countries. USAID issued a directive in April 2009 requiring contractors and assistance recipients in Iraq to begin entering personnel data into SPOT. In July 2010, USAID issued a directive that expanded that requirement to contractors and assistance recipients in Afghanistan. DOD, State, and USAID have encountered several practical and technical challenges that undermined SPOT’s ability to accurately and reliably track personnel, as well as contracts and assistance instruments, as agreed in the MOUs. Although DOD, State, and USAID revised their MOU in April 2010 to incorporate changes pertaining to the use of SPOT, they lacked agreement on how to proceed with its implementation. This lack of agreement existed partly because the agencies have not assessed their respective agency information needs for managing contracts and assistance instruments in Iraq and Afghanistan and how SPOT should be designed to meet these needs. SPOT’s implementation to date falls short of tracking information as agreed to in the MOUs. Specifically, agency policies and other challenges have limited which personnel have been entered into the system and tracked, including those performing security functions. Furthermore, while SPOT has the capability to record when personnel have been killed or wounded, such information has not been regularly updated. Finally, SPOT does not have the capability to track the contract and assistance instrument data elements as agreed to in the MOUs. 
For personnel working under contracts and assistance instruments, we identified at least three challenges the agencies faced in ensuring that SPOT contained complete and accurate information. Specifically: USAID and State policies limited the extent to which local national personnel were entered into SPOT. Following the passage of the NDAA for FY2008, USAID and State developed agency-specific policies regarding SPOT's implementation. However, in some instances these policies limited the extent to which local nationals were required to be entered into the system. USAID's April 2009 contract and assistance policy specified only that contractor and assistance personnel deployed to Iraq must be registered in SPOT. The policy explicitly excluded Iraqi entities and nationals from being entered into SPOT until a classified system is established. It was not until July 2010 that USAID directed that its contractor and assistance personnel working in Afghanistan be accounted for in SPOT. The policy notes that procedures will be provided separately for entering information on Afghan nationals into SPOT, but as of September 2010, such procedures have not been developed. As a result of these policies, information on local nationals working under USAID contracts and assistance instruments in Iraq and Afghanistan is still not being tracked in SPOT. State's assistance policy directs that U.S. and third country nationals working under grants must be entered into SPOT. While the policy specifies that local nationals should be entered into the system, State officials told us that agency staff can use their discretion to determine whether local national personnel working under grants are entered into SPOT. In contrast, State requires all U.S. citizens, third country, and local nationals working under its contracts to be entered into SPOT. In explaining why their policies make exceptions for local nationals, officials from USAID and State cited security concerns. 
USAID officials told us that they held off entering Iraqi or Afghan nationals into SPOT because identifying local nationals who work with the U.S. government by name could place those individuals in danger should the system be compromised. Similarly, State officials cited concern for the safety of these individuals should SPOT, with its detailed personnel information, be compromised. Practical limitations hindered the agencies’ ability to track local national personnel. Even when local national personnel are required to be entered into SPOT, agency officials have explained that such personnel are particularly difficult to track, especially in Afghanistan, and as a result, their numbers in SPOT are not a close representation of their actual numbers. This is primarily due to practical limitations the agencies encountered, including: Many local nationals working under contracts and assistance instruments are at remote locations and their numbers can fluctuate daily. DOD officials in Iraq and Afghanistan explained that this is especially true for construction projects, where the stage of construction and season can affect the total number of personnel working on a project. For example, DOD officials in Afghanistan told us that at one project site the number of local national personnel working fluctuated anywhere from 600 to 2,100. Further, DOD contracting officials told us in some instances it could be weeks before they are notified that local national personnel are no longer working on a particular project. This has limited the ability to track, in real time, the status of these personnel in SPOT. Also, for personnel working at remote locations, the ability of U.S. government officials to verify the completeness of information in SPOT is hindered by security conditions that make it difficult for them to visit regularly, and they cannot use their limited time on site to verify personnel information. 
Local nationals working under DOD, State, or USAID contracts and assistance instruments rarely need SPOT-generated letters of authorization (LOAs) because they are not accessing U.S. facilities or using U.S. government services. In contrast, U.S. and third country nationals typically need a SPOT-generated LOA, for example, even to enter Iraq or Afghanistan, and, therefore, are more likely to be entered into SPOT. As we have previously reported, the need for a SPOT-generated LOA has served as the primary factor and incentive for ensuring that personnel have been entered into the system. Information necessary for entering personnel into SPOT may not be available. DOD, State, and USAID officials told us some local national contractors are hesitant or simply refuse to submit information on their personnel because of safety concerns. Additionally, some information required for SPOT data fields, such as first and last names and date of birth, may not exist or be known. This is particularly true in Afghanistan, where it is common for local nationals to have only one name and know only their approximate year of birth. Limited access to reliable internet connections in Iraq and Afghanistan inhibits local firms' ability to enter personnel information into SPOT. Since SPOT is a Web-based system that requires internet access for extended periods of time to input detailed personnel information, agency officials noted that this is a major impediment to the widespread use of SPOT in both countries. Contractors and assistance recipients have not kept SPOT updated. Although the agencies have increasingly required their contractors and assistance recipients to enter personnel information into the system, there has been little emphasis placed on ensuring that the information entered into SPOT is up to date. Specifically, contractors and assistance recipients have not consistently closed the accounts of their personnel once they have left Iraq or Afghanistan.
As a result, SPOT does not accurately reflect the number of contract and assistance personnel in either country, and in some cases the numbers may be overstated. SPOT program officials told us that in March 2010 they began periodically reviewing SPOT to close out the accounts of any personnel who either did not actually travel to Iraq or Afghanistan or whose estimated deployment ending date was 14 days overdue. Based on this review, in April 2010 alone, they identified and closed the accounts of over 56,000 such personnel who had been listed in SPOT as still being deployed. Although SPOT was designated as a system for tracking the number of personnel performing security functions, it cannot be used to reliably distinguish personnel performing security functions from other contractors. SPOT program officials explained that the number of security personnel working under contracts and assistance instruments for the three agencies can be identified using multiple methods, all of which have limitations and yield different results, as shown in table 1. However, in acknowledging the limitations of these methods, the officials noted that they are developing guidance that better explains the different methods and the results they yield. The three methods used to count security contractors include: The common industry classification system identifies the types of goods and services the firm provided under the contract. However, by using this contract classification system to calculate the number of security contractors, other personnel working on the security contract but not performing security functions, such as administrative and support staff, would be included in the count. Job titles are to be entered into SPOT by employers for each individual. SPOT program officials identified five job titles that they include in counts of security personnel. 
These officials acknowledged there is a risk that an employee providing security services may have a job title other than one of those five and, therefore, would not be included in the count. The weapon authorization data field in SPOT identifies personnel who have been authorized to carry a firearm. Employers of armed security contractors are required to enter this information into SPOT as part of DOD’s process to register and account for such personnel in each country. However, USAID officials in Iraq explained that security personnel working under the agency’s contracts and assistance instruments receive authorization to carry firearms from the Iraqi government, not DOD, and are not identified in SPOT as having a weapons authorization. Further, some contractors performing security functions are not authorized to carry weapons and would, therefore, not be included in a count using this method. Conversely, some personnel who are not performing security functions have been authorized to carry weapons for personal protection and would be included in the count. Regardless of the method employed to identify personnel in SPOT, it appears that not all personnel performing security functions are being captured in the system. For example, based on an analysis of SPOT data, no more than 4,309 contractor personnel were performing security functions for DOD in Afghanistan during the second quarter of fiscal year 2010. In contrast, DOD officials overseeing armed contractors in Afghanistan estimated the total number of DOD security contractors in Afghanistan for the same time period was closer to 17,500. With regard to tracking personnel who were killed or wounded while working on contracts and assistance instruments in Iraq and Afghanistan, SPOT was upgraded in January 2009 so that contractors could update the status of their personnel in the system, including whether they were killed or wounded. 
However, officials from the three agencies informed us they do not rely on SPOT for such information because contractors and assistance recipients generally have not recorded in SPOT whether personnel have been killed or wounded. This is evidenced by the fact that when we compared information in SPOT to DBA insurance case data provided by Labor on 213 contractors who had been killed in Iraq or Afghanistan during our review period, only 78 of the contractors were in SPOT and, of these, only 9 were listed as having been killed. SPOT program officials explained that SPOT users may not be aware of the requirement to update the system with such information and they are working to develop new guidance that clarifies the requirement. SPOT currently cannot be used to track information on contracts and assistance instruments as agreed to in the MOUs. For example, SPOT still cannot import contract dollar values directly from FPDS-NG. SPOT program officials told us that the system has been reconfigured to import data from FPDS-NG, but the direct link between the two systems will not occur in 2010 as previously estimated. The officials explained that they are coordinating with FPDS-NG officials to determine when the link can be established. Further, while the MOU was updated in April 2010 to cover assistance instruments, the revised MOU did not address how assistance instrument information, such as value and competition, would be entered into SPOT as such information is not available through FPDS-NG. USAID and State officials informed us they do not plan to directly link SPOT and the systems that currently track their respective assistance instruments. They explained that this is due in part to the fact that both agencies are implementing new tracking systems. Without such links the agencies will have to manually enter assistance information into SPOT. 
In addition, although SPOT was upgraded in 2009 to allow users to include information on whether the contract or assistance instrument was awarded using competitive procedures, the system is not a reliable source for this information because it is generally not being entered. For example, we found that competition information had been entered for only 45 percent of the contracts in SPOT with performance during our review period. There has been a lack of agreement among, and in some instances within, DOD, State, and USAID about how to proceed with SPOT's implementation. At a March 2010 congressional hearing, officials from the three agencies testified that they would modify how SPOT tracked personnel. Specifically, they explained that the system would be modified to allow users to enter the aggregate number of personnel working on a particular contract or assistance instrument, as opposed to requiring each individual to be entered by name. The proposed modification was primarily in response to USAID's concerns that the cost and resources needed to enter all of the currently required data outweigh the benefits of having detailed information; it was also intended to alleviate security concerns over entering personal information on local nationals into SPOT. However, as of September 2010, SPOT still does not allow users to enter aggregate personnel data, as the agencies have disagreed on who will pay for the modification and what approach to take. DOD estimated that it would cost as much as $1.1 million to reconfigure the system to allow aggregate data to be entered and stored. Since the modification would be made to address USAID's concerns, DOD officials noted that in accordance with the MOU, USAID should cover the cost. However, USAID officials informed us that the modification would not solely benefit USAID as State and even DOD components have expressed interest in having SPOT track aggregate personnel information.
State began conducting preliminary tests on an approach that would upload into SPOT groups of unique records assigned to each local national instead of individual names and associated personal data. In August 2010, DOD and State officials indicated that they had successfully uploaded the first batch of records into SPOT using this method. Although USAID’s preferred approach would have users directly enter the total number of U.S., third country, and local nationals working under each contract or assistance instrument, USAID officials recently indicated the agency would begin testing State’s approach as a low-cost solution. The lack of agreement on how to proceed with SPOT’s development and implementation can be partly attributed to the fact that the agencies designated it as their system of record for meeting statutory requirements without first identifying their information needs. SPOT program officials acknowledged that they were unaware of the informational needs of the contracting commands—required users of SPOT—or whether the commands had any uses for the detailed data contained in the system. Further, the agencies do not have a shared understanding of the value of tracking detailed data, particularly since the level of detail required for all contractor and assistance personnel in SPOT is greater than what is statutorily required. For example, senior USAID contracting and assistance officials told us the agency had no plans to use the detailed information tracked in SPOT as a tool for managing and overseeing its contracts and assistance instruments. They further noted SPOT is being implemented only because the agency is statutorily required to have a system for tracking such information. Even within agencies there is not consensus on the need for detailed information on all contractor and assistance personnel. 
For example, while DOD policy requires all contractor personnel to be individually entered into SPOT, several senior DOD officials we met with in Iraq and Afghanistan stated that they do not see the benefit of collecting detailed information on all individuals, especially local nationals working at remote locations, given the challenges associated with collecting such information and the likelihood of it being incomplete or inaccurate. However, SPOT program officials we met with explained that while they recognize that the benefits of the information collected through SPOT will vary throughout organizations, they are working to identify other potential users of SPOT data. For example, they noted that some users find detailed personnel information valuable, such as base commanders who could use the system to obtain insight as to who is on their installations. Senior officials from DOD, State, and USAID agreed that the agencies should obtain an understanding of their respective informational needs and ensure that a system is in place to collect that information at the appropriate level of detail. Without such an understanding, they noted that the agencies risk expending resources unnecessarily in difficult environments trying to collect and verify detailed data that may be of limited utility. Last year, we reported on the challenges associated with the agencies’ implementation of SPOT. 
To address the shortcomings identified in our 2009 report, we recommended that the Secretaries of Defense and State and the USAID Administrator jointly develop and execute a plan with associated time frames for the continued implementation of the NDAA for FY2008 requirements, including: ensuring the agencies' criteria for entering contracts and contractor personnel into SPOT are consistent with the NDAA for FY2008 and with the agencies' respective information needs for overseeing contracts and contractor personnel, revising SPOT's reporting capabilities to ensure they fulfill statutory requirements and agency information needs, and establishing uniform requirements on how contract numbers are to be entered into SPOT so that contract information can be pulled from FPDS-NG. DOD and State disagreed with the need for the agencies to develop and execute a plan to address the issues we identified. They cited ongoing coordination efforts and planned upgrades to SPOT as sufficient. While USAID did not address our recommendation, it noted plans to continue meeting with DOD and State regarding SPOT. At that time, we cautioned that continued coordination without additional actions would not be sufficient and that a plan would help the agencies identify the concrete steps needed to help ensure that the data in SPOT are sufficiently reliable to fulfill statutory requirements and their respective agencies' needs. As our current work demonstrates, many of the issues with the agencies' implementation of SPOT that our recommendation was intended to address have not been resolved. In particular, the agencies have not assessed their respective informational needs or determined how SPOT could be best implemented to meet those needs. Further, the system still cannot reliably track statutorily required data.
DOD, State, and USAID reported to us that as of March 2010 there were 262,681 contractor and assistance personnel in Iraq and Afghanistan, 18 percent of whom were performing security functions. DOD reported 207,553 contractor personnel, while State and USAID reported 19,360 and 35,768 contractor and assistance personnel, respectively. Of the personnel reported by the three agencies, 88 percent were contractors and the remaining 12 percent worked under assistance instruments. Due to limitations with SPOT, the reported data were obtained primarily through periodic agency surveys and reports from contractors and assistance recipients. We determined that caution should be exercised when identifying trends or drawing conclusions about the number of contractor and assistance personnel in either country based on the data the agencies reported to us. Several factors, many of which are similar to the challenges with SPOT, hindered the agencies' ability to collect accurate and reliable personnel data, including difficulty obtaining information on the number of local nationals, low response rates to agency surveys, and limited ability to verify the accuracy or completeness of the personnel data reported. Despite such limitations, the officials characterized the data reported to them and provided to us as the best data available on the number of contractor and assistance personnel in the two countries.

DOD Contractor Personnel

As of the second quarter of fiscal year 2010, DOD reported to us that there were 95,461 contractor personnel in Iraq and 112,092 contractor personnel in Afghanistan (see table 2 and also app. II for additional DOD contractor personnel data). Of that total, approximately 14 percent were reported to be performing security functions. DOD reported that it had no personnel working under grants or cooperative agreements in either country during our review period. The contractor personnel numbers were obtained through the U.S.
Central Command’s (CENTCOM) quarterly census. CENTCOM initiated its quarterly census of contractor personnel in June 2007 as an interim measure until SPOT was fully implemented, and for our reporting period, DOD continued to use the census to count the number of DOD contractor personnel in Iraq and Afghanistan. The census is dependent on contractor firms reporting their personnel data to DOD components, which then compile the data and report them to CENTCOM at the end of each quarter. According to DOD officials, the quarterly census remains the most reliable source of contractor personnel data. However, DOD officials overseeing the census acknowledged that the census numbers represent a rough approximation of the actual number of contractor personnel who worked in either country. These officials told us that because of how the data were collected and reported by the various DOD components, it was difficult to compile and obtain a more precise count of contractor personnel. Specifically, there are several factors that hindered DOD’s ability to collect accurate and reliable data, including difficulty in counting local nationals and an inability to validate the data. As military operations increase in Afghanistan, efforts to obtain an accurate count of the contractor workforce may be more complicated than in Iraq, because DOD’s contractor workforce in Afghanistan consists of more local nationals than in Iraq, and data on local nationals are more difficult to obtain than data on U.S. citizens and third country nationals. The reasons cited— fluctuating numbers and work at remote locations—are similar to those cited for why it is challenging to ensure that local nationals are entered into SPOT. 
DOD officials in both Iraq and Afghanistan explained that security conditions limit their ability to conduct site visits to remote locations and added that while at sites their focus is primarily on assessing the status of a project, as opposed to checking on the number of personnel working. Moreover, the challenges associated with CENTCOM’s quarterly census were heightened by the transition to an automated census. In the second quarter of fiscal year 2010, DOD began to transition from the manually compiled CENTCOM census to eventual reliance on SPOT. In doing so, DOD used a SPOT-populated census template—called SPOT-Plus—as an interim step. Although the DOD official responsible for the SPOT program has stated that CENTCOM’s manual census was cumbersome, resource intensive, and provided only a snapshot in time, DOD officials implementing SPOT-Plus stated that it was even more cumbersome and resource intensive. In particular, the SPOT-Plus process required reporting units to manually provide data on contracts and contractor personnel—as was the case with the manual census—but the number of census data fields increased from 18 to over 50. Although DOD issued instructions to facilitate the initial transition from the quarterly census to SPOT-Plus, the process did not go as well as anticipated. CENTCOM officials told us that in some cases reporting units responded to the second quarter census by using an older census spreadsheet that was not populated with SPOT data or did not respond at all. DOD officials stated that in some instances there was confusion as to who should compile and verify the contract and contractor personnel data and the task was mistakenly delegated to DOD organizations that were not privy to or responsible for that information. 
Furthermore, since the second quarter SPOT-Plus template did not provide a way to differentiate the numbers of private security contractors from the total, CENTCOM had to subsequently request that reporting units provide this information in a separate section of the SPOT-Plus template. CENTCOM and SPOT program officials stated that many of the challenges experienced with the second quarter SPOT-Plus census have since been addressed. SPOT program officials now estimate that the transition from the census to SPOT will be completed no later than the fourth quarter of fiscal year 2011. There continue to be considerable discrepancies between the contractor counts obtained through the census and SPOT (see table 3). In some instances, DOD contractor personnel numbers in SPOT may be overreported, and in others, underreported. For example, in comparing SPOT-reported data to census data at the end of the second quarter of fiscal year 2010, we found that SPOT included almost 18,000 more personnel working in Iraq than the census. Conversely, SPOT did not include more than 70,000 personnel working in Afghanistan who were included in the census. Further, DOD officials from one service component in Afghanistan told us SPOT contained data on 4,200 contractor personnel who worked on their contracts, but their census submission to CENTCOM showed there were over 40,000 personnel working on their contracts for the same period. As of the end of the fiscal year 2010 second quarter, State reported 11,236 personnel working under contracts in Iraq and Afghanistan and an additional 8,074 working under assistance instruments, while USAID reported 12,229 contractor personnel and 23,539 assistance personnel in the two countries. Table 4 depicts the total number of State- and USAID-reported contractor and assistance personnel in the two countries, while appendix II provides additional State and USAID contractor and assistance personnel data.
Of the total number of contractor and assistance personnel working in Iraq and Afghanistan at the end of the second quarter in fiscal year 2010, State reported that about 35 percent were performing security functions. USAID reported that about 32 percent of the total number of contractors and assistance personnel working in Iraq and Afghanistan were performing security functions. Table 5 depicts the numbers State and USAID reported to us regarding personnel performing security functions under contracts and assistance instruments. In some instances, State has contracted directly for personnel to perform security services, for example, to guard the embassies in Baghdad and Kabul. Additionally, State and USAID contractors and assistance recipients have subcontracted for security services to protect their personnel and facilities. State and USAID took similar approaches to provide us with the numbers of contractor and assistance personnel for fiscal year 2009 and the first half of 2010. Although State now requires contract personnel and some grant personnel to be entered into SPOT, to respond to our request, State's bureaus generally relied on manually compiled surveys—with at least one bureau supplementing its response with SPOT data. Similarly, USAID relied on a combination of periodic surveys and data obtained through quarterly reports submitted by the agency's contractors and assistance recipients. However, State officials informed us that their contractors and assistance recipients are not required to provide such reports and, therefore, response rates to requests for personnel numbers are low. For example, officials with one State office noted that none of its Afghan grant recipients provided personnel numbers.
In contrast, USAID officials in Iraq indicated that they regularly receive personnel numbers from all of their contractors and assistance recipients, while USAID officials in Afghanistan we spoke with stated they generally receive responses from about 70 percent of their contractors and assistance recipients. We identified several contracts and assistance instruments for which personnel information was not provided. For example, we identified a State contract to design and build offices and housing in Afghanistan with obligations totaling $234 million for which personnel numbers were not reported. In another example, we identified four USAID cooperative agreements for a program promoting food security in Afghanistan with total obligations of $144 million for which information on the number of personnel working on the agreement was not provided. Agency officials acknowledged several additional challenges in providing us with complete data on their contract and assistance personnel in Iraq and Afghanistan. First, not all local nationals working on State and USAID contracts and assistance instruments were included in the numbers they provided to us. As with SPOT, local nationals were not always captured in personnel counts because it was either not feasible or too difficult to obtain accurate information. In addition, State and USAID officials stated that they have limited ability to verify the accuracy or completeness of the personnel data provided. State officials in Iraq and Afghanistan informed us that they have no visibility into the extent to which contractors use subcontracted employees and generally are not able to track the numbers of subcontract personnel. However, USAID officials in Iraq explained that they have instituted measures to review the reported data to improve accuracy. 
Although agency officials acknowledged that not all contractor and assistance personnel were being tracked over the course of our review period, they still considered the data provided to our requests for personnel information to be more accurate than SPOT. Reflective of their policies regarding SPOT's use and the challenges associated with collecting data through SPOT, there are significant discrepancies—both in terms of under- and overreporting—between the numbers in SPOT and what was reported to us by State and USAID. For example, as of the end of the second quarter of fiscal year 2010, there were 7,077 fewer State contractor and assistance personnel in SPOT than were reported to us. In fact, SPOT did not include any of the 5,741 personnel working under assistance instruments in Afghanistan that State reported to us. The discrepancies for USAID were also notable, given that during our review period USAID did not require the use of SPOT in Afghanistan or for Iraqi nationals. For USAID, there were only 579 personnel in SPOT as of the end of the second quarter of fiscal year 2010, 35,189 fewer than what the agency reported to us. Although DOD, State, and USAID are required to track the number of personnel killed or wounded while working on contracts and assistance instruments in Iraq and Afghanistan, only State and USAID tracked this information during our review period. State reported to us that 9 of its contractor and assistance personnel were killed and 68 were wounded during fiscal year 2009 and the first half of fiscal year 2010. For the same period, USAID reported to us that 116 contractor and assistance personnel were killed and 121 were wounded (see table 6). Both agencies noted that some of the reported casualties resulted from nonhostile actions. For example, USAID reported that 3 contractors sustained injuries in a traffic accident. These data were based on reports submitted to State and USAID by contractors and assistance recipients.
Without alternative sources of data, we could not verify whether State's and USAID's data were complete. However, a recent report from the USAID Inspector General suggested that not all security contractors in Afghanistan are reporting incidents that result in personnel being injured or killed. USAID, Audit of USAID/Afghanistan's Oversight of Private Security Contractors in Afghanistan, Audit Report Number 5-306-10-009-P (May 21, 2010). DOD officials informed us they eventually intend to track the number of killed and wounded contractor personnel through SPOT. DOD reported that it has other systems that collect information on contractor casualties, but they have limitations. For example, the Defense Casualty Information Processing System contains information on American citizens who were killed or wounded while working as contractors or civilian employees. However, the system does not differentiate between direct-hire government civilians and contractors and does not include data on local or third country nationals. Additionally, some individual components within the department receive reports on killed or wounded contractor personnel, but such reports are not consistently tracked in a readily accessible or comprehensive manner. For example, contracting officials in Afghanistan explained that they receive serious incident reports, which include information on incidents in which personnel were killed or wounded, submitted by their private security contractors. A DOD official in Afghanistan knowledgeable on the matter cautioned, though, that the reports most likely understate the actual number of contractor casualties, as not all contractors submit reports as required. Absent a reliable system for tracking killed or wounded contractor personnel, DOD officials referred us to Labor for data on cases filed under DBA for killed or injured contractors—as they have for our prior reports.
However, as we previously reported, Labor’s DBA case data do not provide an appropriate basis for determining the number of contractor personnel killed or wounded in Iraq and Afghanistan. Under the NDAA for FY2008, as amended, Labor—unlike DOD, State, and USAID—has no responsibility for tracking killed or wounded contractor personnel, and as such its data were not designed to do so. Labor officials also explained that not all deaths and injuries reported under DBA would be regarded as contractors killed or wounded within the context of the NDAA for FY2008. They further explained that injuries to local and third country contractors, in particular, may be underreported. While Labor’s DBA data do not serve as a proxy for fulfilling the NDAA for FY2008 requirements, Labor’s DBA case data provide insights into contractor deaths and injuries in Iraq and Afghanistan. According to data provided by Labor, there were 10,597 DBA cases, including 213 cases reporting contractor deaths, that resulted from incidents in Iraq and Afghanistan during fiscal year 2009 and the first half of fiscal year 2010. As shown in table 7, the number of deaths and injuries in Iraq has declined since 2007. In Afghanistan, the number of contractor deaths has increased since 2007, while the number of injury cases has fluctuated from over 1,100 to almost 2,000. However, Labor’s DBA data cannot provide insight into the number of personnel working under assistance instruments who have been killed or injured in Iraq or Afghanistan as such instruments are not subject to DBA. See appendix III for additional data regarding DBA cases for contractor deaths occurring during our review period. Based on our analysis of all 213 DBA cases for contractor personnel killed in Iraq and Afghanistan during our review period, we determined that 49 percent of deaths resulted from hostile incidents. 
When comparing deaths in Afghanistan to those in Iraq, we found that 62 percent of the reported fatalities in Afghanistan were caused by hostile incidents, whereas in Iraq, 26 percent were the result of hostile actions, as shown in figure 1. In both countries, improvised explosive devices were a primary cause of death for incidents involving hostile actions. In one incident, a vehicle carrying a group of engineers to a project site hit such a device, resulting in eight fatalities. In both countries, nonhostile deaths resulted from various types of accidents or health issues. For example, we found that at least 31 percent of the nonhostile fatalities were the result of health conditions or illnesses, such as cardiac arrest. DOD, State, and USAID collectively obligated $35.7 billion on 133,283 contracts and $1.8 billion on 668 assistance instruments with performance in Iraq and Afghanistan during fiscal year 2009 and the first half of fiscal year 2010. DOD accounted for the vast majority of all contract obligations, while State and USAID accounted for all of the reported obligations on grants and cooperative agreements. Whether an agency chooses a contract or an assistance instrument depends fundamentally on whom the agency determines to be the primary beneficiary. With contracts, the goods or services obtained are for the direct benefit or use of the U.S. government, whereas the primary purpose of assistance instruments is to further a public purpose. Most contracts and associated obligations reported to us by the agencies were awarded during fiscal year 2009 and the first half of fiscal year 2010, with the agencies generally using competitive procedures to award their contracts. State and USAID relied heavily on assistance instruments to achieve their missions in Iraq and Afghanistan and used different types of assistance instruments depending on the purpose for the funding. 
Additionally, State and USAID officials indicated that, consistent with their policies, they used competitive procedures whenever practical in awarding assistance instruments. The agencies were unable to provide information on subcontracts and subgrants, on which we were required to report. See appendix IV for detailed information on each agency's Iraq and Afghanistan contracts, assistance instruments, and associated obligations during our review period. DOD accounted for the vast majority of all contracts and obligations made by the three agencies during our review period. Of the reported $35.7 billion obligated by the three agencies on contracts with performance in Iraq and Afghanistan, 88 percent of obligations were for DOD contracts, as shown in figure 2. Task orders accounted for the majority of contract obligations for DOD, State, and USAID. For example, DOD had over 98,000 active task orders with obligations totaling $24.7 billion—of which almost $6.3 billion was for one task order that provides food, housing, and other services for U.S. military personnel. State reported that 68 percent of its contracts were purchase orders, which accounted for only 1 percent of its total obligations. In contrast, task orders accounted for over 76 percent of State's total contract obligations but only 17 percent of its contracts. While USAID task orders accounted for only 8 percent of its total number of contracts, obligations on these task orders amounted to 51 percent of the agency's total contract obligations. Approximately half of DOD's and State's contracts and obligations were for performance in Iraq during our 18-month review period. In contrast, almost 85 percent of USAID's contract obligations were for contracts with performance in Afghanistan. Some contracts also included work in both countries. 
For example, DOD provided us with data on seven active task orders under a construction contract with total obligations of approximately $152 million and indicated that there was performance in both Iraq and Afghanistan. However, in such cases, it was not possible based on the data reported to us to isolate which portion of the total obligations was specific to Iraq or Afghanistan. As a result, we counted contracts with performance in both Iraq and Afghanistan, and their associated obligations, as well as contracts for which the agency indicated performance in Iraq or Afghanistan without specifying the country, as "other." Further, we counted contracts with performance in multiple countries, and their associated obligations, with the Iraq contracts if the agency identified the place of performance as including Iraq but not Afghanistan. Similarly, we counted contracts and their associated obligations with the Afghanistan contracts if the place of performance included Afghanistan but not Iraq. Of the over 133,000 contracts, including task and delivery orders, active during our review period, 98 percent were new contracts and orders awarded by the three agencies during fiscal year 2009 and the first half of fiscal year 2010. Similarly, 83 percent of the total funds obligated were on contracts awarded during this same period. There were some variations between agencies, as shown in figure 3. For example, for both State and USAID, about 84 percent of their obligations were on contracts awarded prior to fiscal year 2009, whereas the vast majority of obligations for DOD were on contracts awarded during our review period. The three agencies reported that they generally used competitive procedures when awarding their contracts. Out of a total of 32,876 contracts, excluding task and delivery orders, awarded in the period of our review, 92 percent were reported as awarded using competitive procedures. 
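As a hypothetical illustration only (the function, field names, and country labels are assumptions for this sketch, not GAO's actual data schema), the place-of-performance counting rule described above might be expressed as:

```python
def classify_place_of_performance(countries):
    """Categorize a contract, and its obligations, by the reported
    place(s) of performance, following the counting rule described
    in the text. `countries` is the set of countries the agency
    reported for the contract."""
    in_iraq = "Iraq" in countries
    in_afghanistan = "Afghanistan" in countries
    if in_iraq and not in_afghanistan:
        return "Iraq"          # may include other countries besides Iraq
    if in_afghanistan and not in_iraq:
        return "Afghanistan"   # may include other countries besides Afghanistan
    # Performance in both countries, or Iraq/Afghanistan indicated
    # without specifying which: obligations cannot be isolated.
    return "Other"
```

Under this rule, for example, a contract reported with performance in Iraq and Kuwait would be counted with the Iraq contracts, while one with performance in both Iraq and Afghanistan would fall into the "other" category.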
These competitively awarded contracts also accounted for about 92 percent of the obligations made on contracts awarded during our review period, as depicted in figure 4. Generally, contracts should be awarded on the basis of full and open competition. The agencies reported that most of their new contracts were awarded using full and open competition, but in some instances the agencies reported a contract as being competed without indicating whether full and open or other than full and open competition was used. For about 5 percent of the contracts awarded during our review period, the agencies did not report competition information. Most of the 801 contracts reported to us by the three agencies as not competed had relatively small obligations during our review period. Approximately 78 percent of these contracts had obligations of less than $25,000. In contrast, only 13 of the 801 contracts had over $1 million in obligations, accounting for 63 percent of obligations for the noncompeted contracts. Competition requirements generally do not apply to the issuance of task orders. However, where there are multiple awardees under the underlying contract, the FAR requires the contracting officer in most instances to provide each awardee a fair opportunity to be considered for each order exceeding $3,000. The agencies reported that 99 percent of the task and delivery orders issued during our review period were competed—either the underlying contract was awarded competitively or multiple awardees were given a fair opportunity to be considered for each order. State and USAID reported obligations of $1.8 billion on 668 grants and cooperative agreements with performance in Iraq and Afghanistan during fiscal year 2009 and the first half of fiscal year 2010. Conversely, DOD reported that it did not have any grants or cooperative agreements with obligations during our review period. 
Of the total number of active State and USAID assistance instruments in the two countries, 88 percent were grants. However, grants accounted for only 42 percent of the total assistance instrument obligations. Cooperative agreements, although smaller in number, accounted for the majority of the total amounts obligated on assistance instruments during our review period. According to State and USAID policy, the type of assistance instrument used is determined based on a variety of factors. Among the factors to be considered is the level of involvement the agency anticipates will be needed to effectively administer the agreement. State and USAID generally relied on different types of assistance instruments during our review period, depending on the purpose for the funding. For State, 84 percent of assistance obligations were for grants, whereas for USAID, 63 percent of assistance obligations were for cooperative agreements, as shown in figure 5. The principal purpose of State's grants in Iraq and Afghanistan varied by bureau and covered a wide range of activities, such as teaching computer skills to women and adolescents, covering the travel costs for subject matter experts to attend conferences, and funding explosive ordnance and mine clearance efforts. In contrast, USAID generally used cooperative agreements to implement development programs in sectors such as banking, education, health, and road construction in the two countries. Each agency has implemented programs designed to provide grants to local national organizations and individuals to develop the Iraqi and Afghan economies. During our review period, State reported that its local grants program provided $15.3 million in funding to over 280 Iraqi grant recipients, with 84 percent of the awards being $25,000 or less. USAID has similar programs, but as we recently reported, in some instances the agency also relied on contractors to award and administer such grants. 
In these instances, the contract data we received contained the cumulative value of the obligations made under both the base contracts and the grants being managed under those contracts. State and USAID policies require the use of competitive procedures when awarding assistance instruments unless an authorized exception to the use of competition applies. State and USAID officials informed us that they used competitive procedures for assistance awards in Iraq and Afghanistan whenever practical. Based on our review of 52 randomly sampled State assistance instruments active during fiscal year 2009, we found that 79 percent were awarded competitively. Similarly, in our review of 36 randomly sampled USAID assistance agreements in Iraq and Afghanistan that were active in fiscal year 2009, we found that 50 percent were competed. The NDAA for FY2008, as amended, mandated that we identify the total number and value of all contracts, grants, and cooperative agreements, which include prime contracts, task or delivery orders, as well as subawards at any tier. While we were able to obtain data on the number of and amount obligated on prime contracts and orders as well as grants and cooperative agreements, the agencies were unable to provide comparable data on subcontracts and subgrants. As we have reported in the past, contract and assistance instrument files may contain information on subcontracts and subgrants but none of the agencies systematically tracked this information in a readily retrievable manner. The value of subawards would be included in the total value of the prime contract or assistance instrument, but the agencies could not readily distinguish the amount that went to the prime contractor, grantee, or cooperative agreement recipients from the amount that went to subcontracts or subgrants for all contracts and assistance instruments. Over the past 2 years, DOD, State, and USAID have made some progress in implementing SPOT. 
While that progress has been hindered by practical and technical limitations, a continued lack of interagency agreement on how to address issues, particularly those related to tracking local nationals, has been an impediment to moving forward. Tracking Iraqi and Afghan nationals who work under contracts and assistance instruments presents unique challenges, not only in terms of obtaining aggregate numbers, but especially in terms of obtaining the detailed information currently required by SPOT. The still unresolved issue of how local nationals will be tracked reliably in SPOT reflects a lack of consensus among, and even within, the agencies about the value and use of such data beyond fulfilling a statutory requirement. With SPOT not yet fully implemented, the agencies responded to our requests for required information using other data collection methods that have their own shortcomings, and in some cases, data were not provided. Last year, we recommended that the agencies develop a joint plan with associated time frames to address SPOT's limitations, but agency officials believed that a plan was not needed and that their ongoing coordination efforts were sufficient. However, our work since then demonstrates that their ongoing efforts alone were not sufficient to ensure that statutory requirements are met. Over the past year, SPOT's implementation has continued to be undermined by a lack of agreement among the agencies on how to proceed and how best to meet their respective data needs to fulfill statutory requirements and improve oversight and management of contracts and assistance instruments. 
Until the agencies individually assess their own data needs given the relative challenges and benefits of tracking detailed information on contracts, assistance instruments, and associated personnel and collectively agree on how to best address those needs while meeting statutory requirements, as we have previously recommended, they are not in a position to determine how best to move forward. By working with potential users of the data to better understand their information needs, each agency can help ensure the information tracked in SPOT is sufficient to meet statutory requirements as well as help facilitate agency oversight of contracts, grants, and cooperative agreements in Iraq and Afghanistan. Once the agencies have agreed on how to proceed, having a plan with defined roles and responsibilities and associated time frames can help hold the agencies accountable and ensure timely implementation. Otherwise, implementation of SPOT will continue to languish, with the agencies not collecting reliable information required by Congress and risking collection of other information they will not use. Therefore, we believe the recommendation in our 2009 report still applies, and we are not making any new recommendations. We requested comments on a draft of this report from DOD, State, and USAID. DOD and State informed us they had no comments on the draft’s findings or concluding observations. In its written comments, USAID described the extent to which it intends to use SPOT in Iraq and Afghanistan in a manner that would satisfy statutory requirements while meeting the agency’s needs (see app. V for USAID’s written comments). Additionally, after receiving the draft report USAID provided us with revised data on contractor and assistance personnel working in Afghanistan during the first half of fiscal year 2010. After reviewing and analyzing these data, we incorporated the results of our analysis into the final report as appropriate. 
We also provided a draft of this report to Labor for its review, but the department did not have any comments. We are sending copies of this report to the Secretary of Defense, the Secretary of State, the Administrator of the U.S. Agency for International Development, the Secretary of Labor, and interested congressional committees. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Section 863 of the National Defense Authorization Act for Fiscal Year 2008, as amended, directs GAO to review and report on matters relating to Department of Defense (DOD), Department of State (State), and U.S. Agency for International Development (USAID) contracts in Iraq and Afghanistan. In response to this mandate, we are assessing the status of the three agencies’ efforts to implement the Synchronized Predeployment and Operational Tracker (SPOT) and providing the results of our analysis of agency-reported data for fiscal year 2009 and the first half of fiscal year 2010 on (1) the number of personnel, including those performing security functions, working under DOD, State, and USAID contracts and assistance instruments with performance in Iraq and Afghanistan; (2) the number of such personnel who were killed or wounded; and (3) the number and value of contracts and assistance instruments that were active or awarded during our 18-month review period and the extent of competition for new awards. 
To address our first objective, we reviewed DOD, State, and USAID’s July 2008 and April 2010 Memorandums of Understanding (MOUs) that addressed the National Defense Authorization Act for Fiscal Year 2008 and the National Defense Authorization Act for Fiscal Year 2010 requirements. We compared SPOT’s capabilities to the MOU requirements to determine the extent to which SPOT fulfilled the terms of the MOUs. In addition, we reviewed each agency’s policies and guidance governing the use and implementation of SPOT. We interviewed officials from the three agencies responsible for implementing SPOT to determine the criteria and practices for entering information into SPOT and the system’s current and planned capabilities. We also met with DOD, State, and USAID officials, including those in Iraq and Afghanistan, to obtain insight into the extent to which SPOT was being used by each agency and the obstacles they were encountering. In addition, we met with the contractor responsible for SPOT’s development to discuss the continued development of the system. We reviewed DOD’s internal controls governing SPOT and interviewed SPOT program and contractor officials to assess the processes used to ensure the data elements contained in the system are complete and accurate. We also obtained SPOT data from DOD on behalf of each agency for contractor and assistance personnel with deployments during our period of review and compared them to other sources such as the personnel and contract data we received for our other objectives. Because the data from other sources had limitations, we did not have a means to determine the full extent to which SPOT was incomplete or inaccurate for our review period. However, based on the data we obtained from other sources and our review of the internal controls, we determined that there were significant discrepancies associated with the SPOT data that undermined their reliability. 
To address our second objective, we requested that the three agencies provide us with contractor and assistance personnel data covering fiscal year 2009 and the first half of fiscal year 2010. DOD, State, and USAID provided the number of U.S., third country, and local nationals working under contracts and assistance instruments with performance in Iraq or Afghanistan in fiscal year 2009 and the first half of fiscal year 2010. The data provided were generally obtained by the agencies through surveys and periodic reports submitted by contractors and assistance recipients. These data included individuals reported to be performing security functions. To assess the completeness of the reported personnel data, we compared the data to the list of contracts and assistance instruments we compiled to address our objective on the number and value of contracts and assistance instruments. Furthermore, we interviewed agency officials regarding their methods for collecting data on the number of contractor and assistance personnel in Iraq and Afghanistan. Based on our analyses and discussions with agency officials, we determined that caution should be exercised when using the agency-provided data on contractor and assistance personnel to draw conclusions about either the actual number in Iraq or Afghanistan for any given time period or trends over time. However, we are presenting the reported data along with their limitations, as they establish a rough order of magnitude for the number of contractor and assistance personnel during our period of review. To address our third objective, we analyzed USAID and State data on the number of contractor and assistance personnel killed or wounded in Iraq and Afghanistan during the period of our review. Due to the lack of other available and reliable data sources, we could not independently verify whether USAID's and State's data were accurate. 
Nevertheless, we are providing them as they offer insight into the number of contractor and assistance personnel who were killed or wounded during our period of review. DOD did not collect and could not provide such data. After informing us that they did not have a reliable system for tracking killed or wounded personnel, DOD officials referred us to the Department of Labor's (Labor) Defense Base Act (DBA) case data. We analyzed data from Labor on DBA cases arising from incidents that occurred in Iraq and Afghanistan in fiscal year 2009 and the first half of fiscal year 2010. We obtained similar DBA data from Labor for our previous reports, for which we determined that the data were sufficiently reliable for our purposes, when presented with appropriate caveats. We reported in 2009 that DBA data are not a good proxy for determining the number of contractor and assistance instrument personnel who were killed or wounded in Iraq and Afghanistan, but they do provide insights into the number killed or wounded, common causes of death, and whether claimants died from hostile or nonhostile actions. We reviewed the entire population of fatality case data reported by Labor that occurred during our review period, which totaled 213 cases, to determine information such as the circumstances of the incident resulting in death and the nationality of the individual killed. To address our fourth objective, we obtained data from DOD, State, and USAID on the number of active or awarded contracts, grants, and cooperative agreements with performance in Iraq and Afghanistan during fiscal year 2009 and the first half of fiscal year 2010, the amount of funds obligated on those contracts and assistance instruments during our review period, and the extent to which new contracts, grants, and cooperative agreements were competitively awarded. We also interviewed agency officials to discuss the reported data. 
The agencies provided data from FPDS-NG, agency-specific databases, and manually compiled lists of obligations and deobligations. We determined that the data each agency reported were sufficiently reliable to determine the minimum number of active or awarded contracts and obligation amounts, as well as the extent of competition, based on prior reliability assessments, interviews with agency officials, and verification of some reported data against information in contract files. We took steps to standardize the agency-reported data. This included removing duplicates, as well as contracts and assistance instruments that did not have obligations or deobligations during our review period. DOD provided us with 36 separate data sets, State provided 11, and USAID provided 12. The reported data included multiple contract numbering conventions for each agency. We reformatted each data set and combined them to create a single, uniform list of contracts, orders, assistance instruments, and modifications for each agency. We excluded the base contracts under which orders were issued. This was done, in part, because such contracts do not have obligations associated with them, as the obligations are incurred with the issuance of each order. We also excluded other contract vehicles such as leases, sales contracts, and notices of intent to purchase, as these instruments do not include performance by contractor personnel in Iraq or Afghanistan. In addition, we excluded voluntary contributions, property grants, and participating agency service agreements from our assistance data, as these types of instruments do not include performance by assistance personnel in either country. For all contracts and assistance instruments within our scope, we summed the reported obligations for each contract, order, and assistance instrument for fiscal year 2009 and the first half of fiscal year 2010. 
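A minimal sketch of the consolidation steps described above, assuming each reported action carries an agency name, an instrument identifier, an action date, and a signed amount (obligations positive, deobligations negative); these field names are hypothetical, not the agencies' actual data layouts:

```python
from collections import defaultdict

def consolidate(data_sets):
    """Merge multiple agency-reported data sets into one list, drop
    exact duplicate actions, and sum the reported obligations and
    deobligations per contract, order, or assistance instrument."""
    seen = set()
    totals = defaultdict(float)
    for data_set in data_sets:
        for action in data_set:
            key = (action["agency"], action["instrument_id"],
                   action["action_date"], action["amount"])
            if key in seen:  # same action reported in another data set
                continue
            seen.add(key)
            totals[(action["agency"], action["instrument_id"])] += action["amount"]
    return dict(totals)
```

Because each instrument appears in the output only once, regardless of how many data sets or fiscal periods report actions for it, counting the keys of the result also avoids double counting instruments that were active in both fiscal year 2009 and the first half of fiscal year 2010.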
Some contracts and assistance instruments had obligations in both fiscal year 2009 and the first half of fiscal year 2010, so the number of active contracts, grants, and cooperative agreements for the entire 18-month period was lower than the combined number of contracts, grants, and cooperative agreements that were active in each fiscal year. We reviewed 52 State and 36 USAID assistance files as part of our data reliability assessment of agency-specific databases. From State's Grant Database and Management System, we randomly selected 68 assistance files that were active during fiscal year 2009 and reviewed 52 of these files to ensure the accuracy of basic information—such as the assistance agreement number, the amount obligated, and date of action, among others—that the agency provided in response to our requests for information. From USAID's Electronic Procurement and Information Collection System, we randomly selected 39 assistance files that were active in either Iraq or Afghanistan during fiscal year 2009 and reviewed 36 of these files. Although we found a small number of errors when comparing the data contained in State's and USAID's databases to the assistance agreement documents, we determined that the errors were inconsequential and that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from November 2009 through September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 8 shows the total number of Department of Defense (DOD) contractor personnel in Iraq or Afghanistan, as reported by the U.S. 
Central Command's (CENTCOM) census, for each quarter in fiscal year 2009 and the first half of fiscal year 2010. The data depict an overall decrease in personnel in Iraq and an overall increase in personnel in Afghanistan during our review period. DOD did not report having any personnel working under assistance instruments in either country during our review period. Table 9 provides a breakdown of the total number of DOD contractor personnel by nationality working in the two countries at the end of fiscal year 2009 and the end of the second quarter of fiscal year 2010. The number of Afghan personnel working on DOD contracts in Afghanistan was significantly larger than the number of U.S. or third country national personnel, while in Iraq a smaller percentage of DOD's contractor workforce consisted of Iraqi nationals. Table 10 shows the number of Department of State (State) contractor and assistance instrument personnel, by nationality, as reported to us based on State surveys of contractors and assistance instrument recipients. Table 11 shows the number of U.S. Agency for International Development (USAID) contractor and assistance instrument personnel, by nationality, as reported to us based on USAID surveys and reports from its contractors and assistance instrument recipients. During our review period, the number of local national personnel in both Iraq and Afghanistan working under USAID contracts or assistance instruments was significantly larger than the number of U.S. or third country national personnel. Figure 6 provides information on the number of Defense Base Act (DBA) cases, by nationality, for contractors killed in Iraq or Afghanistan during fiscal year 2009 and the first half of fiscal year 2010. In Iraq, the total number of fatality cases resulting from incidents during our review period was 80. By comparison, the total number of fatality cases during the same period in Afghanistan was 133. 
In Iraq, death cases were fairly evenly distributed among U.S., local, and third country nationals, but in Afghanistan the majority of death cases involved local nationals. Table 12 shows, by occupation, the number of DBA fatality cases for incidents that occurred during our review period. The security contractor occupation category had the highest number of fatalities, with 68 cases for fiscal year 2009 and the first half of fiscal year 2010. Table 13 shows all Department of Defense (DOD) contracts, along with the associated obligations, reported to us as active in Iraq, Afghanistan, or both during fiscal year 2009 and the first half of fiscal year 2010. For last year's review, DOD reported obligating $26,981.6 million on 46,645 contracts for fiscal year 2008. DOD did not report any obligations for assistance instruments with performance in either country during fiscal year 2009 and the first half of fiscal year 2010. Table 14 provides information on the number of contracts awarded by DOD and associated obligations made during our review period. The majority of DOD's active contracts were awarded during our review period, while 92 percent of DOD's obligations were made on the new contract awards. Table 15 shows competition information for the DOD contracts (excluding task and delivery orders) that were awarded during our review period. DOD reported that 29,440 contracts (93 percent) were competed, including 26,544 contracts that were awarded using full and open competition. For 1,528 contracts, DOD either provided no competition information or provided insufficient information for us to determine whether the contract was competed. As shown in table 16, most of the DOD contracts reported as awarded without competition had relatively small obligations during our review period. 
Table 17 shows all Department of State (State) contracts, along with the associated obligations, reported to us as active in Iraq, Afghanistan, or both during fiscal year 2009 and the first half of fiscal year 2010. For last year's review, State reported obligating $1,475.7 million on 846 contracts for fiscal year 2008. Table 18 provides information on the number of contracts awarded and associated obligations made during our review period. The majority of State's active contracts were awarded during our review period, but only 16 percent of State's obligations were made on the new contract awards. Table 19 shows competition information for the State contracts (excluding task and delivery orders) that were awarded during our review period. State reported that 76 percent of its contracts were competed, including 489 (40 percent) that were awarded using full and open competition. For 72 contracts, State either provided no competition information or provided insufficient information for us to determine whether the contract was competed. As shown in table 20, most of the State contracts reported as awarded without competition had relatively small obligations during our review period. Table 21 shows all State assistance instruments, along with the associated obligations, reported to us as active in Iraq, Afghanistan, or both during fiscal year 2009 and the first half of fiscal year 2010. Table 22 provides information on the number of assistance instruments awarded and associated obligations made during our review period. Nearly all of State's active assistance instruments were awarded during our review period. Table 23 shows State's assistance instruments active in Iraq and Afghanistan and associated obligations by type—grants, including those made using Quick Response Funds, and cooperative agreements. During our review period, grants accounted for 97 percent of State's active assistance instruments and 84 percent of assistance obligations. Table 24 shows all U.S. 
Agency for International Development (USAID) contracts, along with the associated obligations, reported to us as active in Iraq or Afghanistan during fiscal year 2009 and the first half of fiscal year 2010. For last year’s review, USAID reported obligating $1,656.7 million on 277 contracts for fiscal year 2008. Table 25 provides information on the number of contracts awarded and associated obligations made during our review period. Fifty-two percent of USAID’s active contracts were awarded prior to our review period, and these contracts accounted for nearly 84 percent of USAID’s obligations. Table 26 shows competition information for the USAID contracts (excluding task and delivery orders) that were awarded during our review period. USAID reported to us that 107 contracts (48 percent) were competed, including 98 contracts that were awarded using full and open competition. For 93 contracts, USAID either provided no competition information or provided insufficient information for us to determine whether the contract was competed. As shown in table 27, there were only 21 contracts that USAID reported as awarded without competition, 4 of which had obligations greater than $1 million during our review period. Table 28 shows all USAID assistance instruments, along with the associated obligations, reported to us as active in Iraq, Afghanistan, or both during fiscal year 2009 and the first half of fiscal year 2010. During the first half of fiscal year 2010, USAID deobligated funds from one cooperative agreement with performance in Iraq, which resulted in its total assistance obligations showing negative $15.8 million for that period. Table 29 provides information on the number of assistance instruments awarded and associated obligations made during our review period. The majority of USAID’s active assistance instruments were awarded before our review period, and 84 percent of USAID’s obligations were made on the existing assistance awards. 
Table 30 shows USAID’s assistance instruments active in Iraq and Afghanistan and associated obligations by type—grants and cooperative agreements. During our review period, cooperative agreements accounted for 76 percent of USAID’s active assistance instruments and 63 percent of assistance obligations. John P. Hutton (202) 512-4841 or huttonj@gao.gov. In addition to the contact above, Johana R. Ayers, Assistant Director; Noah B. Bleicher; John C. Bumgarner; Burns D. Chamberlain; Morgan Delaney-Ramaker; Timothy J. DiNapoli; Justin Fisher; Cynthia Grant; David Greyer; Justin M. Jaynes; Christopher Kunitz; Jean McSween; Heather B. Miller; Jamilah Moon; Roxanna T. Sun; and Jeff Tessin made key contributions to this report.
The Departments of Defense (DOD) and State and the U.S. Agency for International Development (USAID) have relied extensively on contracts, grants, and cooperative agreements for a wide range of services in Afghanistan and Iraq. However, as GAO previously reported, the agencies have faced challenges in obtaining sufficient information to manage these contracts and assistance instruments. As part of our third review under the National Defense Authorization Act for Fiscal Year (FY) 2008, as amended, GAO assessed the implementation of the Synchronized Predeployment and Operational Tracker (SPOT) and data reported by the three agencies for Afghanistan and Iraq for FY 2009 and the first half of FY 2010 on the (1) number of contractor and assistance personnel, including those providing security; (2) number of personnel killed or wounded; and (3) number and value of contracts and assistance instruments and extent of competition for new awards. GAO compared agency data to other available sources to assess reliability. While the three agencies designated SPOT as their system for tracking statutorily required information in July 2008, SPOT still cannot reliably track information on contracts, assistance instruments, and associated personnel in Iraq or Afghanistan. As a result, the agencies relied on sources of data other than SPOT to respond to our requests for information. The agencies' implementation of SPOT has been affected by some practical and technical issues, but their efforts also were undermined by a lack of agreement on how to proceed, particularly on how to track local nationals working under contracts or assistance instruments. The lack of agreement was due in part to agencies not having assessed their respective information needs and how SPOT can be designed to address those needs and statutory requirements. In 2009, GAO reported on many of these issues and recommended that the agencies jointly develop a plan to improve SPOT's implementation. 
The three agencies reported to GAO that as of March 2010 there were 262,681 contractor and assistance personnel working in Iraq and Afghanistan, 18 percent of whom performed security functions. Due to limitations with agency-reported data, caution should be used in identifying trends or drawing conclusions about the number of personnel in either country. Data limitations are attributable to agency difficulty in determining the number of local nationals, low response rates to agency requests for data, and limited ability to verify the accuracy of reported data. For example, a State office noted that none of its Afghan grant recipients provided requested personnel data. While agency officials acknowledged not all personnel were being counted, they still considered the reported data to be more accurate than SPOT data. Only State and USAID tracked information on the number of contractor and assistance personnel killed or wounded in Iraq and Afghanistan during the review period. State reported 9 contractor and assistance personnel were killed and 68 wounded, while USAID reported 116 killed and 121 wounded. Both agencies noted that some casualties resulted from nonhostile actions. DOD still lacked a system to track similar information and referred GAO to Department of Labor data on cases filed under the Defense Base Act for killed or injured contractors. As GAO previously reported, Labor's data provide insights but are not a good proxy for the number of contractor casualties. DOD, State, and USAID obligated $37.5 billion on 133,951 contracts and assistance instruments with performance in Iraq and Afghanistan during FY 2009 and the first half of FY 2010. DOD had the vast majority of contract obligations. Most of the contracts were awarded during the review period and used competitive procedures. State and USAID relied heavily on grants and cooperative agreements and reported that most were competitively awarded. 
While DOD and State did not comment on the draft report, USAID commented on the challenges of implementing SPOT and provided revised personnel data that GAO reviewed and included in the report. In response to GAO's 2009 report, DOD, State, and USAID did not agree with the recommendation to develop a plan for implementing SPOT because they felt ongoing coordination efforts were sufficient. GAO continues to believe a plan is needed to correct SPOT's shortcomings and is not making any new recommendations.
During the 20th century, tens of thousands of wild horses were either killed or captured for slaughter on America’s western ranges. Documented abuses suffered by wild horses led concerned individuals and national humane organizations to push for federal protections in the 1950s. In response, Congress passed legislation in 1959 prohibiting the use of aircraft or motor vehicles to capture or kill wild horses or burros on public lands and prohibiting the pollution of watering holes on public lands to trap, kill, wound, or maim wild horses or burros. Despite the 1959 act, wild horse exploitation continued, and some questioned whether the population would eventually be eradicated. To further protect wild horses and burros, Congress passed additional legislation in 1971 requiring the protection and management of wild free-roaming horses and burros on public lands. The 1971 act was amended in 1976, 1978, 1996, and 2004 (see table 1). The 2004 amendments directed BLM to sell, without limitation, excess animals more than 10 years of age or that have been offered unsuccessfully for adoption at least three times. The passage of the 1971 act changed the way BLM managed wild horses and burros on public lands. Rather than considering them feral species that damaged the rangeland, the agencies had to change their mind-set to protect and manage the animals as an integral part of the ecosystem. One of the first tasks in managing the animals was to determine where they lived and how many there were. Under the act, BLM is authorized to manage wild horses and burros only in areas where they were found in 1971. The areas where wild horses and burros were found as of the date of the act, largely on public lands managed by BLM and the Forest Service, are called herd areas, and they comprise about 53.5 million acres. 
Once the exact land status and ownership of the herd areas were verified, BLM determined that most herd areas were on BLM-administered public lands, but some also included private and state-owned in-holdings. The 1971 act states that the Secretaries of the Interior and Agriculture shall arrange for the removal of wild horses and burros that stray onto private land upon notification by the landowner. Next, through its land management planning process, BLM designated HMAs within these herd areas. In making HMA designations, BLM determined whether the areas where wild horses and burros were found contained adequate forage and water to sustain the herds. BLM also designated some HMAs in such a way as to avoid conflicts with private landowners. Today, BLM is responsible for managing 199 HMAs covering 34.3 million acres across 10 western states (see fig. 1). BLM is currently compiling a history of how its field offices made the determination to manage wild horses and burros on the current 34.3 million acres, compared to the 53.5 million acres where the animals were originally found in 1971. According to BLM officials, they expect the review to be completed by March 2009. The number of HMAs and their acreage have changed over time for many reasons, including the redesignation of BLM land as National Park land and declines in forage or water that make an area unable to sustain the animals. About half the acreage managed under BLM’s Wild Horse and Burro Program is located in Nevada (see table 2). While most of BLM’s management activities for wild horses and burros occur within HMAs, BLM is responsible for removing populations of animals that stray onto public lands outside of HMAs, as well as those that stray onto private property. Wild horses and burros are to be managed as self-sustaining populations of healthy animals in balance with other multiple uses and the productive capacity of their habitat. 
Because wild horses and burros reproduce at an estimated rate of 20 percent annually and no natural predators remain, except in a very few isolated HMAs, BLM must actively manage the population of the herds. AML has been defined as the “optimum number of wild horses which results in a thriving natural ecological balance and avoids deterioration of the range.” AML determinations can be made in a variety of land planning or decision documents, including, but not limited to, resource management plans, Herd Management Area Plans, and multiple use decision documents. The actual number set through an AML determination is predicated, in part, on (1) the number of acres set aside for the management of wild horses and burros within a specific resource planning area and (2) the proportion of the available forage allocated to wild horse and burro consumption relative to other users, such as livestock and wildlife. After these two key multiple use decisions have been made, BLM field offices can then set the actual AML numbers. Available forage is based on range conditions and other data. BLM’s Wild Horse and Burro National Program Office encourages field offices to establish AML as a range with an upper and lower limit. The upper limit of the range equals the maximum number of animals that can be sustained to result in a thriving natural ecological balance and avoid deterioration of the range. The lower limit is generally the number to which a population must be gathered to help ensure the population will not exceed the upper limit of AML within the established gather cycle. For example, if the established gather cycle were 4 years, the lower limit would be the number to which a population must be gathered to help ensure the population will not exceed the upper limit of AML within a 4-year period. 
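The relationship between the two limits can be sketched numerically. This is an illustrative calculation only, not BLM’s actual method; the 200-animal upper limit and the function name are hypothetical, and the sketch simply applies the estimated 20 percent annual reproduction rate cited above over a gather cycle:

```python
# Illustrative sketch only -- not BLM's actual method. Given an estimated
# annual reproduction rate and a gather cycle, find the largest post-gather
# population (the lower limit of AML) whose compound growth will not exceed
# the upper limit of AML before the next scheduled gather.

def lower_limit_of_aml(upper_limit: int, annual_growth: float, cycle_years: int) -> int:
    """Largest post-gather population that stays at or below upper_limit
    for cycle_years, assuming compound annual growth."""
    return int(upper_limit / (1 + annual_growth) ** cycle_years)

# Hypothetical HMA with an upper limit of 200 animals and a 4-year gather cycle:
limit = lower_limit_of_aml(200, 0.20, 4)
print(limit)              # 96
print(limit * 1.20 ** 4)  # about 199 -- still within the upper limit of 200
```

The sketch shows why a short gather cycle matters: at 20 percent annual growth, a herd roughly doubles in 4 years, so a delayed gather can leave a population well above the upper limit of AML.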
BLM strives to maintain a national herd population level that is at the midpoint of AML, where recently gathered HMAs would be at the lower limit of AML, while those awaiting gathers would be closer to the upper limit. As of February 2008, the upper limit of AML (the cumulative total across the 199 HMAs) was approximately 27,219, and the midpoint was about 22,588. Because AMLs are intended to reflect the population of animals that can be sustainably maintained in an HMA, they are subject to change over time. Changes in AML happen for several reasons, including when acreage is added to or subtracted from an HMA and when changes in rangeland conditions result in improved or reduced forage and water availability sufficient to sustain a certain population level. In the arid ranges where most wild horses and burros are managed, conditions generally do not improve rapidly and have been further degraded by drought conditions that have lasted for over a decade (see fig. 2). The effects of climate change are likely to exacerbate the poor conditions that many HMAs are already experiencing. Determining which type of animal is responsible for rangeland damage is important to properly managing an HMA and to determining the number of animals to permit on the range. BLM can control the number of livestock and wild horses and burros permitted on the range, but BLM is not responsible for managing wildlife numbers on the range. Because BLM is not the lead agency responsible for wildlife on public lands, it is to coordinate with state wildlife officials about the forage allocation for wildlife populations. An increase in allocation to any species may cause increased competition for the remaining users of the range, especially under severe conditions. For example, in severe drought conditions, grazing and browsing are concentrated in limited areas near water sources. 
This intense competition causes heavy use and perhaps depletion of the resources the animals depend upon. Throughout the life of the program, the population of wild horses and burros on the range has generally far exceeded AML. BLM has used the removal of animals from the range as a primary tool for managing herd sizes. To gather animals for removal, BLM uses private contractors to herd the animals in an HMA into temporary on-site corrals. The animals are primarily gathered using helicopters. In some cases, when gathering smaller numbers of wild horses and burros, BLM officials or contractors will use other trapping techniques, such as bait trapping, to capture the animals. Once collected into the temporary corrals, BLM officials use a selective removal process to determine which of the gathered animals to remove from the HMA. Animals that are not selected are returned to the wild. When animals are removed from the range, they are taken to short-term holding facilities to receive vaccinations and other treatment prior to being adopted, sold, or sent to long-term holding. Figure 3 depicts BLM’s management of wild horses and burros on and off of the range. Animals that are removed from the range and cannot be adopted or sold are placed in long-term holding facilities to live out the rest of their lives. Most of the facilities are located on Midwest grasslands in Kansas and Oklahoma. As of June 2008, the number of horses in long-term holding was 22,101. For fiscal year 2001, BLM requested a budget increase for the program as part of a major initiative to reach the upper limit of AML by 2005. Subsequently, program funding allocated from congressional appropriations—what the agency refers to as “enacted funding”— increased from $19.8 million in fiscal year 2000 to $34.4 million in fiscal year 2001, an increase of $14.6 million. In 2002, enacted funding for the program was $29.6 million, about $10 million over the 2000 congressional funding level. 
After reassessing the initiative in 2004, BLM estimated that it needed an additional $10.5 million on top of its enacted funding level of $29.1 million in fiscal year 2004 to achieve its revised goal of reaching the midpoint of AML by 2006. In fiscal year 2005, enacted funding increased by about $10 million, to a total of $39 million (see fig. 4). The President’s 2008 budget requested $32 million for the program, about $4 million less than enacted funding for fiscal year 2007. BLM has made significant progress in setting and meeting AML for the HMAs. As of February 2008, BLM has set AML for 197 out of 199 HMAs. Most of the field offices we surveyed considered similar factors in determining AML, such as rangeland conditions and climate data; however, BLM has not provided specific formal guidance to the field offices on how to set AML. BLM has been working since 2006 on revising the program’s handbook to provide such guidance. With increased retirements, field offices reported losing the experienced personnel most familiar with the informal practice of determining AML. Until BLM finalizes the handbook or issues other guidance, it cannot ensure that the factors considered in future revisions of AML determinations are consistent across HMAs. At the national level, BLM reported that it was closer to meeting AML in 2007 than in any other year since AMLs were documented in 1984. Specifically, as of February 2007, BLM estimated the population at 28,563, which was about 1,000 animals over AML. To reach this level, BLM has reduced the nationwide population in the wild by about 40 percent since 2000. However, the population estimates are higher for 2008, and BLM has not met its goal of meeting AML for each HMA. The fact that not all HMAs have met AML remains a concern because of the damage excessive populations can cause on the range. 
Twenty of 26 field officials we surveyed told us that conducting gathers to remove excess animals is among their top challenges to maintaining AML because delayed gathers can cause animal populations to quickly exceed AML. In our 1990 report we concluded that BLM’s decisions on the number of wild horses and burros to remove were made without adequate information about range carrying capacity or the impact of the animals on range conditions. In August 2005, BLM updated its formal policy on gathers and removals and specified the key factors that should be considered in the decision making process. The extent to which BLM has actually met AML depends on the accuracy of BLM’s wild horse and burro population counts. Nineteen of the 26 field officials we surveyed used a method that consistently undercounts animals and does not provide a statistical range of population estimates. Alternative counting methods may be more expensive, but undercounting a population can lead to overpopulation and costlier gathers in future years. BLM has made significant progress in setting AML using rangeland monitoring data for the HMAs. As of February 2008, BLM has set AML for 197 out of 199 HMAs, compared to 2002 when about two-thirds of HMAs had set AML. Prior to 1984, many of the initial AMLs were not based on rangeland data but on factors such as initial herd population counts or administrative convenience. For example, the original AML established for Beaty’s Butte HMA in Oregon in 1977 was based on the number of horses found in that area on December 15, 1971. In Wyoming, AMLs for about one-third of the HMAs were based on agreements with local grazing interests because they owned private lands that were interspersed with BLM lands where wild horses were found in 1971. Only 10 out of the 26 field offices we surveyed identified the use of rangeland data to determine their initial AMLs. But since 1984, in accordance with the Dahl v. 
Clark decision, BLM officials told us that field managers have generally based AML decisions on monitoring data and an in-depth analysis. Most of the current AMLs for the 199 HMAs were set after 1984 (see table 3). Although some current AMLs were set many years ago, they are generally reviewed every 4 years or so as part of the recurring process to gather and remove excess animals. If, during this process and through monitoring, it is determined that an AML is no longer appropriate, field offices will consider changing it. For example, table 17 in appendix III shows how the current AMLs for the 26 HMAs in our sample have been changed, as applicable, since they were initially set. Most of the field offices we surveyed considered similar factors in determining AML. According to BLM National Program Office officials, field office staff should consider at least four factors in making AML determinations: climate data, which measure the amount of precipitation within a specific area (temperature and wind data may also be collected to evaluate the effect of climate on vegetation); utilization data, which measure the percent of forage consumed by livestock, wild horses and burros, wildlife, and insects during a specified period; actual use data, which record the number of grazing animals that used an area within a certain amount of time; and trend data, which measure the direction of change in ecological status or resource rating observed over time. Our survey results indicate that these four key AML determination factors were considered by some, but not all, of the BLM field offices responsible for setting AML for our sample of 26 HMAs (see table 4). Almost all of the field offices considered trend (25) and utilization (23) data, but only 19 considered climate and actual use data for livestock, while 14 considered actual use data for wildlife (see table 5). 
In addition to the four factors mentioned by BLM National Program Office officials, field offices considered other factors to help make their AML determinations, including census inventory, water resource availability, herd health, and unique local conditions. For instance, in Arizona, one field office reduced the AML for burros on an HMA because it found that burros were foraging on the same willows critical to the survival of the endangered Southwestern Willow Flycatcher. In determining AML, field office staff must also consider rangeland conditions for wild horses and burros in conjunction with other users of the range, including livestock and wildlife. Determining which species is responsible for rangeland damage is important to properly managing the HMA and to determining the number of wild horses and burros to permit on the range. For example, if field staff determine that cattle are primarily responsible for damaging an area, they may pursue several management options, including fencing out cattle, reducing the number of cattle, or changing the time of year cattle are allowed to graze in a particular area. BLM lacks similar management techniques to control wild horse and burro use because of the animals’ free-roaming nature. BLM’s direct management actions are limited to livestock and wild horses and burros, since individual states are responsible for managing wildlife. We recognized the difficulty of distinguishing the impacts that wild horses and burros have on the range from those of other users in our 1990 report. Some advocacy groups have criticized BLM because they believe that BLM unfairly faults wild horses and burros for damage to the range to justify their removal and reductions in AML. Several BLM officials told us that ascribing range impacts can be difficult, but 20 of the 26 field offices that we surveyed said they had a procedure in place to do so. 
When the damage is caused by all the user groups or cannot be attributed to a specific user group, BLM will generally make across-the-board reductions in the number of animals allowed on the range based on the historic proportion of each user group on the range. For example, if wild horses and burros historically accounted for 10 percent of the forage consumption on the range, then wild horses and burros would bear 10 percent of the necessary reductions. BLM has also made steady reductions in cattle grazing on BLM land as drought conditions in much of the West have worsened, reducing the availability of forage and water. For example, in Nevada, the state that manages the greatest number of wild horses, permitted livestock use was reduced from about 2.5 million animal unit months in 1990 to a little over 2 million in 2006. Actual use during this same period, however, decreased from 1.8 million animal unit months to 1.2 million. In addition to the factors considered in making AML determinations, the age of the data, or how current they are, can also be important. What counts as “current” data depends on the ecosystem and may vary across HMAs. BLM national program officials explained that data used to support AML decisions should be collected frequently. In general, they told us climate, utilization, and actual use data should be collected annually, and trend data should be analyzed and reviewed within 4 years of setting AML. However, of the respondents who provided the age of the data used, fewer than half collected their actual use data for livestock and wildlife within 1 year of their AML determination; half collected their utilization data within 1 year of the determination; and more than half collected their climate data within 1 year of the determination. Fifteen of the 19 respondents who provided the age of the data used considered trend data within 4 years of the determination (see table 6). 
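The across-the-board approach described above, in which each user group bears reductions in proportion to its historic share of forage consumption, can be sketched as a simple proportional allocation. The shares and the 1,000-animal-unit-month total reduction below are hypothetical numbers chosen for illustration, not figures from BLM:

```python
# Hypothetical illustration of proportional, across-the-board reductions:
# each user group's share of a total forage reduction equals its historic
# share of forage consumption on the range.

def proportional_reductions(historic_shares: dict, total_reduction: int) -> dict:
    """Allocate total_reduction (in animal unit months, AUMs) across user
    groups in proportion to their historic forage shares."""
    return {group: round(share * total_reduction)
            for group, share in historic_shares.items()}

# If wild horses and burros historically account for 10 percent of forage
# consumption, they bear 10 percent of a 1,000-AUM reduction:
shares = {"livestock": 0.70, "wildlife": 0.20, "wild_horses_and_burros": 0.10}
print(proportional_reductions(shares, 1000))
# {'livestock': 700, 'wildlife': 200, 'wild_horses_and_burros': 100}
```

Because the shares sum to 1, the allocated reductions sum back to the total, which is the property that makes the approach feel equitable across user groups.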
Although field offices use many factors to make their AML determinations, BLM has no guidance or policy on the specific factors they must consider in determining AML. This is in contrast to the BLM policy that exists for the similar type of analysis conducted for removals. According to BLM’s 2005 gather policy, the determination to remove animals must be supported by the following factors: climatic data, utilization data, actual use data, trend data, and current census data. While 22 of 26 BLM field offices responded that the data used to make their AML determinations were moderately to very sufficient, several BLM officials told us that with increased retirements, field offices are losing the experienced personnel most familiar with the informal practice of determining AML. Therefore, without clear guidance, BLM cannot ensure that the factors considered in future revisions of AML determinations will be consistent across HMAs. To make the informal AML determination process official and to help ensure consistency among BLM field offices, BLM officials have been drafting a new handbook for the program since 2006 that specifies the factors field offices should use in making AML determinations. Because of higher priorities and limited resources, the handbook is still in draft form and is undergoing final revision. BLM officials told us they expect the handbook to be completed in fall 2008. Since 2000, BLM has made significant progress toward meeting AML. At the national level, BLM was closer to meeting AML in 2007 than in any other year since 1984 (when AML levels were first reported by BLM), with a population of 28,563, or about 1,000 animals over the upper limit of AML (see fig. 5). Meeting AML has been a challenge for most of the lifetime of the program. 
In 1985, in reporting on the Department of the Interior and related agencies’ appropriations, the Senate Committee on Appropriations recommended more than tripling the program’s funding above the original budgeted amounts to, according to the committee, permit BLM to maintain nearly 14,000 animals in corrals through the end of fiscal year 1986 and to remove 17,000 excess animals during fiscal year 1986. The program’s funding was tripled in fiscal year 1986, and with the increased funding, BLM removed 18,959 excess animals. In fiscal year 2001, BLM began implementing a 4-year strategy to aggressively remove animals from the range to reach the upper limit of AML by 2005. However, just before BLM initiated the strategy—which relied heavily on specific assumptions about the number of animals removed, adopted, and held in short-term and long-term holding—emergency drought and fire conditions called for the removal of wild horses and burros in numbers far greater than anticipated. These additional removals and decreases in adoption targets changed BLM’s assumptions and made it clear the agency would not be able to meet the targets set forth in its plan. In 2004, BLM again revisited targets and management options that would help it achieve and maintain the midpoint of AML by 2006. Over the past several years, the program has moved closer to meeting AML as a result of increases in the number of wild horses and burros removed from the range, but it continues to face challenges in maintaining that level. According to BLM data, the population now exceeds the upper limit of AML by an estimated 5,886 animals. BLM attributes most of the increase in population to more accurate population census counts. While the national statistics appear to indicate that BLM is close to meeting its goal, it is important to note that, under the act, BLM is required to maintain HMAs at a level that is at or below the upper limit of AML. 
To stay below the upper limit of AML, HMAs should be gathered to the lower limit of AML approximately every 3 to 5 years. However, only 7 of the 26 BLM field offices we surveyed said they were typically able to gather to this low level. When animals are not gathered to the low level of AML, a population can quickly rise well above the upper limit of AML. Fewer than half (10) of the field offices surveyed said they were usually able to manage the population of wild horses and burros on their HMAs within the limits of AML. Fifteen field offices said they managed populations that were typically above AML. We are not reporting in detail on the extent to which individual HMAs have met AML because we do not believe that BLM’s data are precise enough to accurately make such a determination. BLM’s estimates of the number of HMAs that are at or below AML may be overstated because, for reporting purposes, BLM considers the HMAs where the population is not more than 10 percent over the upper limit of the AML to be at AML. BLM officials told us that this is done to account for those HMAs that may slightly exceed AML. For example, in 2008, BLM reported that 61 of the 102 HMAs in Nevada were at or below AML. Without the 10 percent adjustment factor, we calculated that 52 HMAs were at or below AML. Because of this adjustment factor and questions about the accuracy of BLM’s animal counting methods, we concluded that the data on whether or not individual HMAs had met AML were not sufficiently reliable to report because an error of plus or minus one or two animals could change the status of an HMA from being under or over AML. Aside from the precise issue of whether or not an HMA is within or over AML, it is clear from the data that some HMAs are significantly over AML. For example, as of February 2008, BLM reported that 87 HMAs were over AML. 
About half of these HMAs were over AML by 50 percent or less, about a quarter were over AML by between 51 and 100 percent, and about another quarter were over AML by more than 100 percent. Populations that exceed AML can harm the health of the range. For example, in 2004, the Calico HMA in Nevada exceeded AML by about 200 percent. The herds were found to concentrate in sensitive areas, affecting the threatened Lahontan cutthroat trout and contributing to the nonattainment of grazing allotment objectives and standards for rangeland health. As of February 2008, the wild horse population in this HMA exceeded the upper limit of AML by 160 percent. The excess population levels and continued drought are expected to continue to negatively affect sensitive riparian areas relied upon by the Lahontan cutthroat trout. The overpopulation of wild horses and burros on the range may negatively affect herd health, rangeland health, and the livestock and wildlife that depend on the range. Overuse of vegetative resources can result in declines in vegetative condition that may take years to recover. See figure 6 for our survey results on the possible negative impacts of populations that exceed the upper limits of AML. In addition to the effects on the range, overpopulation in HMAs also results in costlier gathers because a greater number of animals must be removed to maintain AML in future years. Although there has been an increased effort to meet AML, there have been many challenges in meeting and maintaining that level. Twenty of the 26 field officials we surveyed identified limitations on gathers to remove excess animals as one of their top challenges to meeting or maintaining AML. One limitation identified by these respondents was the limited funding available to conduct gathers. Another was unplanned gathers that alter the gather schedule as resources are directed to HMAs in critical need. 
Reasons for unplanned gathers include escalating problems and emergencies. An HMA with an escalating problem is defined as an area where deteriorating rangeland conditions, such as declining availability of forage or water, will negatively affect animal condition and rangeland health. Emergency situations are unexpected situations that threaten the immediate health of wild horses and burros or their habitat, such as fire, disease, or other catastrophic events. In addition to using gathers and removals to manage the population on the range, BLM may also use fertility treatment to manage the reproductive rates of wild horses. BLM is using this tool on a limited number of HMAs. However, some animal fertility researchers and wild horse advocates believe that this tool should be used more widely. They say that unless the reproductive rate is curtailed, the need to gather a large number of animals from the range will continue. See appendix II for more information about BLM’s use of this treatment. Removals are used as a primary method for managing wild horse and burro populations on the range; however, the data used to support these removal decisions have been criticized. Specifically, our 1990 report concluded that BLM’s decisions on the number of wild horses and burros to remove were made without adequate information about range carrying capacity or the impact of the animals on range conditions. In August 2005, BLM issued an update to its 2002 policy on gathers, requiring that determinations to support gathers and removals be based on a National Environmental Policy Act analysis and a gather plan that consider five key factors—utilization, trend, actual use, climatic data, and current census. Eleven of the 26 field offices we surveyed considered all five key factors in their most recent gather plan (see table 7). 
However, many of these field offices conducted their most recent gathers prior to the issuance of the 2005 policy that specified which factors to consider in the decision-making process. Specifically, 11 field offices conducted their most recent gathers between 1990 and 2005. Additionally, some field offices’ most recent gathers were conducted as a result of an emergency situation. In those cases, a field office may not have had enough time to consider all five criteria because of the rapid response necessary to remove the animals. Regardless of when the most recent gathers were conducted, 25 of the 26 field offices we surveyed rated the data used to support their removals for specific HMAs as moderately to very sufficient. See table 8 for the number of field offices that considered each of the factors we asked about in our survey. In contrast to our previous report, which found that the data used to justify removals were outdated, most respondents who provided the year in which their data were collected indicated that the data were current as of the year of their most recent gather or were less than 4 years old (see table 9). Half of the survey respondents identified impediments to conducting gathers as a major challenge in managing their HMAs to achieve healthy herd populations that are in balance with the range and other multiple uses. Only 7 of the 26 field offices surveyed said that they were typically able to gather to the lower limits of AML. While several BLM officials explained that gathers can be delayed as a result of funding restrictions or emergency gather priorities, only four of the field offices surveyed indicated that their most recent gather was delayed. Accurate animal population counts are critical to BLM’s ability to properly manage wild horse and burro herds and to determine whether AML targets have been met. However, many field offices use a population counting method—the direct-count method—that researchers consider inaccurate. 
This method generally calls for one person to count each animal spotted from an airplane or helicopter. According to researchers, it consistently undercounts animals and does not provide a statistical range of estimates. Nineteen of the 26 field offices we surveyed used the direct-count method for conducting their most recent census. Regardless of which method is used, counting wild horses and burros can be challenging, particularly when the animals are obscured by trees or when the rangeland is covered with snow. Because counting poses such challenges, researchers are investigating alternative counting methods to assist BLM in collecting accurate population data to form statistically valid population estimates. Each method the researchers are evaluating includes some range of statistical error, whereas the direct-count method reports only the raw number of animals spotted on the ground. Researchers believe that the most effective method will likely be a combination of two or more counting techniques. BLM’s population counts of wild horses and burros have long been questioned by managers and advocacy groups alike. By employing alternative methods that account for a range of error, BLM would have a more defensible way of determining population estimates. In the most recent 2008 BLM population estimates, for example, population counts exceeded those in 2007 by approximately 4,500 animals. As a result, on a nationwide level, BLM is once again well over the upper limit of AML, which calls into question earlier population estimates and whether previous years were as close to meeting AML as once thought. BLM is working with the Department of the Interior’s U.S. Geological Survey Fort Collins Science Center and Colorado State University to develop these methods to achieve greater accuracy in population counts. Some BLM offices have begun to employ some of these methods. 
For example, in Arizona, managers use the simultaneous double-count method to improve population counts and avoid underestimating burro populations. Some field offices, however, are reluctant to use alternative counting methods because they are concerned that these methods would require too much additional staff or would be too expensive. Researchers agree that other methods may be slightly more expensive, given the greater number of staff needed. When a population is undercounted, BLM is likely to remove fewer animals than needed to control overpopulation. For example, in 2002, a direct count was used to census the wild horse population on the Jackson Mountain HMA in northern Nevada, an area that has been affected by severe drought. When a gather was conducted in 2003, staff believed they had removed enough wild horses to reach AML. Funds to conduct the scheduled census in 2006 were not available, and BLM was unable to conduct its population count until the summer of 2007. It was at this point that staff realized that their 2002 census was incorrect and that they had underestimated the 2007 population by approximately 640 wild horses. They found that the actual population in 2007 was about five times greater than what they had determined was sustainable. In the winter of 2007, BLM began to monitor water availability more regularly. The BLM field staff member who managed that HMA told us that although the herd’s condition was weakened, the horses did not appear to be in extremely poor condition. Nevertheless, more than 150 of the wild horses removed from this HMA died in a short-term holding facility from disease that overtook the animals in their weakened state. The number of wild horses and burros removed from the range is far greater than the number adopted or sold. Since 2001, about 74,000 animals have been removed from the range, while only about 46,400 have been adopted or sold. 
This has resulted in significant spending increases due to a greater number of animals in short- and long-term holding. Thirty-six percent fewer wild horses and burros were adopted in 2007, compared to average adoption rates in the 1990s—a trend BLM officials attribute to the decrease in adoption demand and increasing hay and fuel costs. Since 2004, when BLM was directed to sell excess wild horses and burros without limitation, BLM has sold about 2,700 animals—far fewer than expected, despite the low average selling price of $15. As of June 2008, BLM was holding 30,088 animals in short- and long-term holding facilities, compared with the estimated 9,807 held in 2001. To accommodate the increase in animals removed from the range and the decline in adoptions and sales, BLM has increased the number of short- and long-term holding facilities. This has resulted in an increase in spending for short- and long-term holding facilities. BLM has historically managed wild horses and burros removed from the range through adoptions to the general public. Adoption has been regarded as the most economical way to provide humane long-term care to animals that have been removed from the range. In the 1990s, the number of animals removed from the range was about equal to the number of animals adopted. The average number of animals adopted each year in the 1990s was about 7,500. Since 2000, the number of animals removed has outpaced the number of animals adopted or sold due to an increase in removals and a steady decline in adoption demand and sales. Since 2001, about 74,000 animals have been removed from the range, compared to about 46,400 adopted or sold. The average number removed annually from 2001 to 2007 was about 10,600, compared to the average adoption rate of about 6,300 annually. According to BLM’s 2004 Report to Congress, at least 7,000 adoptions were needed annually to assist in achieving and maintaining AML. However, only about 4,700 animals were adopted in 2007. 
Although BLM has increased efforts to market adoptions, demand for wild horses continues to decline, even though the price for adopting them has remained at the minimum fee of $125 since 1997. BLM officials attribute the steady decline in wild horse adoptions in recent years to increases in hay and fuel costs associated with horse care, the large number of domesticated horses currently flooding the adoption market, a general urbanization of rural areas, and a shift toward other forms of recreation. For example, according to one official, individuals who once had corrals with two or three horses may now own one horse and four all-terrain vehicles. Figure 7 compares the number of wild horses and burros removed from the range with the number adopted from 1989 through 2007. One alternative for managing unadoptable excess wild horses and burros, as provided for by the 2004 amendment to the 1971 act, is to sell the animals “without limitation.” The act directs BLM to offer for sale excess animals that are more than 10 years old or that have been offered unsuccessfully for adoption at least three times. At the time of the amendment, BLM estimated that approximately 8,400 animals were eligible for sale. To date, BLM has sold only about 2,700 animals—far fewer than originally expected, despite the low average selling price in 2006 of $15 (see table 10). In 2005, the first sale was made to a wild horse protection group in Wyoming that purchased 200 horses that would otherwise likely have ended up in long-term holding under BLM’s care. A few other animals that were sold, however, ultimately ended up in slaughterhouses. To reduce the likelihood that buyers would purchase these animals and then sell them for slaughter, BLM changed its sales process to require buyers to sign a “statement of intent” affirming that they do not intend to sell the animals for slaughter. This limitation, as well as a decrease in demand, has contributed to the small number of sales. 
As of June 2008, BLM was holding a combined 30,088 animals in short-term and long-term holding facilities, compared to 9,807 animals in 2001. To accommodate the increase in animals needing care once removed from the range, the number of short-term and long-term holding facilities has increased. Spending on combined short-term and long-term holding has also increased, from about $7 million in 2000 to about $20.9 million in 2007. From 2001 through 2008, the number of short-term holding facilities increased from 14 to 24, and the number of animals held in these facilities increased from 6,514 to 7,987 by June 2008. These holding facilities provide the animals with vaccinations and other care prior to their being adopted, sold, or sent to long-term holding. The average cost of caring for an animal in short-term holding increased from $3.00 per horse per day in 2001 to $5.08 per horse per day in 2008. From 2000 to 2001, the cost of short-term holding increased from $6.4 million to $11.2 million. From 2001 through 2007, the cost remained relatively stable, but for 2008, costs are anticipated to increase to $16.2 million. According to several BLM officials, the escalating cost of caring for animals in short-term holding is primarily a result of the dramatic increase in hay and fuel prices from 2007 to 2008. For example, hay prices for one short-term holding facility in Nevada increased from about $160 per ton in 2007 to almost $300 per ton in 2008. Decreases in adoptions and sales and a lack of capacity in long-term holding have not only increased the number of animals held in short-term holding but have also increased the time animals are held there. According to one state official, animals in his state spent 45 to 60 days in short-term holding facilities in the late 1990s. Beginning in 2000, this official told us, it was not uncommon to hold animals for more than a year. 
Nationwide, according to BLM, the average length of stay in short-term holding in 2008 has been 210 days. This is far longer than the 90 days BLM projected animals would spend in short-term holding in its 2001 initiative to meet AML. Similarly, the number of long-term holding facilities has increased, as has the cost. The number of facilities increased from 1 in 1988 to 11 as of June 2008, and the number of animals cared for increased from 1,500 in 2000 to 22,101 as of June 2008 (see table 11). These long-term holding facilities have reached their capacity—currently 22,100—despite the increase in the number of facilities. BLM anticipates it will need greater long-term holding capacity and is working to contract for additional facilities. BLM pays private contractors an average of $1.27 per horse per day to maintain the animals for the remainder of their lifespan, unless they are removed from long-term holding for adoption or sale. While this fee has increased by only 7 cents since 2000, the number of animals cared for has also increased, resulting in a significant increase in BLM spending on long-term holding. BLM spent approximately $668,000 in 2000, compared to more than $9.1 million in 2007, to care for wild horses in long-term holding. The long-term holding facilities are primarily located in Oklahoma and Kansas, where forage is typically more abundant than on HMAs in the West. Table 12 lists the 11 long-term holding facilities. For at least two decades, BLM’s primary strategy to manage excess unadoptable wild horses has been to increase long-term holding, despite warnings in our 1990 report that these facilities were likely to be more expensive than envisioned and to be only a temporary solution to the disposal of unadoptable animals. In 1994, the Department of the Interior’s Office of Inspector General also issued a report that strongly discouraged long-term holding as a solution to managing horses removed from the range because of the large costs. 
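The per-horse daily fees cited in this report can be annualized to show why long-term holding is so much cheaper per animal than short-term holding. The following is illustrative arithmetic only, using the rates the report cites ($5.08 per horse per day for short-term holding in 2008 and $1.27 per horse per day for long-term holding); it is not BLM's own cost methodology.

```python
# Illustrative arithmetic: annualize the per-horse daily holding fees
# cited in this report to compare short- and long-term holding costs.
DAYS_PER_YEAR = 365
SHORT_TERM_DAILY = 5.08  # 2008 short-term holding cost, dollars per horse per day
LONG_TERM_DAILY = 1.27   # long-term holding contractor fee, dollars per horse per day

def annual_cost_per_horse(daily_rate: float) -> float:
    """Cost of holding one horse for a full year at the given daily rate."""
    return daily_rate * DAYS_PER_YEAR

print(f"Short-term: ${annual_cost_per_horse(SHORT_TERM_DAILY):,.0f} per horse per year")
print(f"Long-term:  ${annual_cost_per_horse(LONG_TERM_DAILY):,.0f} per horse per year")
# Long-term holding works out to roughly $460 per horse per year,
# consistent with the per-horse figure the report cites for BLM's contracts,
# versus roughly four times that amount for a horse held in short-term holding.
```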
BLM continues to look for more facilities but faces difficulty attracting new contractors that can sustain a large number of animals and that will accept the fee BLM offers rather than pursue potentially more profitable land uses. BLM has implemented multiple controls to help ensure the humane treatment of wild horses and burros, including standard operating procedures and agreements with all three slaughterhouses in the United States before they closed in 2007. A variety of controls are used at various stages in the management of wild horses and burros, including for those animals that are gathered, in short-term holding facilities, in long-term holding facilities, adopted, or sold. BLM’s controls for gathers include standard operating procedures, inspections, and data collection. While BLM state offices collect detailed data on animals that die during gathers, the information is not compiled by BLM headquarters in its centralized database, nor is it reported to the public. In addition, BLM does not regularly provide to the public the information it tracks on the treatment of animals in short- and long-term holding and on adoption inspections. Making more of these data available may help inform the public about the treatment of the animals and improve transparency. Beginning in 1998, until the last horse slaughterhouse in the United States shut down in 2007, BLM sought agreements with all three slaughter facilities to alert BLM of wild horses that entered their facilities. According to BLM data, since 2002, about 2,000 wild horses whose legal titles were obtained by private citizens either through adoption or purchase were slaughtered. During that same period, another 90 wild horses whose titles still belonged to BLM were retrieved from slaughterhouses by BLM and by wild horse groups. We reviewed the basic controls BLM has in place, but we did not evaluate their effectiveness. 
While BLM is required to implement controls to help ensure the humane treatment of wild horses and burros, such controls cannot provide absolute assurance that all agency objectives will be met. A variety of controls are used at various stages in the management of wild horses and burros, including for those animals that are gathered, in short-term holding facilities, in long-term holding facilities, adopted, or sold. BLM’s controls for gathers include standard operating procedures, inspections, and data collection. Data collected from 6 of the 10 states from fiscal years 2005 through 2007 indicate that mortality as a result of gathers is about 1.2 percent. Similarly, controls for short- and long-term holding include standard operating procedures, inspections, and data collection. BLM did not report any deaths due to neglect or abuse at holding facilities, aside from one animal that was repossessed by BLM after having been abused by an adopter. BLM has controls over the adoption of wild horses and burros, and data indicate that from 2005 to 2007, about 9 percent of adopters were not in compliance with BLM’s standards of care. BLM’s controls over humane treatment primarily apply to horses and burros before ownership is passed to private individuals, but BLM has also implemented some controls to protect horses and burros once ownership passes, such as when wild horses and burros are sold. For animals that are sold, since spring 2005, BLM has required buyers to sign a statement that they do not intend to slaughter the animals. BLM does not consistently track information on treatment during gather operations through a central database, nor does it report information about the treatment of animals during gathers, holding, or adoption inspections to the public. BLM has established controls, such as standard operating procedures and tracking systems, to help ensure humane treatment during gather operations. 
BLM hires contractors to remove wild horses and burros from the range. These contractors generally use helicopters to herd the animals into capture pens on the range (see fig. 8). Due to the stress caused to wild animals by gathering them into pens, gather operations have the potential to cause harm to wild horses and burros, such as nervous agitation; conflict between captured animals; or more rarely, animal death. Because of the potential for harm and to help ensure the safe and humane handling of all animals captured, BLM has implemented a range of standard operating procedures for its gather contractors. Prior to the start of gather operations, BLM personnel evaluate the site of the gather to determine whether it is suitable based on environmental and safety concerns. They also approve gather facility plans ensuring, among other things, that they do not present puncture or laceration hazards and that they prevent animals from seeing humans, vehicles, and other perceived threats. During the herding of the animals, BLM sets limitations on the distance and speed the animals will travel, depending on the condition of the animals and other factors. As the animals are herded into the gather site, BLM requires contractors to segregate horses by age and sex to reduce the possibility of conflict and to ensure that very young horses and burros are not left behind to fend for themselves on the range. Finally, as the captured animals are transported from the gather site to short-term holding facilities, contractors are required to follow procedures to ensure animal safety, such as using adequately sized motorized equipment that has been inspected for safety. BLM has managed gathers with standard operating procedures since the passage of the act in 1971. 
Although BLM’s controls are designed to enhance the safety of wild horses and burros during gather operations, some animals are accidentally killed in the course of gathers or are euthanized because of ill health or prior injury. Six of the 10 BLM state offices reported data about the number of animals that die as a result of their gather operations. Data collected from 6 of the 10 states from fiscal years 2005 through 2007 indicate that, of the 24,855 animals removed from these states during this period, about 1.2 percent were either euthanized or died accidentally (see table 13). Horses and burros sometimes die due to accidents during gather operations on the range or after they are brought to the holding pens. For example, wild horses will sometimes panic and break their necks against capture pens. Animals found with conditions that make it unlikely they will be able to live their lives without significant pain, such as lameness or club feet, are euthanized. Although BLM national and state officials told us that they sometimes record data about the animals accidentally killed or euthanized during gathers at the BLM state office level, BLM does not centrally compile these data or report them to the public on a regular basis at a national level. A BLM official told us that although BLM’s main tracking database has the capability to record the number of animals that are killed or euthanized during gathers, the agency generally does not use the database to do so because it was originally intended to track adoptions. Moreover, BLM has not regularly reported to the public how many wild horses and burros are killed in the course of gathers, although BLM officials have cited the data during public hearings. Some advocates and members of the public believe that gathers are held in secret and highlight individual cases of apparent mistreatment as evidence that inhumane treatment is widespread. 
However, a BLM official told us that it is BLM’s standard practice to allow the public and the media to observe gather operations, and BLM is required to hold public hearings prior to scheduled gathers using helicopters. If BLM does not improve its transparency by presenting reliable data to members of the public, it will continue to be vulnerable to accusations that gathers are generally cruel and inhumane. BLM has issued standard operating procedures to help ensure that wild horses and burros held in short-term holding facilities are well cared for. These include procedures for minimizing the animals’ excitement to prevent injury; separating horses by age, sex, and size; observing the animals on a regular basis; and recording information about the animals that BLM later uses to track them in its database. BLM’s short-term holding facilities are mostly maintained and directly managed by BLM, either on government property or on leased property. Several are at state prisons, and a few others are maintained by contractors in privately owned feedlots or ranches that BLM has leased. According to BLM staff, they regularly inspect the short-term holding facilities and the animals they hold. They inspect to see that the corral equipment is up to code and that animals receive appropriate veterinary care. For example, staff check to see that the horses’ hooves are regularly trimmed so that they do not become too long and cause injury. At two of the short-term holding facilities we visited, we observed specially constructed chutes that hold and rotate horses in place so that horses’ hooves can be trimmed more quickly, more easily, and with less risk to the animals and the employees than with other methods, such as using tranquilizer darts or roping (see fig. 9). BLM data indicate that the wild horses and burros held in short-term holding facilities from 2003 to 2007 had a mortality rate of about 5 percent. 
Specifically, for 2007, BLM reported 936 deaths in short-term holding facilities out of a total of 17,363 animals that passed through short-term holding facilities in that year. BLM reported that none of the animals in its care died of neglect or abuse between 2005 and 2007, aside from one case in 2006, in which a reclaimed adopted horse died in BLM care due to the effects of abuse suffered while it was in the care of an adopter. BLM data showed that the animals generally died due to sickness, broken limbs, or injuries sustained accidentally during gathers. BLM does not report this information regularly to members of the public, who remain concerned that the agency does not adequately care for animals in short-term holding. BLM has similar controls in place for its long-term holding facilities. BLM staff inspect long-term holding facilities annually to count the number of animals held. Staff also monitor pasture conditions, winter feeding, and animal health throughout the year. According to BLM staff, during these visits they ensure that the contractors comply with BLM provisions and discuss possible problems that can be corrected. In addition, veterinary staff from the Department of Agriculture’s Animal and Plant Health Inspection Service inspect long-term holding facilities annually; these inspections involve a full count of the horses held there, an inspection of the horses’ general health, and written reports. Animal and Plant Health Inspection Service reports from 2007 indicate that the horses kept in long-term holding sanctuaries are generally in “good” or “excellent” condition. These reports, however, highlight some areas for possible improvement. At one facility, one area for improvement was the proper disposal of the remains of animals that had died of natural causes. To help ensure the animals are well cared for, a contract veterinarian provides care when needed, at BLM direction and expense. 
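The roughly 5 percent short-term holding mortality rate can be reproduced directly from the 2007 counts cited above (936 deaths out of 17,363 animals that passed through short-term holding). A minimal sketch of that arithmetic, for illustration only and not BLM's own methodology:

```python
# Reproduce the short-term holding mortality rate from the 2007 counts
# cited in this report (illustrative arithmetic, not BLM's methodology).
deaths_2007 = 936         # deaths reported in short-term holding, 2007
throughput_2007 = 17_363  # animals that passed through short-term holding, 2007

mortality_rate = deaths_2007 / throughput_2007
print(f"2007 short-term holding mortality: {mortality_rate:.1%}")  # 5.4%
# Consistent with the roughly 5 percent rate BLM reported for 2003 to 2007.
```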
In addition to the inspections of long-term holding facilities for the well-being of the wild horses held there, contractors are required to count and report the number of horses held on a weekly basis for billing and payment purposes. In 2007, long-term holding contractors were paid an average fee of $1.27 per horse per day, or about $460 per horse per year. While this contract fee structure is not in itself a control that guarantees humane treatment, it provides a profit incentive for contractors to ensure the continued health of the horses. According to one BLM official, BLM does not regularly document the results of its inspections. This official told us that the agency would take actions and record them if it found problems, but the official generally has not found problems with the contractors that have warranted action beyond informal conversations to address minor issues. BLM collects data on how wild horses are cared for in long-term holding, including the number of animals that die there. The average mortality rate of wild horses in long-term holding from 2003 through 2007 was about 8 percent, but it fluctuated from a low of 5 percent to a high of 14 percent during that time period. Specifically, for 2007, BLM reported 938 deaths in long-term holding facilities. The number of wild horses in long-term holding in 2007 was 19,652. The animals that die in long-term holding are generally found in the pastures, and unless there is evidence of foul play, BLM does not investigate the cause of death. According to BLM, barring any evidence to the contrary, it is assumed that the animals in long-term holding die of old age. Officially, BLM reported about 95 percent of the animal deaths in long-term holding as “undiagnosed.” Some of the other causes of death reported included old age and respiratory illness. No animals in long-term holding died from neglect or abuse, according to BLM reports. 
Although BLM collects these data, it does not report them regularly to the public. In the absence of such data, some members of the public who advocate greater protection for wild horses have repeatedly expressed their concern that BLM does not adequately care for animals in long-term holding. The act requires BLM to determine that adopters have provided humane conditions, treatment, and care for adopted animals for at least 1 year before BLM transfers ownership to the adopter. To implement the act, BLM has established policies for inspecting adopted horses or burros in this first year through telephone calls or personal visits. BLM inspections focus on the condition of the animal; the condition of the facilities; and whether the adopter has notified BLM if the adopted animal has been moved, was stolen, has escaped, or has died. Prior to taking possession of an adopted animal, BLM requires that adopters describe the facility where they will maintain the adopted animal. This is documented in their application, which carries penalties for providing false information. According to BLM data, from 2005 through 2007, an average of about 9 percent of adopted wild horses and burros that still belonged to the government were not treated in compliance with BLM standards (see table 14). BLM randomly selects for inspection a sample from the universe of approximately 5,000 adopters per year who have not yet received title to their adopted animals. BLM inspects these adopters in order to generate a statistical estimate of the percentage of adopted animals kept under conditions that do not comply with BLM’s policies and standards. The most common conditions in need of improvement included the failure to report changes in the animal’s location or status and substandard facilities, such as inadequate fencing or shelter. Less common conditions included lack of care of the animal, such as inadequate feeding or failure to trim the animal’s hooves before they grew too long. 
In addition, BLM policy directs that officials or certified volunteers conduct personal inspections of all adopted animals whenever BLM receives complaints about mistreatment or when an individual or organization adopts more than four wild horses or burros at one time. As with the data collected on the animals in short- and long-term holding, BLM does not provide the results of its adoption inspections to the public. The limited information regularly provided to the public on the treatment of these animals contrasts with the comparatively large amount of information BLM provides on the program’s Web site regarding AML and population estimates for each HMA. In the case of animals that were legally sold, BLM has implemented limitations to prevent these animals from being resold to slaughter facilities. In 2004, the act was amended to direct BLM to sell, “without limitation,” excess wild horses and burros more than 10 years of age or that had been offered unsuccessfully for adoption at least three times, until all excess animals for sale are sold or until AML is met in all HMAs. However, shortly after BLM began to sell wild horses and burros without limitation, in early 2005, it was discovered that 41 of these wild horses had been slaughtered. In April 2005, BLM suspended its wild horse sales program; it resumed sales in May 2005, after adding controls intended to restrict the sale of animals for the purpose of selling them for slaughter. These controls included BLM’s requirement that buyers sign a statement that they do not intend to sell the animals for slaughter and verification that potential buyers would provide adequate care for the animals. Although BLM is not required to protect animals after ownership has passed to adopters or buyers, beginning in 1998 it implemented controls to help prevent their slaughter. BLM negotiated agreements with all three U.S. facilities that operated horse slaughterhouses. 
The slaughterhouses agreed to alert BLM to all wild horses that entered their facilities and to refrain from slaughtering those wild horses whose titles still belonged to BLM. According to BLM data, which are available beginning in 2002, about 2,000 wild horses whose legal titles had been obtained by private citizens through adoption or purchase were slaughtered. During that same period, at least 90 adopted wild horses that were still owned by the government were brought to these slaughterhouses, and all were retrieved by BLM and interested wild horse groups. As of fall 2007, all horse slaughter facilities in the United States had been shut down following unsuccessful legal challenges to state laws effectively banning the practice. In January 2007, the U.S. Court of Appeals for the Fifth Circuit ruled that a 1949 Texas law banning the sale, possession, or transfer of horsemeat applied to the two slaughterhouses in Texas. In September 2007, the U.S. Court of Appeals for the Seventh Circuit upheld an Illinois ban. These rulings effectively closed the plants and ended horse slaughter in the United States. Even though all U.S. horse slaughter facilities have been closed, it is still possible for wild horses and burros to be sold to facilities outside the United States. Prior to the closures, about 50,000 domestic horses were brought to slaughter in the United States annually between 2001 and 2004. Exporting horses and burros to other countries, such as Canada or Mexico, for slaughter is generally not prohibited; for example, about 3,000 horses per month were exported for slaughter in 2007, according to Department of Agriculture information. We attempted to determine how many of these horses were at one time wild, but we were not able to do so. 
The Department of Agriculture, which certifies the inspections of horses and other livestock exported to other countries, is not required to report, and does not report, how many of the exported horses were once wild horses. The long-term sustainability of BLM's Wild Horse and Burro Program depends on the resolution of two significant challenges. First, holding costs are overwhelming the program's ability to manage animals on the range and will continue to do so if BLM does not consider alternatives to holding. Second, BLM has limited options for dealing with unadoptable animals off of the range because its alternatives under the act—humane destruction of the animals or selling the animals without limitation—are thought to be unacceptable to the public. As a result, BLM has placed over 30,000 wild horses and burros in holding. The portion of the Wild Horse and Burro Program's spending directed toward short- and long-term holding has increased from 46 percent of the program's direct costs in 2000 to 67 percent in 2007, leaving a smaller portion of the budget available for on-the-range management activities. Much of the increase has occurred because accelerated removals implemented to reach AML have coincided with a decline in adoption demand. Because long-term holding facilities are at capacity, BLM has had little choice but to hold excess unadoptable horses in more expensive short-term holding. BLM's spending on short- and long-term holding has increased from about $7.0 million in 2000, or 46 percent of the program's direct costs, to about $20.9 million in 2007, or 67 percent of the program's direct costs (see fig. 10). In 2008, BLM anticipates that holding costs will account for about 74 percent of the program's direct costs. To deal with its long-term holding problem, BLM has primarily sought increased funding to open additional long-term holding facilities. 
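Because the report gives both the dollar amounts and the percentage shares for holding costs, the program's implied total direct costs can be back-calculated. The following is a quick illustrative check of that arithmetic, using only figures from the report:

```python
# Figures from the report: holding spending ($ millions) and its share
# of the program's direct costs in each year.
holding = {2000: (7.0, 0.46), 2007: (20.9, 0.67)}

for year, (dollars, share) in holding.items():
    total = dollars / share    # implied total direct costs
    other = total - dollars    # implied spending on everything else
    print(f"{year}: total direct costs ~${total:.1f}M, "
          f"non-holding ~${other:.1f}M")
```

The check shows that while holding spending roughly tripled (from about $7 million to about $21 million), the funds left for all other activities grew only modestly (from roughly $8 million to roughly $10 million), which is the squeeze on on-the-range management the report describes.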
However, funding is not likely to increase in the future, and limited funding is forcing BLM to make difficult choices. For example, in January 2008, BLM considered canceling all remaining removals scheduled for the fiscal year because of the amount needed for short- and long-term holding. As of July 2008, BLM was seeking the funds to continue these removals by redirecting money from other BLM activities to the Wild Horse and Burro Program. As a result, under current funding levels, BLM must now choose between managing the range to prevent overpopulation and exercising one or both of its other options—destroying animals or selling them without limitation. To continue to reduce overpopulation on the range by using gathers alone, BLM projects that the program's budget would have to increase from about $36 million in 2008 to about $77 million by fiscal year 2012. If BLM does not receive this increase or exercise its other options to reduce populations off the range, it will not have sufficient funds to manage wild horses and burros on the range, and populations will sharply increase. BLM's current projections indicate that caring for unadoptable animals would reduce the agency's gathers to an average of about 4,500 animals per year, which would only be enough to prevent animals from dying from the effects of overpopulation and drought. At these removal levels, BLM projects that the on-the-range population would reach 50,000 animals by 2012—about 80 percent greater than the upper limit of AML. This population level would be greater than the level prior to the beginning of BLM's 2001 strategic plan. Since 2004, BLM has had the goal of reducing the total population on the range to the midpoint of AML. 
If it were to reach this level, which is currently about 22,588 animals, an annual population growth rate of 20 percent would require the removal of about 4,500 animals per year to maintain that level, approximately equal to the recent adoption rate. Assuming that rate remained constant, fewer animals would be sent to long-term holding. However, even if BLM is able to reach a balance between animals removed and those adopted, it still faces the challenge of dealing with the 30,088 animals currently held in short-term and long-term holding facilities across the country. Furthermore, the number of animals in holding would exceed 40,000 if BLM were to remove the approximately 11,000 animals necessary to reach the midpoint of AML. BLM has a number of research projects under way and ideas in development that could slow the population increase on the range. These include fertility control efforts, such as the development of a fertility vaccine (see app. II for more information on this vaccine), and releasing sterilized male horses back to the range after capture. Given that many existing HMAs are already over AML, releasing a large number of sterilized male horses or nonreproducing herds back to the range as a means of reducing future holding costs would likely require either changing existing land use decisions within BLM's existing authority (to increase AMLs, expand existing HMAs, or designate new HMAs) or seeking new legislative authority. Under the 1971 act, the land available for the management of wild horses and burros is limited to the areas where the animals existed at the time of the act. The originally designated herd areas consisted of 53.5 million acres, compared with the existing HMA acreage of 34.3 million acres, a difference of 19.2 million acres. Specifically, the BLM-owned acreage managed for wild horses and burros has decreased from 42.2 million acres to 29.0 million acres, a difference of 13.2 million acres. 
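The balance described above, a 20 percent annual growth rate at the AML midpoint offset by roughly 4,500 removals per year, can be checked with simple arithmetic. The starting population and growth rate come from the report; the compounding projection below is illustrative only and ignores the removals, adoptions, and drought effects that BLM's own projections account for.

```python
# Figures from the report.
aml_midpoint = 22588   # on-the-range population at the midpoint of AML
growth_rate = 0.20     # annual population growth rate cited in the report

# Annual increase if the population sits at the AML midpoint: this is
# approximately the number of animals BLM would need to remove each year
# to hold the population steady.
annual_increase = aml_midpoint * growth_rate
print(f"Removals needed per year: ~{annual_increase:,.0f}")

# Without removals, the population compounds quickly.
population = aml_midpoint
for year in range(1, 6):
    population *= 1 + growth_rate
    print(f"Year {year}: ~{population:,.0f} animals with no removals")
```

The first line reproduces the report's figure of about 4,500 removals per year; the projection shows why, with removals curtailed, populations "will sharply increase," more than doubling within five years in this simplified model.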
As we mentioned earlier, BLM is in the process of compiling a history of actions that led to these changes. At this point, however, it is not clear how much of the 13.2 million acres is still public land under BLM's control. While BLM could change AMLs, expand existing HMAs, or designate new HMAs within its existing authority, BLM is a multiple-use agency that weighs the needs of wild horses and burros against other competing uses. Alternatively, should BLM choose to do so, it could pursue new legislative authority to allow nonreproducing herds to be relocated to areas where they were not found at the time of the act. We believe that it is important to consider increasing AML or expanding HMA acreage only as a means to accommodate nonreproducing herds. Increasing the number of reproducing animals on the range without corresponding solutions for fertility control or declining adoption demand will, in the long run, only exacerbate BLM's problems in dealing with excess animals. Despite these budget problems, BLM has avoided using two options in the act for dealing with unadoptable animals because of concerns over the public and congressional reaction to the large-scale slaughter of thousands of healthy horses. The Wild Free-Roaming Horses and Burros Act, as amended, requires that excess animals for which the adoption demand is not sufficient to absorb all the animals removed from the range be destroyed in the most humane and cost-efficient manner possible or, under certain circumstances, be sold without limitation. The 1978 amendments to the original 1971 act directed that “[t]he Secretary shall cause additional excess wild free-roaming horses and burros for which an adoption demand by qualified individuals does not exist to be destroyed in the most humane and cost efficient manner possible.” From 1981 to 1982, BLM destroyed at least 47 excess animals. 
BLM decided not to destroy excess unadoptable animals in 1982, after the Director issued a policy prohibiting the destruction of healthy animals because of public dismay. Furthermore, from fiscal year 1988 through fiscal year 2004, Congress prohibited BLM from using its Management of Lands and Resources appropriations to destroy excess healthy, unadoptable wild horses and burros. In our 1990 report, we found that keeping excess animals in long-term holding was costly and recommended that BLM examine alternatives, such as sterilizing animals and releasing them back into the wild. Although BLM was prohibited from using its Management of Lands and Resources appropriations for humanely destroying excess animals through euthanasia at the time of that report, we also recommended that BLM consider this action as a last resort in the event that Congress lifted the prohibition in the future. The recurring prohibition in the annual appropriations bills ended after fiscal year 2004. Since then, BLM has no longer been prohibited from using its Management of Lands and Resources appropriations to carry out the requirement to destroy excess animals. BLM still has not used this option. In 2004, Congress provided BLM with an alternative to destroying unadoptable excess animals by amending the act to state that “[a]ny excess animal or the remains of an excess animal shall be sold if—(A) the excess animal is more than 10 years of age; or (B) the excess animal has been offered unsuccessfully for adoption at least 3 times.” Furthermore, the amendment stipulated that the excess animals “shall be made available for sale without limitation.” BLM has instead imposed limitations on the sales of excess animals in an effort to reduce the risk that animals purchased at a low price would be resold to slaughterhouses for profit. As a result, BLM is not in compliance with the act. 
BLM officials told us that they have chosen not to destroy excess animals or sell them without limitation because of concerns about public and congressional reaction to the large-scale slaughter of thousands of healthy horses. Various BLM officials at different levels of responsibility also told us that the agency has not complied with these provisions because doing so would pose an immediate threat to the careers of any officials involved, given the anticipated negative reaction of the public and Congress. Nevertheless, officials told us that, as of June 2008, budget constraints had forced BLM to reconsider all of its options. Specifically, for fiscal year 2009, BLM is considering euthanizing about 2,300 horses from short-term holding—about one-third of the animals currently in short-term holding. In addition, it is considering selling without limitation about 8,000 animals from both short- and long-term holding. However, as of August 31, 2008, legislation was pending in the 110th Congress that would repeal the directive for BLM to sell animals without limitation, but not the requirement to destroy unadoptable excess horses. Other than one pilot project, BLM has not initiated strategies to reduce the number of horses it currently manages in long-term holding and has not formally considered other possible solutions to indefinitely caring for horses in long-term holding. BLM officials who lead state Wild Horse and Burro Programs suggested several actions that could be taken to alleviate off-the-range costs to the program, but many of these suggestions would require changes in the law or BLM regulations. The most common suggestion, made by 4 of the 10 state leads, was that the federal government should provide incentives, such as monetary incentives or tax deductions, for private individuals or organizations to care for unwanted wild horses. In 2003, BLM initiated a pilot project in Wyoming to pay private ranchers a one-time lump sum to care for unadoptable excess animals. 
This pilot project ended because of a lack of up-front funds. In addition, a BLM official familiar with the project told us that private ranchers had less interest in the project as the market for cattle grazing improved. Implementing tax deductions would likely require changes in the tax law. Another suggestion, made by three of the state leads, was that the act should be changed to allow the government to manage unadoptable wild horses and burros on public or private lands outside the areas where they were originally found. The act currently does not allow BLM to relocate wild horses and burros to areas of public lands where they were not found when the act was passed. To date, BLM has not sought the legislative changes that would make these suggestions possible. The management of a program consisting of wild free-roaming animals is unique within BLM, and it presents distinct management challenges. While BLM has made significant progress in increasing the number of HMAs that have set AML and in moving toward meeting AML, its recent removal efforts have resulted in the agency managing almost the same number of animals off the range as it manages in the wild. Because an ever-increasing amount of funding is spent on caring for animals off the range, little funding is left to conduct important on-the-range management activities, as originally envisioned in the act. Now that BLM is closer to meeting AML, it is important for field offices to have the resources necessary to maintain those levels and to monitor whether those levels indeed create the “thriving natural ecological balance” called for in the act. Future changes to AML determinations should be based on consistent factors across HMAs. As the more experienced senior BLM staff who set the existing AMLs turn over to newer, more junior staff, it is important that the newer staff have clear official guidance to follow in making AML determinations. 
It is also important for the management of the program that BLM have the most accurate population estimates possible. While counting wild free-roaming animals is an inherently challenging task, the widespread use of statistically based counting methods across more HMAs, as appropriate, would provide a scientifically sound basis for compensating for possible undercounts. BLM provides a great deal of information about the Wild Horse and Burro Program through its Web site, including information on AML and population estimates for each HMA. However, despite public concerns about the humane treatment of these animals, BLM has not provided the public with easily accessible information about their treatment. In some cases, BLM headquarters does not centrally compile information on the treatment of animals during gathers. Providing additional information on the treatment of animals during gathers and after their removal from the range would help inform the public and improve transparency. In our 1990 report, we noted that given the amount of federal resources needed to maintain unadoptable excess horses in long-term holding, BLM would need to seek alternative options. At the time, we recommended that BLM consider a variety of disposal options for these horses that were not being used, including sterilization and euthanasia. Today, about 20 years after the first long-term holding facility opened, with adoption demand declining and alternative disposal options still not being used, BLM is continuing to open new long-term holding facilities to care for unadoptable wild horses, and the costs continue to escalate. Cost-effective alternatives for long-term holding are still needed. BLM is faced with a dilemma as it attempts to comply with the act. 
On one hand, the act directs BLM to protect and preserve wild horses and burros; on the other hand, it directs BLM to destroy excess animals for which an adoption demand does not exist or, under certain circumstances, to sell them without limitation, which has led to the slaughter of some animals. BLM has committed to caring for these animals, even though the law requires their humane destruction or sale without limitation, and the cost of their care off the range is now overwhelming the program. The program is at a critical crossroads. Within the program's existing budget, BLM cannot afford to care for all of the animals off the range while at the same time managing wild horse and burro populations on the range. Resource limitations are forcing BLM to reconsider all available management options, and a workable solution must be developed to bring BLM into compliance with the act. We make five recommendations to the Secretary of the Interior. To improve the management of BLM's Wild Horse and Burro Program, we make four recommendations that the Secretary of the Interior direct BLM to: finalize and issue the new Wild Horse and Burro Program Handbook that establishes a policy for setting AML, to ensure that AML is determined based on consistent factors across HMAs into the future; continue to adopt and employ statistically based methods to estimate animal populations across HMAs, such as those being evaluated by animal population researchers, to improve the accuracy of population estimates integral to BLM's management of wild horses and burros on the range and in planning for the capacity needed for excess animals once they are removed from the range; track the number of animals harmed or killed during the gather process in a centralized database system and determine what information on the treatment of gathered animals, short-term and long-term holding animals, and adopted animals could easily be provided to the public to help inform them about the treatment of 
wild horses and burros; and develop cost-effective alternatives to the process of caring for wild horses removed from the range in long-term holding facilities and seek the legislative changes that may be necessary to implement those alternatives. To address BLM’s noncompliance with the act, as amended, we recommend that the Secretary of the Interior direct BLM to discuss with Congress and other stakeholders how best to comply with the act or amend it so that BLM would be able to comply. As part of this discussion, BLM should inform Congress of its concerns with (1) the act’s requirement for the humane destruction of excess animals and (2) the possible slaughter of healthy horses if excess animals are sold without limitation, under certain circumstances, as the act requires. We provided a draft of this report to the Department of the Interior for review and comment. The department concurred with our findings and recommendations and believes they will help to improve the Wild Horse and Burro Program. In addition, the department provided several technical clarifications, which we incorporated as appropriate. Appendix IV contains the Department of the Interior’s comment letter. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of the Interior, the Director of BLM, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-3841 or nazzaror@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. 
We examined (1) the Bureau of Land Management’s (BLM) progress in managing wild horses and burros on the range through setting and meeting appropriate management levels (AML); (2) BLM’s management of wild horses and burros off of the range through adoption, sales, and holding facilities; (3) the controls BLM has in place to help ensure humane treatment of wild horses and burros; and (4) what challenges, if any, BLM faces in managing for the long-term sustainability of the Wild Horse and Burro Program. We were also asked to review how and why the acreage available for wild horses and burros had changed since the 1971 act. We did not examine the acreage issue because BLM is in the process of compiling a history of acreage determinations. BLM officials expect their review to be completed by March 2009. To examine how BLM manages wild horses and burros on and off of the range and to identify the challenges facing BLM, we reviewed relevant laws, regulations, BLM policy, and BLM strategic plans. We also surveyed, and analyzed documents from, 26 of the 44 BLM field offices that manage wild horses and burros. We collected and reviewed relevant resource management decision documents from the surveyed field offices to help corroborate their responses about specific questions, including those about factors used to make AML determinations and gather decisions. We surveyed field offices in all 10 western states that manage HMAs. The field offices we surveyed represent 82 percent of all BLM acres managed for wild horses and burros, 74 percent of all BLM managed wild horses, and 69 percent of burros on the range at the time of the survey. 
Our survey sample included 100 percent of the BLM field offices that manage HMAs in Nevada, including the Tonopah Field Station (seven offices); three randomly selected field offices from each of the five states whose field offices or district offices manage a population of wild horses and burros that falls between 1,000 and 10,000 animals (Arizona, California, Oregon, Utah, and Wyoming); and one randomly selected field office from each of the four states whose field offices manage a population of wild horses and burros of less than 1,000 (Colorado, Idaho, Montana, and New Mexico). Because most of our survey questions focused on the management of a particular HMA, we judgmentally selected an HMA for each field office to consider in responding to our survey. We considered a variety of factors in making these HMA selections, including herd population size and whether the HMA had met AML (according to 2007 BLM statistics). Table 15 lists the 26 BLM field offices and HMAs we selected as part of our survey. The survey included several open-ended questions aimed at determining the primary challenges associated with meeting and maintaining AML, the primary challenges facing the Wild Horse and Burro Program as a whole, and suggestions for ways to improve the program. Two GAO analysts independently reviewed these open-ended survey responses, agreed upon the categories for coding each response, and resolved any disagreements in coding to determine what the respondents as a whole thought about these issues. The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information available to the respondents, or in how the data are entered into a database or analyzed can introduce unwanted variability into the survey results. 
We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. For example, survey specialists designed the questionnaire in collaboration with GAO staff with subject matter expertise. The draft questionnaire was then pretested with officials from five BLM field offices in four different states to ensure that the questions were relevant, clearly stated, and easy to comprehend. We also conducted follow-up phone calls to clarify ambiguous or incomplete responses. We received usable responses from all field offices that we surveyed—a 100 percent response rate. See appendix III for a summary of the survey responses not presented elsewhere in the report. We also interviewed agency officials at BLM Headquarters; the National Program Office in Reno, Nevada; and Wild Horse and Burro Program State Leads from each of the 10 states that manage wild horses and burros. In addition, we conducted site visits at two field offices that manage HMAs in Nevada and Colorado, one long-term holding facility in Oklahoma, and three short-term holding facilities in Colorado, Nevada, and Wyoming, and we attended two adoption events in Arizona and Colorado. To examine humane treatment, we reviewed relevant laws, regulations, and BLM policies. We collected and analyzed reports from BLM Headquarters and state offices, as well as data from BLM's compliance database. We also interviewed BLM compliance officials from two states, a veterinarian from the Department of Agriculture's Animal and Plant Health Inspection Service, and citizens and advocacy groups that work to promote the well-being of wild horses and burros. 
As part of our overall methodology, we interviewed a range of stakeholders interested in BLM's management of the Wild Horse and Burro Program, including, but not limited to, the American Wild Horse Preservation Campaign, the Animal Welfare Institute, the Cloud Foundation, the Humane Society of the United States, the National Cattlemen's Beef Association, and Nevada Bighorns Unlimited. We conducted this performance audit from September 2007 to October 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the information provided in this report to answer our primary objectives, we encountered two other issues related to BLM's management of the Wild Horse and Burro Program. These issues primarily relate to BLM's on-the-range management activities, including fertility control and genetic variability. Since 1992, BLM has been pursuing a fertility control vaccine, porcine zona pellucida, to use as a tool for slowing the reproductive rate in wild horse populations. A slower reproductive rate would reduce the number of animals that would have to be gathered and removed, adopted, and held. BLM officials do not consider this treatment the best short-term management tool for achieving AML but believe that once HMAs are at AML, fertility treatment can help to maintain that level. Much research has been conducted on the use of the vaccine in domestic and wild horses. The Department of the Interior's National Park Service has used this treatment to successfully manage wild horse populations at two national seashores. 
BLM field offices have been directed to consider the use of fertility control as an alternative in their gather plans, but they are not required to choose this research tool. The vaccine is considered experimental, and as such, there are barriers to its use. Since 2004, 47 HMAs have used fertility treatments, and a total of about 1,800 wild horses have been injected with the treatment. BLM considers the use of this treatment a research tool; however, according to a prominent wild horse fertility researcher, BLM should more actively pursue its use as a management tool. According to BLM officials, fertility control may offer the possibility of reducing reproduction rates and costs, but BLM will still need to place horses in long-term holding in the future. Herd health is another important component of BLM's on-the-range management of wild horses and burros. Specifically, it is important to maintain a degree of genetic variability to decrease the likelihood of disease and to maintain the biological fitness of the population. The amount of genetic variability that is sufficient to maintain a healthy population, however, is difficult to discern. Some groups have criticized BLM for setting AMLs at levels of less than 100 or 150 animals. As of February 2008, 135 of the 199 HMAs had an AML upper limit of 150 or less (see table 16). Several of these smaller HMAs, however, are part of a complex of HMAs that are managed as one unit with regular genetic interchange. For example, 13 complexes in Nevada encompass 45 of the state's 102 HMAs. According to a leading researcher in the field of wild horse genetics, however, a herd with a population of less than 100, including herds as small as 10 to 15 horses, can be maintained with the introduction of at least one or two horses every 6 to 7 years. BLM manages a few herds that show strong evidence of old Spanish heritage, which no longer exists outside of the Americas. 
For example, the Kiger Mustangs of Oregon and the Pryor Mustangs of Montana have some colonial Spanish traits. For most of the HMAs, however, genetic variability is important primarily in maintaining the health of the herd, rather than managing for a specific genetic trait or bloodlines. The following tables summarize responses collected through our survey instrument that was sent to 26 BLM field offices that manage HMAs. See appendix I for a complete explanation of which offices were chosen and the methodology used to select those field offices and specific HMAs. Our survey was divided into two sections. The first asked questions specific to the field offices’ management of particular HMAs. The second section asked questions related to the field offices’ general management of all HMAs. In addition to the individual named above, Jeffery D. Malcolm, Assistant Director; Ulana Bihun; Kevin Bray; Lee Carroll; Benjamin Shouse; Gregory Wilmoth; and Elizabeth Wood made key contributions to this report. Also contributing to the report were Beverly Ross and Monica Wolford.
The Department of the Interior's Bureau of Land Management (BLM) manages about 33,100 wild horses and burros on 199 Herd Management Areas (HMA) in 10 western states. Under the Wild Free-Roaming Horses and Burros Act of 1971, as amended, BLM is to protect wild horses and burros, set appropriate management levels (AML), maintain current inventory counts, and remove excess animals to prevent overpopulation and rangeland damage. Over the years, various stakeholders have raised issues about BLM's management of the animals on and off the range. GAO examined (1) BLM's progress in setting and meeting AML; (2) BLM's management of animals off the range through adoptions, sales, and holding facilities; (3) BLM's controls to help ensure the humane treatment of animals; and (4) what challenges, if any, BLM faces in managing for the long-term sustainability of the program. GAO surveyed and analyzed documents from 26 of the 44 BLM offices that manage wild horses and burros. BLM has made significant progress toward setting and meeting AML (the optimum number of animals that results in a thriving natural ecological balance and avoids range deterioration). BLM has set AML for 197 of 199 HMAs. Most of the field offices GAO surveyed considered similar factors in determining AML, such as rangeland conditions; however, BLM has not provided specific formal guidance to the field offices on how to set AML. Without clear guidance, BLM cannot ensure that the factors considered in future AML revisions will be consistent across HMAs. At a national level, in 2007, BLM was closer to meeting AML (about 27,200 animals) than in any other year since AMLs were first reported in 1984. The extent to which BLM has actually met AML depends on the accuracy of BLM's population counts. Nineteen of the 26 field offices GAO surveyed used a counting method that, researchers say, consistently undercounts animals and does not provide a statistical range of population estimates. 
Undercounting can put animals at risk and lead to increased program costs. The number of animals removed from the range is far greater than the number adopted or sold, which has resulted in the need for increased short-term and long-term holding. Since 2001, over 74,000 animals have been removed from the range, while only about 46,400 have been adopted or sold. Thirty-six percent fewer animals were adopted in 2007 than the average annual number adopted in the 1990s. As of June 2008, BLM was holding 30,088 animals in holding facilities, up from 9,807 in 2001. To accommodate the increased removals and declining adoptions and sales, BLM has increased the number of short-term and long-term holding facilities. BLM has implemented multiple controls to help ensure humane treatment, including random checks on adopted horses and agreements with adopters and buyers to prevent slaughter. Although BLM state offices collect data on the treatment of the animals, BLM does not always compile the information in its central database or report it to the public. Providing additional information on the treatment of these animals could better inform the public and improve transparency. The long-term sustainability of BLM's Wild Horse and Burro Program depends on the resolution of two significant challenges: (1) If not controlled, off-the-range holding costs will continue to overwhelm the program. Direct costs for holding animals off the range increased from $7 million in 2000 (46 percent of the program's direct costs) to $21 million in 2007 (67 percent). In 2008, these costs could account for 74 percent of the program's budget. (2) BLM has limited options for dealing with unadoptable animals. The act provides that unadopted excess animals shall be humanely destroyed or, under certain circumstances, sold without limitation. However, BLM only manages these animals through sales with limitations. 
BLM is concerned about the possible public reaction to the destruction of healthy animals.
Congress first provided a general definition of homeless individuals in 1987 in what is now called the McKinney-Vento Act. In 2002, Congress added a definition for homeless children and youths to be used in educational programs. Prior to the enactment of the HEARTH Act, the McKinney-Vento Act generally defined a homeless individual (McKinney-Vento Individual) as someone who lacks a fixed, regular, and adequate nighttime residence and has a nighttime residence that is a supervised shelter designed to provide temporary accommodations; an institution providing a temporary residence for individuals awaiting institutionalization; or a place not designed for, nor ordinarily used as, a regular sleeping accommodation. However, in the provisions on education of children and youths, the McKinney-Vento Act also specifically included children and youths who are sharing the housing of other persons due to loss of housing, economic hardship, or a similar reason (that is, are doubled up); living in motels, hotels, trailer parks, or camping grounds due to the lack of alternative adequate accommodations; awaiting foster care placement; or living in substandard housing (McKinney-Vento Children and Youth). For its homeless assistance programs, HUD has interpreted the McKinney-Vento Act definitions so that a homeless individual is someone who resides in places not meant for human habitation, such as in cars, abandoned buildings, housing that has been condemned by housing officials, or on the street; in an emergency shelter or transitional or supportive housing; or in any of these places, but is spending a short time (up to 30 consecutive days) in a hospital or other institution. 
Additionally, individuals are considered homeless if they are being evicted within a week from a private dwelling and no subsequent residence has been identified and the person lacks the resources and support networks needed to obtain housing; being discharged within a week from an institution in which the person has been a resident for 30 or more consecutive days and no subsequent residence has been identified; or fleeing a domestic violence situation. The HEARTH Act includes changes in the general definition of homelessness, but the new definition and associated regulations had not taken effect by June 2010. The HEARTH Act broadened the general definition and provided greater statutory specificity concerning those who should be considered homeless but did not change the McKinney-Vento Children and Youth definition. For example, the HEARTH Act definition includes individuals and families who will be evicted within 2 weeks, or who can otherwise demonstrate that they will not be able to remain in their current living place for more than 2 weeks. The HEARTH Act definition includes some individuals, families, and youths who would have been considered homeless under the McKinney-Vento Children and Youth definition but not under the prior individual definition. Some federal programs that were authorized outside of the McKinney-Vento Act use other definitions of homelessness. For example, the Runaway and Homeless Youth Act, first introduced as the Runaway Youth Act of 1974, defined a homeless youth as being generally from the ages of 16 to 22, unable to live in a safe environment with a relative, and lacking any safe alternative living arrangements. Within various programs, the definition of homelessness determines whether individuals are eligible for program benefits. 
For the Education of Homeless Children and Youth program, meeting the definition entitles the student to certain benefits; however, in other cases, such as HUD’s homeless assistance programs or HHS’s Runaway and Homeless Youth programs, benefits are limited by the amount of funds appropriated for the program. For these programs, meeting the definition of homelessness does not necessarily entitle individuals or families to benefits. In addition, programs have other eligibility criteria, such as certain income levels, ages, or disability status. As illustrated in table 1, programs that provide targeted assistance primarily to those experiencing homelessness have different purposes, definitions of homelessness, and funding levels. One of these programs, HUD’s Homelessness Prevention and Rapid Re-Housing Program, was created under the American Recovery and Reinvestment Act (Recovery Act) of 2009, and many others received additional funding under that act. HUD’s Homeless Assistance Programs comprise a number of individual programs. These include the Emergency Shelter Grant Program, under which funding is provided on a formula basis, and competitive programs funded under the umbrella of the Continuum of Care (Continuum). The latter include the Supportive Housing, Shelter Plus Care, and Single Room Occupancy Programs. A Continuum is a group of providers in a geographical area that join to provide homeless services and apply for these grants. The Continuum is also responsible for planning homeless services, setting local priorities, and collecting homelessness data. Additionally, many federally-funded mainstream programs provide services for which those experiencing homelessness may be eligible. 
Some of these programs are required to provide services to those experiencing homelessness and may define it, others allow local providers to choose to target certain services to those experiencing homelessness or provide homelessness preferences using locally determined definitions, and still other programs do not distinguish between those experiencing homelessness and those not experiencing it (see table 2). The McKinney-Vento Act also authorized the creation of the U.S. Interagency Council on Homelessness (Interagency Council). Initially, the main functions of the Interagency Council revolved around using public resources and programs in a more coordinated manner to meet the needs of those experiencing homelessness. The McKinney-Vento Act specifically mandated the council to identify duplication in federal programs and provide assistance to states, local governments, and other public and private nonprofit organizations to enable them to serve those experiencing homelessness more effectively. In the HEARTH Act, the council, which includes 19 agencies, was given the mission of coordinating the federal response to homelessness and creating a national partnership at every level of government and with the private sector to reduce and end homelessness. This act also mandates that the Interagency Council develop and annually update a strategic plan to end homelessness. Several agencies overseeing targeted homelessness programs are required to collect data on segments of the homeless population. As illustrated in table 3, HUD, HHS, and Education all have met their requirements through their own data collection, and these sources differ in the housing data collected and the level of aggregation. In addition, the data collected necessarily reflect the definitions of homelessness included in the statutes that govern the relevant programs. 
Under the McKinney-Vento Act, HUD is to develop an estimate of homeless persons in sheltered and unsheltered locations at a 1-day point in time, so HUD requires Continuums to conduct a count of the sheltered and unsheltered homeless in their jurisdictions in January of every other year. Additionally, pursuant to the 2001 amendments to the McKinney-Vento Act, HUD was to develop a system to collect and analyze data on the extent of homelessness and the effectiveness of McKinney-Vento Act programs. As a result, HUD developed a set of technical data standards for the Homeless Management Information System (HMIS), which set minimum privacy, security, and technical requirements for local data collection on the characteristics of individuals and families receiving homelessness services. HMIS data standards allowed communities to continue using locally developed data systems and adapt them to meet HUD standards. Local Continuums are responsible for implementing HMIS in their communities, and Continuums can choose from many HMIS systems that meet HUD’s data standards. HUD officials said that by allowing Continuums to choose from multiple systems, more service providers participate and Continuums and service providers can modify existing systems to meet HUD standards and the community’s goals. Continuums report aggregated data to HUD annually. Results from analysis of the point-in-time count and HMIS data are reported in HUD’s Annual Homeless Assessment Report to Congress. Pursuant to the Runaway and Homeless Youth Act, HHS requires all service providers to collect data on youths who receive services through the Runaway and Homeless Youth Program. Grantees submit these data every 6 months to the Runaway and Homeless Youth Management Information System (RHYMIS), a national database that includes unidentified individual-level data. 
To demonstrate compliance with the Elementary and Secondary Education Act of 1965, Education requires states to complete Consolidated State Performance Reports that include data on homeless children and youths being served by Elementary and Secondary Education Act programs and the Education of Homeless Children and Youth Program, as amended. The McKinney-Vento Act requires local school districts to have homelessness liaisons, who work with other school personnel and those in the community to identify homeless children and youths, provide appropriate services and support, and collect and report data. States aggregate local data and annually report to Education cumulative numbers of homeless students enrolled in public schools by grade and primary nighttime residence. As part of its decennial population and housing census, the U.S. Census Bureau has programs designed to count people experiencing homelessness. The Census counts people at places where they receive services (such as soup kitchens or domestic violence shelters), as well as at targeted nonshelter outdoor locations. While the Census makes an effort to count all residents, including those experiencing homelessness, the 2010 Census does not plan to report a separate count of the population experiencing homelessness or a count of the population who use these services. Although federal agencies collect data on those experiencing homelessness, these data have a number of shortcomings and consequently do not capture the true extent and nature of homelessness. Some of these shortcomings derive from the difficulty of counting a transient population that changes over time, lack of comprehensive data collection requirements, and the time needed for data analysis. As a result of these shortcomings, the data have limited usefulness. Complete and accurate data are essential for understanding and meeting the needs of those who are experiencing homelessness and to prevent homelessness from occurring. 
According to HUD, communities need accurate data to determine the extent and nature of homelessness at a local level, plan services and programs to address local needs, and measure progress in addressing homelessness. HUD needs accurate data to fulfill its reporting obligations to Congress and to better understand the extent of homelessness, whom it affects, and how it can best be addressed. HUD’s point-in-time count is the only data collection effort designed to obtain a national count of those experiencing homelessness under the McKinney-Vento Individual definition, and approximately 450 Continuums conduct a point-in-time count in January of every other year. However, service providers and researchers we interviewed expressed concerns about the ability of HUD’s point-in-time count to fully capture many of those experiencing homelessness for reasons including the following:
- People experiencing homelessness are inherently difficult to count. They are mobile, can seek shelter in secluded areas, and may not wish to attract the notice of local government officials.
- Point-in-time counts do not recognize that individuals and families move in and out of homelessness and can experience it for varying lengths of time. These counts more likely count those experiencing homelessness for long periods rather than those experiencing it episodically or for short periods.
- Although homelessness can be episodic, the count is done biennially in January, which might lead to an undercount of families because landlords and others may be reluctant to evict families when the weather is cold or school is ongoing.
- Count methodologies vary by Continuum, can change from year to year, and might not be well implemented because counters are volunteers who may lack experience with the population.
- Large communities do not necessarily attempt to count all of those experiencing homelessness but rather may use estimation procedures of varying reliability. 
HUD provides technical assistance to communities, which helps them to develop and implement point-in-time count methodologies, and HUD officials said that methodologies and the accuracy of the count have improved. Additionally, HUD officials said that as part of their quality control efforts, they contacted 213 Continuums last year to address errors or inconsistencies in their data from fiscal year 2008. A communitywide point-in-time count demands considerable local resources and planning, and communities rely on volunteers to conduct counts of the unsheltered population. Continuums do not receive any funding from HUD to conduct the point-in-time counts, and using professionals or paid staff to conduct the count could be costly. Other federal data collected on those experiencing homelessness primarily or only capture clients being served by federally-funded programs. As a result, federal data do not capture some people seeking homeless assistance through nongovernmental programs, or others who are eligible for services but are not enrolled. For instance, while HUD grantees are required to participate in HMIS, participation is optional for shelters that do not receive HUD funding. HUD can use the annual Continuum funding application to assess the extent to which those shelters not receiving HUD funding participate in HMIS. In their funding applications, Continuums provide an inventory of shelter beds in their community and also provide the percentage of those beds that are located in shelters that participate in HMIS. HMIS participation rates vary widely across communities and shelter types. For example, one of the locations we visited reported data for less than 50 percent of its beds for transitional shelters, while another reported data for more than 75 percent of its beds. 
HUD officials said that while some Continuums have been slower than others to implement HMIS and receive full participation from their providers, according to HUD’s 2009 national housing inventory data of homeless programs, 75 percent of all shelter beds were covered by HMIS, including programs that do not receive HUD funding. The Violence Against Women Act prohibits service providers from entering individual-level data into HMIS for those in domestic violence shelters. Similarly, RHYMIS collects data only on those clients using its residential systems, but these serve only a small percentage of the estimated number of youths experiencing homelessness. HHS officials stated that nationwide, they fund only approximately 200 transitional living centers for young adults. Education also does not fully capture the extent of homelessness among school-aged children because all of the districts we visited used a system of referrals and self-reporting to identify those children. In one of the school districts we visited, an official said that, based on estimates of the number of children experiencing homelessness under the McKinney-Vento Children and Youth definition, the district was serving about half of those students. Many of the school officials and advocates with whom we spoke said the term homelessness carried a stigma that made people reluctant to be so identified, and two school systems had removed the word from the name of their programs. Additionally, federal data systems on homelessness may count the same individual more than once. HUD designed HMIS to produce an unduplicated count—one that ensures individuals are counted only once—of those experiencing homelessness within each Continuum. Providers in the same Continuum use the same HMIS system, and some Continuums have designed an open system, where providers can view all or part of an individual’s record from another provider within the Continuum. 
This is useful to providers because it helps them to understand an individual’s or family’s service needs. It also allows them to produce an unduplicated count of those using services in the Continuum because every person receiving services in the Continuum has a unique identifier in HMIS. However, it is difficult to share data across Continuums, and this can be done only if Continuums have signed agreements that protect privacy and are using the same HMIS system. Thus, clients may be entered into HMIS in more than one Continuum and counted more than once. Nonetheless, some states have constructed statewide HMIS systems to help avoid duplication in the data. Because RHYMIS has individual data on all program recipients in a single database, HHS can obtain an accurate count of the number of youths served by its residential programs. Education data also may be duplicative. Because students generally are counted as homeless if they experience homelessness at any point during the school year, students who change school districts during the year could be counted as homeless in both districts’ systems. While each agency makes efforts to avoid duplication in its data, it is not possible to determine how many total unique individuals federal homelessness programs have served because HUD, HHS, and Education data systems generally do not interface or share data. Further, the data in HMIS may not always accurately reflect the demographic information on families and individuals seeking shelter. For example, HMIS provides data for individuals and families, but the system may not accurately identify family members and track the composition of families over time. Using HMIS, service providers associate individuals entering into a shelter with a family if family members enter the shelter together. However, some families split up to obtain shelter, so the system would not track all families over time. 
In one of our site visit locations, we met a mother and son who were split up and placed in separate shelters. Because the mother was in an individual shelter and the son was in a youth shelter, HMIS would not associate these two as a family. Further, one service provider we spoke with said that HMIS may not always accurately track demographic information on individuals seeking emergency shelter. Some researchers and advocates told us that HMIS’s design limited its usefulness, and the extent to which service providers found their Continuum’s HMIS system useful varied across the four locations we visited. For example, a researcher who has extensively used HMIS told us that if service providers used the data they collected for HMIS to manage their programs, they would implement processes to help ensure data quality. But in three of the four locations we visited, many providers said they were unable to use HMIS for program administration and client case management. Many providers noted that they often had to enter information in several different databases, and they generally used their own database to administer their programs. Additionally, we found only two providers who developed data export tools that allowed their private systems to upload data to HMIS, and in both cases, the providers were unable to use their new tools after the Continuum switched HMIS software. HHS officials told us that they support providers’ development of tools to link data systems, but they do not provide funding for this endeavor. In contrast, service providers in one location we visited reported that the HMIS system they had adopted had options that allowed them to conduct comprehensive case management for clients and produce all of the reports required by the various organizations funding their programs and operations. 
HUD officials said that a community’s success in using HMIS for program administration and client case management depends on a variety of factors, including staff capability and the quality of the HMIS software that it chose to purchase or develop. HUD and Education data also have shortcomings and limited usefulness because of the time lag between initial data collection and the reporting of the data. HUD published the most recent report to Congress, which provided data for October 2008–September 2009, in June 2010. Education expected to publish data for the 2008–2009 school year in June 2010. Because of the time lag in the availability of HUD and Education data, the data have limited usefulness for understanding current trends in homelessness and the ongoing effects of the recession. However, HUD officials report that they have made progress in reducing the time it takes to analyze data and publish the agency’s annual report to Congress. The time lag from data collection to report issuance has decreased from almost 2 years to less than 1 year, but collecting data on homelessness and producing national estimates takes time, and HUD officials said there will always be some time lag. Additionally, in recognition of these shortcomings, HUD recently introduced the Homeless Pulse Project, which collects quarterly homeless shelter data from nine communities. These communities volunteered to submit data on a more frequent basis, but they are not representative of Continuums nationwide. HUD plans to expand the Pulse Project and add approximately 30 Continuums that have volunteered to participate. HHS’s RHYMIS data are more timely because grantees submit data every 6 months, and HHS makes the data available online approximately 1 month after the end of the reporting period. 
Although a researcher with special expertise in HMIS and several advocates we interviewed cited some examples of incompleteness or inaccuracy in HMIS data, agencies and Continuums have been trying to improve the completeness and accuracy of their data. For example, HUD provides incentives for Continuums to increase HMIS participation rates. In its competitive grant process, HUD evaluates the level to which Continuums participate in HMIS. HUD officials have also provided technical assistance to Continuums to assist them in increasing local HMIS participation rates. HMIS participation rates have increased over time. Several Continuums we contacted have been conducting outreach to private and faith-based providers to encourage them to use HMIS to improve data on homelessness. Additionally, according to HUD officials, HMIS data have been used to conduct research on the prevalence of homelessness, the costs of homelessness, and the effectiveness of interventions to reduce homelessness. Further, HUD supplements HMIS data with point-in-time data to enhance the information available on those experiencing homelessness. HHS has begun a project to get some of its homelessness programs to use HUD’s HMIS system. For example, as discussed in more detail further on, in December 2009, HHS established an agreement with HUD requiring PATH providers to use HMIS. To address the issues faced by emergency shelters in quickly collecting and entering data on individuals, some Continuums issue identification cards containing demographic information to clients during their initial intake into the shelter system. Clients can swipe the cards as they enter a facility, and HMIS automatically captures the data. Despite the limitations discussed above, HUD uses data from point-in-time counts to estimate the number of those experiencing homelessness on a single night in January. 
HUD reported that approximately 660,000 individuals and persons in families experienced sheltered and unsheltered homelessness on a single night in January 2008. However, this estimate does not include people who do not meet the definition of homelessness for HUD’s programs but do meet definitions of homelessness for other programs. For example, HUD’s counts would not include families living with others as a result of economic hardship, who are considered homeless by Education. Figure 1 shows the count of sheltered and unsheltered persons experiencing homelessness on a single night in January for the past 4 years. HUD also samples a number of communities and uses their HMIS data to estimate those experiencing homelessness in shelters during the year. HUD estimated that in 2008, 1.18 to 2 million people met HUD’s definition of homelessness and were sheltered at some time in the year. The estimate has a broad range because HUD uses a sample of 102 communities and not all of those communities can provide usable data. For Continuums that can provide data on at least half of the beds in their inventory, HUD assumes that the remaining beds are occupied in similar ways; HUD uses these results to estimate shelter use for Continuums that cannot provide such data. HUD officials noted that response rates have been steadily improving and the estimate’s range has decreased. For example, in 2008, 87 of the 102 communities in HUD’s sample provided usable data and another 135 communities voluntarily submitted data; while in 2005, 55 communities in a sample of 80 communities provided usable data and another 9 communities contributed data voluntarily. HUD estimates that individuals without children make up about two-thirds and families with children under 18 about one-third of the estimate. 
However, HMIS only captures individuals and families who are defined as homeless under the McKinney-Vento Individual definition. Additionally, as previously noted, concerns exist about HMIS’s ability to accurately record family status. HUD, HHS, and Education also report on other populations experiencing homelessness. HUD estimated that over the course of 2008, unaccompanied youths accounted for 2 percent of the sheltered homeless population, or approximately 22,000 unaccompanied youths who were homeless and sheltered. According to HHS, over the course of fiscal year 2008, approximately 48,000 youths experienced homelessness and received services from HHS’s Basic Center Program or Transitional Living Program, which have different eligibility criteria from HUD’s programs. Some youths may be in shelters funded by both HHS and HUD, and therefore be counted in both HMIS and RHYMIS, while others might be in shelters funded only by HUD or only by HHS and only included in the corresponding database. As shown in figure 2, Education reported that more than 770,000 homeless children received services in the 2007–2008 school year, but less than one quarter of these children—about 165,000—were living in shelters. HUD reported for that same year that approximately 150,000 children aged 6 to 17 were in shelters. Federally-funded mainstream programs, whose primary purpose is to provide a range of services and funds to low-income households, often provide these services and funds to those who are experiencing or have experienced homelessness or to those defined as being at risk of becoming homeless. Thus, while homelessness is not the primary focus of these programs, data collected by them could be useful for understanding the nature of homelessness. 
Further, several researchers and advocates with whom we spoke noted that they could better understand the dynamics of homelessness if these programs collected individual client-level data on homelessness and housing status as part of their routine data collection activities. However, these programs have not consistently collected data on homelessness and housing status. A few programs have collected individual data, some have collected aggregate data, and others collect no data on housing status at all. We identified several federally-funded mainstream programs that collect or are beginning to collect and report client-level data on persons experiencing homelessness to the federal agency overseeing the mainstream program:
- Public Housing Authorities (PHA) collect data on the homelessness status of households at the time the PHA admits the household to a housing assistance program, which includes both the Public Housing and Housing Choice Voucher programs; they report these data to HUD’s Office of Public and Indian Housing.
- HHS’s Substance Abuse Prevention and Treatment Block Grant program requires grantees to report participants’ living arrangements at entry and exit.
- DOL’s WIA Adult and Youth grantees also collect and report individual-level data on enrolled participants, including whether the client is homeless.
- HHS’s John H. Chafee Foster Care Independence Program has developed a survey that states must begin using by October 2010 to gather data for the National Youth in Transition Database—a data collection required by the Foster Care Independence Act of 1999. States are required to survey foster care youths at ages 17, 19, and 21 to collect data on the services provided to, and outcomes of, youths in the foster care system. The survey includes a question asking youths if they have experienced homelessness over the relevant time period; however, as previously noted, the social stigma attached to the word homeless often limits self-identification. 
States administer USDA’s SNAP program, document whether a person or family is homeless, and report a sample of data to USDA, which uses the data to assess the accuracy of eligibility decisions and benefit calculations. A number of other programs require that grantees report aggregate data to their funding agency on the number of persons experiencing homelessness that they served:
- Head Start grantees report the number of homeless families served annually to HHS.
- Health Center Program grantees collect limited data on the homelessness status of program participants and report the total number of participants known to be homeless to HHS.
- Community Mental Health Services Block Grant grantees collect and submit data to HHS on persons served by the program, including “homeless or shelter.”
- The Ryan White HIV/AIDS Program collects and reports to HHS limited aggregate data on the living arrangements (permanent, homeless, transient, or transitionally housed) of clients served.
HHS has numerous other mainstream programs that provide funds to states to provide services to certain low-income populations, including those experiencing homelessness, but data collection and reporting on homelessness or housing status varies by program and across states. Medicaid and TANF are the two largest programs, but states are not required to collect or submit information to HHS on the number of individuals or families experiencing homelessness that they served. States determine eligibility requirements and develop program applications for TANF and Medicaid. A recent HHS study that surveyed all the states found that all states collected minimum housing status data on their Medicaid and TANF applications, such as home address and whether the applicant resides in public or subsidized housing, a long-term care facility, or a medical or rehabilitation facility. 
Twenty-eight states collected indicators of homelessness—such as whether an individual resides in a shelter, stays in a domestic violence shelter, or has a permanent home—on their applications. Thirteen states collected information on risk factors often associated with homelessness—such as whether an individual lives with friends or relatives, or has an eviction notice—on their applications. However, these states did not collect this information using consistent definitions and used the data in limited ways. According to the HHS report, most states responding to HHS’s survey said that they did not know whether they had procedures in place to improve the quality of the items collected and thus how complete their homelessness data were. Additionally, while data on homelessness indicators and risk factors resided in statewide databases in many states, the data were not routinely confirmed or verified, making it unclear how reliable the data might be for analysis of homelessness. Further, as previously discussed, homelessness status changes over time, and data collected at one point in time may not accurately capture these changes. Nonetheless, in Michigan, New York City, and Philadelphia, researchers and state officials have been able to use identifying data in mainstream databases to match data in HMIS, and have thus been able to identify patterns in mainstream service usage for homeless populations. Several other mainstream programs provide services for persons experiencing homelessness, but do not provide aggregate or individual-level data on homeless clients served. The Community Services Block Grant, Social Services Block Grant, Maternal and Child Health Block Grant, and the Children’s Health Insurance Program all provide HHS with regular program reports. However, these reports do not include data on the number of clients experiencing homelessness or other housing status data. 
Although child welfare agencies often collect data on housing status and stability in the process of reviewing family reunification cases, this information is not reported to HHS. Community Development Block Grants often fund services that may benefit those experiencing homelessness, but grantees do not track the number of homeless served by the program. Additionally, local PHAs can give preferences to individuals and families experiencing homelessness; however, PHAs do not have to submit data on these preferences to HUD. HUD sampled PHAs administering the Public Housing and Housing Choice Voucher programs to determine how many of them have a preference for those experiencing homelessness. The analysis showed that in 2009, approximately 27 percent of all PHAs had a homeless preference. Finally, agencies have not always consistently collected or analyzed data on housing stability or homelessness because these are not the primary purposes of their programs. In addition, data collection may be expensive, and agencies must weigh the costs and benefits of getting more detailed information. Collecting data on homelessness or housing status for programs such as TANF and Medicaid could be further complicated by the need to work with 50 different state offices to implement a new data collection effort. However, HHS recently reported that of the 28 states that do collect homelessness data, almost all of them indicated that it is not burdensome or costly to collect such data, and about half of the states that collect data said they would comply with requests to make some homelessness data available to HHS for research purposes. Yet even among the willing states, there were some concerns about resource constraints for responding to such requests and concerns about the reliability of the data. 
However, not having complete and accurate data limits the understanding of the nature of homelessness—a better understanding of which could be used to inform programs and policies designed to improve housing stability and thus reduce homelessness. The 45 research studies analyzing factors associated with homelessness that we reviewed used different definitions or measurements of homelessness, although many of the studies used definitions or measures that were more closely aligned with the McKinney-Vento Individual definition than with the broader McKinney-Vento Children and Youth definition (see appendix II for a list of the 45 studies). As a result, study findings are difficult to compile or compare. In the absence of a consistent definition and measurement, “homelessness” can mean or designate many conditions. For example, homelessness can refer to long-term homelessness, short stays in shelters, living in nontraditional housing, or living with relatives, friends, or acquaintances. These definitional differences especially limited research on some specific populations, such as “runaway or homeless” youths. The research we reviewed also varied in how it defined and measured the factors that may be associated with the likelihood of experiencing homelessness. For instance, studies that examined families and youths used different definitions or, in some cases, failed to clearly define what they meant by families and youths. Several studies measured variables such as marital status, social or family support, or domestic violence differently. For example, in assessing relationships between family structure and homelessness, one study examined whether a father of a child was cohabitating with a woman, while another study looked at whether the individual was presently married, although it is possible the two categories overlapped. Studies also used various age categories to define youths, including under 17, from 14 to 23, or from 12 to 22. 
In addition, some studies did not consider factors that figured prominently in other studies, such as the economic conditions of the surrounding area or how childhood experiences influenced later episodes of homelessness. To contribute to a broad-based and reliable understanding of what factors are associated with the likelihood of experiencing homelessness, studies we reviewed and experts with whom we spoke noted that research would need to use data that accurately reflect the population studied, track the same individuals or families over time, and consider a broad population over diverse locations. Further, such studies would need to consider a range of both structural factors, such as area poverty level, and individual factors, such as a person’s age. However, the majority of the studies we reviewed did not meet these criteria. As a result, the body of literature we reviewed cannot be used to predict with accuracy who among those at risk of homelessness would likely experience it. Studies we reviewed used samples from several types of data, such as providers’ administrative databases or surveys, but were not always able to ensure that data accurately reflected the population they studied. Approximately half of the studies used information from administrative records or other service-oriented data, such as standardized self-assessments. The remaining studies used information collected in interviews, surveys, or questionnaires. Studies using administrative data may be especially vulnerable to biased sampling or undercounting of street homeless populations because of the myriad issues described previously, such as collecting data only on those receiving services. Some researchers noted that data from secondary sources such as administrative data may be less accurate than data collected by research staff and targeted for research purposes. 
However, survey data collected for research purposes also are subject to undercounting and biased sampling, because populations experiencing homelessness are difficult to reach. Because people move in and out of homelessness and experience it for different periods, studies we reviewed and experts with whom we spoke noted that data would need to be collected on the same individuals or families over time to more clearly identify which factors could lead to an episode of homelessness or help determine homelessness experiences over longer periods. Like HUD’s point-in-time homeless counts, these studies more likely captured those individuals or families who had been homeless for long periods as opposed to those who experienced it episodically or for short periods, and thus do not give a clear understanding of factors associated with homelessness. These studies also could not determine whether factors associated with being homeless at a point in time caused homelessness. For example, one study found an association between poor physical health and homelessness, but could not say whether poor physical health contributed to experiencing homelessness or whether homelessness contributed to or worsened physical health. Nineteen studies in our review used data that did follow individuals or families over time. However, several of these used administrative data that suffered from the shortcomings described previously, followed individuals or families for relatively short periods, or considered populations in narrow geographic locations. A few studies also used national databases such as the Fragile Families and Child Wellbeing Study and one used the 1997 National Longitudinal Survey of Youth that annually tracks a sample of youth and their parents over time. 
In addition, most of the studies we reviewed defined their target populations—or the group of people to whom findings can be generalized—narrowly, making it difficult to generalize results to broader populations or to compare or compile them. Much of the research we reviewed focused on small subsets of the population experiencing homelessness in smaller geographic regions, such as those with mental illness or substance abuse problems in a single shelter or city. For example, one study published in a journal on Community Mental Health focused on African Americans admitted to a state psychiatric hospital in New York, and another study published in a journal on youth and adolescence looked at youths aged 14 to 21 years who needed the services of a homeless drop-in center. In part, the target groups studied reflected the wide variety of disciplines—psychology, public policy, public health, and economics—of those conducting the studies. Although researchers have argued that it is necessary to consider structural or macro-level factors (such as employment rate, surrounding poverty level, and availability of affordable housing) as well as individual-level factors to arrive at a full understanding of which factors are associated with the likelihood of experiencing homelessness, only about one-third of the studies we reviewed considered these factors. Structural factors help to explain the prevalence of homelessness across a wider setting, while individual-level factors may explain the immediate circumstances surrounding an episode of homelessness. In addition, over three-quarters of the service providers, researchers, advocates, and government agency officials we interviewed identified a structural factor—the lack of affordable housing—as a major barrier to serving those experiencing homelessness. 
However, most of the studies did not look at structural factors and focused on individual-level factors such as demographic characteristics, individual income, the presence of a mental illness, or substance abuse. Because the majority of the studies that we reviewed examined populations in one or a few cities, it was not possible for them to examine the role played by structural factors, such as unemployment rates and surrounding poverty levels, in a wider context. Although the studies we reviewed had limitations that posed challenges for drawing comparisons and often focused on narrow populations in smaller areas, we identified two that tracked homeless families over time and considered structural and individual-level factors across wide geographical areas. One study that defined homelessness as living in a shelter, on the street, or in an abandoned building or automobile, but also considered the population that was doubled up, examined factors associated with individual and family homelessness using nationwide data from the Fragile Families and Child Wellbeing database, which was collected over several years. The study analyzed data on mothers when their children were one and three years old. One hundred and twenty-eight mothers reported experiencing homelessness at the one-year birthday, while 97 reported being homeless when their child turned three. A larger number of mothers reported being doubled up—343 at their child’s one-year birthday and 223 when their child turned three. The study found that the availability of affordable housing—a structural factor—reduced the odds of families experiencing homelessness and doubling up. A number of individual-level factors were associated with experiencing homelessness or doubling up. Specifically, access to small loans and childcare, having a strong family and friend support network, and living longer in a given neighborhood were associated with lowered odds of experiencing homelessness. 
Additionally, receiving public assistance reduced the likelihood that someone would live doubled up. Another study considered families homeless if they were living on the street, in temporary housing, or in a group home, or had spent at least one night in a shelter or other place not meant for regular housing in the past 12 months. This study, which used the Fragile Families and Child Wellbeing database, found that families with higher incomes who received housing assistance had a reduced likelihood of experiencing homelessness. Physical and mental health problems, reports of domestic violence, and substance abuse issues appeared to place them at greater risk for homelessness. Receipt of TANF and poorer surrounding economic conditions—a structural factor—also were positively related to the likelihood of experiencing homelessness but, according to the authors, likely were proxies for individual need and lack of income rather than directly associated with homelessness. Two other studies looked at the association of structural factors and rates of homelessness across geographical areas over time, but did not track specific individuals or families: One nationwide study that tracked homelessness rates over time primarily examined how structural factors affected rates of homelessness. The study found that relatively small changes in housing market conditions could have substantial effects upon rates of homelessness or the numbers of persons in shelters. The results imply that relatively small increases in housing vacancy rates, combined with small decreases in market rents, could substantially reduce homelessness. Another study that focused on the impact of structural factors on homelessness in 52 metropolitan areas found that poverty levels strongly related to the number of persons experiencing homelessness in an area. 
No other structural factors—such as unemployment rates, the number of government benefit recipients, or availability of affordable housing in the area—were found to be statistically significant predictors of homelessness. Together, the four studies underlined the importance of structural factors and identified some individual factors associated with homelessness; however, they did not address some issues of importance. None addressed the extent to which childhood experiences were associated with adult homelessness, and only one examined those living in doubled up situations. We reviewed 11 other studies that examined how childhood experiences were associated with experiencing homelessness in adulthood; however, these studies generally relied on people’s recollections. While the studies used varying methodologies and definitions of homelessness and other factors, most highlighted the influence of early childhood experiences on the likelihood of later experiencing homelessness. Results varied by study, but several studies found that factors such as running away from home, being in foster care, having a dysfunctional family, or being sexually molested as a child increased the odds that an adult would experience homelessness. Similarly, in 1996, the National Survey of Homeless Assistance Providers and Clients found that homeless adults reported many significant adverse childhood experiences. That survey did not compare those experiencing homelessness with those that were not. However, the findings from the studies we reviewed that did compare the two groups generally were consistent with the survey’s findings. Conversely, another study found that a range of childhood experiences (including residential stability; adequacy of income; dependence on public assistance; family violence; and parental criminality, mental illness, or substance abuse) were not significantly associated with adult homelessness. 
Recognizing that the relationships between living doubled up or in shelters or on the street are important to understanding homelessness, we identified a few studies that analyzed whether doubling up could predict future time spent in a shelter or on the street, or that measured differences at a point in time between those living doubled up and those living in shelters or on the street. However, the results of the studies were inconclusive. Of the two that examined how doubling up affects later homelessness in a shelter or on the street, one found that it was significant and the other found it was not significant. Of the four studies that compared persons on the street or in a shelter with those doubled up, two found few differences in demographic characteristics or backgrounds. A third found some differences between the two groups. For example, receiving public assistance lowered the chance of doubling up but was not significantly associated with homelessness. The fourth study found significant differences between doubled up and homeless mothers. Doubled up mothers were more likely to be younger and working and to have high school degrees, fewer children, and more relatives who could help with finances, housing, and child care. Among the majority of the advocates, government officials, service providers, and researchers we interviewed that identified differences in definitions of homelessness as an important barrier to providing services, several noted that families and youths living in some precarious situations were not eligible for federal assistance under a narrow definition of homelessness. Some said that families and youths who were doubled up or living in hotels because of economic hardship often had similar or greater needs for services than those who met narrower definitions, but were being excluded from receiving government-funded services. 
For example, those working in educational programs that have broader federal definitions of homelessness noted that those who do not meet the narrow definition have difficulty accessing housing services. One of the school liaisons we visited described visiting a house with a caved-in floor and no front door. This family met the criteria of substandard housing under the McKinney-Vento Children and Youth definition of homelessness, but it is unclear whether the house would have been considered abandoned or condemned, and if the family would have qualified as homeless under the narrower individual definition prior to the HEARTH Act. According to a research study presented at the HUD-HHS homelessness research symposium in 2007, a formal condemnation process for substandard properties does not typically exist in rural areas, and, as a result, properties that would meet the HUD definition of abandoned because they have been condemned in urban areas may not meet that definition in rural areas. HHS provides grants for Head Start programs to collaborate with others in the community to provide services for children and their families; however, officials noted that in the 2009 program year, less than half of the families in Head Start who experienced homelessness acquired housing. HHS has attributed this to a lack of affordable housing and long waiting lists for housing assistance. However, officials for at least one service provider said that the waiting list for housing assistance in their city was much longer for those that do not meet the narrow definition of homelessness. Many of those involved in homeless programs with whom we spoke were particularly concerned about the exclusion of families and youths from programs that addressed the needs of chronically homeless individuals— those unaccompanied individuals who have disabilities and have been continuously homeless for a year or homeless four times in the last 3 years. 
Before the passage of the HEARTH Act, families that otherwise met the criteria for chronic homelessness programs were not able to participate because chronic homelessness was defined to include only unaccompanied individuals. People in all of the categories we interviewed noted that the emphasis on funding programs for chronic homelessness has meant that families have been underserved. A youth service provider further noted that youths effectively were excluded from programs for those experiencing chronic homelessness because youths generally did not live in shelters or keep records of where they had been living or for how long. Those that cited differences in definitions as a barrier said that families and youths with severe shelter needs had to be on the street or in shelters to access some federally-funded homeless assistance, but shelters were not always available or appropriate for them. Researchers we interviewed noted that families have to obey a number of rules to stay in a shelter and families with the greatest challenges might be less able to follow those rules. Additionally, some facilities do not provide shelter for males above a certain age, so that couples or families with male teenage children may not be able to find shelter together. Similarly for youths, a researcher and a service provider suggested that adult shelters were not appropriate for unaccompanied youths or young adults, and shelters specifically for them were very limited. Some of the people we interviewed also noted that some narrow definitions of homelessness limited services that could be provided to individuals experiencing homelessness. For example, getting one service sometimes precluded individuals from getting another service for which they would otherwise have qualified. 
Officials at DOL told us that if veterans obtain housing vouchers through HUD-VASH, they no longer meet the narrow statutory definition of homelessness under which they would be eligible for job training funded by the Homeless Veterans Reintegration Program (HVRP). However, if veterans first apply for HVRP and then for vouchers, they can qualify for both programs. Similarly, those in transitional housing programs cannot be considered eligible for programs serving those experiencing chronic homelessness even if they meet the other requirements, such as being homeless for a year and having a disability. In addition, although HUD has recognized in its documents that helping people make successful transitions to the community as they are released from foster care, jails, prisons, and health care, mental health, or substance abuse treatment facilities requires systems to work together to ensure continuity of care and linkages to appropriate housing and community treatment and supports, the definitions of homelessness may hinder these transitions. In August 2009, one advocate noted that HUD’s definition of homelessness includes those that spend 30 days or less in prison if they had been homeless prior to entering prison, but those spending more than 30 days cannot be considered homeless until the week before their release. The advocate said that this limits the incentive for prison staff to work with homeless service providers to allow for a smooth transition from prison to housing and that if an individual leaving prison spends time on the street or in an emergency shelter, the likelihood of recidivism increases. Some of those arguing for a broader definition also have said that the definition of homelessness should not depend on available funding. Officials at one large service provider said that broadening the definition would not necessarily spread a fixed amount of resources across a larger group. 
Instead, targeting resources to specialized populations more effectively and concentrating on earlier intervention and prevention could lower the cost of serving individual clients. However, they also noted that this might require a better understanding of the needs of particular subgroups experiencing homelessness. Some local officials, homeless service providers, and researchers noted that choosing between a narrow or a broad definition of homelessness was less important than agreeing on a single definition, because multiple definitions made it more difficult or costly to provide services and created confusion that sometimes led to services not being provided to those legally eligible for them. Many researchers, government officials, and advocates with whom we spoke noted the importance of combining services and housing to meet the needs of those experiencing homelessness, and some of these noted that this was more difficult and costly when programs defined homelessness differently. They also noted that obtaining funding for services from sources other than HUD has become more necessary because the proportion of HUD funding for services has declined. Officials at HUD noted that this was a result of HUD having provided incentives to communities to increase the ratio of housing activities to supportive service activities in their funding applications to encourage the development of more housing resources for individuals and families experiencing homelessness. Not only do some targeted programs that provide services use different definitions of homelessness, but some state and local grantees receiving federal funds under mainstream programs that can be used to provide certain services for those experiencing homelessness (such as TANF) develop their own local definitions of homelessness. 
Officials at a lead Continuum agency said that having these different definitions makes putting together funding for permanent supportive housing—the best solution for ending chronic homelessness—especially difficult. Officials at two entities that provide service to and advocate for those experiencing homelessness noted that, given the multiple definitions, scarce resources that could have been used to provide services instead went to eligibility verification. Furthermore, many of those involved in activities related to homelessness said that having multiple definitions created confusion, and government officials overseeing programs that use a broader definition and a service provider in one of these programs noted that this confusion could lead to services not being provided to those that are eligible for them. A school liaison and a youth service provider said that school administrative personnel often apply a narrower definition of homelessness than McKinney-Vento Children and Youth and thus may deny students access to services to which they are entitled. Additionally, Education has cited a state education agency for local education agencies’ failure to identify, report, and serve eligible homeless children and youths, including youths in doubled-up situations who meet the broader definition of homelessness. Similarly, officials at HHS acknowledged that Head Start programs across the country sometimes were not using the appropriate definition of homelessness to identify children who qualified for those services. As a result, some homeless families would not be receiving Head Start services. However, some government officials, researchers, advocates, and service providers thought that having multiple or narrow definitions of homelessness had certain benefits. 
Some HHS officials in programs that address homelessness and others noted that having multiple definitions of homelessness allowed programs to tailor services and prioritize them for specific populations. HUD officials and some researchers and advocates said that having a narrow definition for homeless programs that provide shelter for specific populations and broader definitions for programs such as those designed to serve the educational needs of children and youths allowed programs to meet their goals best. HUD officials noted that having a broader definition for certain education programs is appropriate because those that meet the definition are entitled to the service, and the program does not provide housing. Alternatively, it is appropriate for programs such as HUD’s to have a narrower definition because their services are not entitlements and must target those most in need, such as those that are chronically homeless. HUD, HHS, VA, and DOL began redirecting resources to this narrowly defined group in 2003, and according to HUD point-in-time data, chronic homelessness fell by approximately 27 percent from the January 2005 count to the January 2008 count. HUD, HHS, and VA focused on this group, in part, because a research study had shown that they used an inordinate amount of shelter resources. One researcher noted that having a precise definition was essential to ensure that the same kinds of people are being counted as homeless in different locations, which would be important for measuring program outcomes. Supporters of a narrow definition also said that if the definition were broadened, limited resources might go to those who were easier to serve or had fewer needs, specifically to those families with young children who were doubled up rather than to those identified as chronically homeless. 
Finally, some advocates for those experiencing homelessness and government officials overseeing programs targeted at those experiencing homelessness noted that if the definition of homelessness were broadened for some programs without an increase in resources, many of those that would become eligible for services would not get them. In the HEARTH Act, Congress provided a broader definition of homelessness for those programs that had been serving individuals and families and using the McKinney-Vento Individual definition; however, it is still not as broad as the McKinney-Vento Children and Youth definition, so different definitions will still exist when the HEARTH Act is implemented. In addition, the HEARTH Act mandated that the Interagency Council convene experts for a one-time meeting to discuss the need for a single federal definition of homelessness within 6 months of the issuance of this report. However, having one definition of homelessness would not necessarily mean that everyone who met that definition would be eligible for all homeless assistance programs or that those not defined as homeless would be ineligible. Some of the people we interviewed suggested alternatives—one based on a narrow definition of homelessness and others based on a broader definition. For example, one local official suggested defining homelessness using the narrow McKinney-Vento Individual definition and defining another category called “temporarily housed” that would include those who are doubled up or in hotels. While some programs might only be open to those experiencing homelessness, others such as the Education of Homeless Children and Youth program could be open to both groups. Alternatively, one researcher directed us to a classification scheme developed by the European Federation of National Associations Working with the Homeless. 
Under that classification scheme, homelessness was defined broadly as not having a suitable home or one to which a person was legally entitled, but then a typology was created that defined subcategories of living situations under headings such as “roofless” or “inadequate” that could be addressed by various policies. Officials at a large service provider we interviewed made similar distinctions, saying that it is best to think of people as experiencing functional homelessness—that is, living in situations that could not be equated to having a home—rather than to think of them as literally homeless or doubled up. However, these officials said that subcategories of need would have to be developed based on a better understanding of homelessness, because all persons experiencing homelessness should not be eligible for the same services. In 2007, HHS convened a symposium to begin discussing the development of a typology of homeless families, and in May 2010, it convened about 75 federal and nonfederal participants to discuss issues related to children experiencing homelessness. The lack of affordable housing (whether housing was not available or people’s incomes were not high enough to pay for existing housing) was the only barrier to serving those experiencing homelessness cited more frequently by researchers, advocates, service providers, and government officials we interviewed than definitional differences. Some researchers have shown that more housing vouchers could help eradicate homelessness, but a research study also has shown that generally federal housing subsidies are not targeted to those likely to experience homelessness. Those with certain criminal records or substance abuse histories may not be eligible for federal housing assistance, and these factors sometimes are associated with homelessness. 
Although certain federal programs target vouchers to those who are most difficult to house, local service providers may still refuse to serve those who have been incarcerated or have substance abuse problems. For example, while the HUD-VASH program is to be available to many of these subpopulations, HUD officials and others told us that local service providers still refuse to serve them. In addition, while HUD estimates that 27 percent of PHAs have preferences for those experiencing homelessness, many of them restrict these programs to those who may be easier to serve. Service providers, advocates, researchers, and government officials that we interviewed also cited eligibility criteria for mainstream programs as a main barrier to serving those experiencing homelessness. In 2000, we reported on barriers those experiencing homelessness faced in accessing mainstream programs, and this is a continuing issue. To obtain benefits, applicants need identification and other documents, which those experiencing homelessness often do not have. Without documentation, they sometimes cannot enter federal and state buildings where they would need to go to get documentation or obtain benefits. Those that cited access as a barrier particularly noted difficulties with SSI/SSDI programs. Service providers and government officials noted that those experiencing homelessness may not receive notices about hearing dates or other program requirements because they lack a fixed address. At least one researcher told us that an initiative, SSI/SSDI Outreach, Access and Recovery (SOAR), has improved performance. The initiative’s Web site says that those experiencing homelessness normally have a 10–15 percent chance of receiving benefits from an initial application, but that SOAR has increased success to 70 percent in areas it serves. However, one local agency in an area served by SOAR told us in January 2010 that most applicants were rejected initially. 
Some of those we interviewed also noted that Medicaid applicants have some similar problems. For example, one advocate noted that it is difficult for those experiencing homelessness to get through the application process and, when necessary, prove disability; however, because Medicaid is a state-run program, these problems are worse in some states than in others. Another provider noted that Medicaid requires that information be periodically updated, and those experiencing homelessness may not receive notices of this. As a result, they may lose their benefits and be required to travel a long distance to get them reinstated. Finally, service providers said that PHAs often restrict federal housing assistance to those without substance abuse issues or certain criminal records and that programs generally have long waiting lists. Because homelessness is a multifaceted issue and a variety of programs across a number of departments and agencies have been designed to address it, collaborative activities are essential to reducing homelessness in a cost-effective manner. In prior work, we have determined that certain key activities, such as setting common goals, communicating frequently, and developing compatible standards, policies, procedures, and data systems, characterize effective interagency collaboration. In addition, we found that trust is an important factor for achieving effective collaboration. Efforts to address homelessness often have stressed the need for local, communitywide collaboration. For instance, entities applying to HUD for Homeless Assistance Grants have to come together as a Continuum to file applications. Other agencies or individuals, such as the school systems' homeless liaisons, also are required to coordinate activities in the community. 
In addition, from 2002 to 2009, Interagency Council staff encouraged government officials, private industry, and service providers to develop 10-year plans to end homelessness or chronic homelessness and provided tools to communities to assist with the development of these plans. Many communities have developed these plans, but whether plans have been implemented or have been achieving their goals is unclear. The Interagency Council reports that 332 of these plans have been drafted. All of the locations we visited had drafted plans at the state or local level; however, in two of the four sites—California and South Carolina—plans that had been drafted had not been adopted by appropriate local or state government entities and thus had not been implemented. Some of the people with whom we spoke said that differences in definitions of homelessness limited their ability to collaborate effectively or strategically across communities. Local officials or researchers in three of the four locations we visited noted that certain elements of collaboration were difficult to achieve with different definitions of homelessness. In one location we visited, local agency officials who had extensive experience with a broad range of homelessness programs and issues noted that multiple definitions impeded those involved in homelessness activities from defining or measuring a common problem and were a major obstacle to developing measures to assess progress in solving the problem. Further, they noted that the trust of the local community in officials' ability to understand the problem of homelessness was eroded when recent point-in-time counts showed that numbers of families experiencing homelessness under one definition declined while the number of families receiving homeless services in other programs that defined homelessness more broadly increased. 
In two other locations, local government officials and a researcher involved in evaluating local programs said that having multiple definitions of homelessness impeded their ability to plan systematically or strategically for housing needs or efforts to end homelessness at the community level. Congress also recognized the importance of federal interagency collaboration when it authorized the Interagency Council in the original McKinney-Vento Act and reauthorized it in the HEARTH Act. Some of the people we interviewed further noted that collaboration among federal programs was essential because addressing homelessness required that those in need receive a holistic package of services that might encompass the expertise and programs of a number of agencies. They also said that collaboration was necessary to prevent people from falling through gaps created by certain events, such as entering or leaving hospitals or prisons, aging out of foster care or youth programs, or otherwise experiencing changes in family composition. Further, they noted that, with HUD’s emphasizing housing rather than services in its funding priorities, the need for effective collaboration was greater now than in the past. Finally, officials at HUD, HHS, and Education noted that at a time of budget austerity collaboration among agencies was an effective way to leverage scarce resources. While we noted in 1999 and again in 2002 that homeless programs could benefit from greater interagency coordination, many of the government officials, researchers, advocates, and service providers we interviewed who were knowledgeable about multiple federal agencies said that collaboration among federal programs and agencies had been limited or did not exist at all. 
Generally, those we interviewed in our current work said that, from 2002 to 2009, the Interagency Council had focused on that part of its mission that required it to foster local collaboration rather than on that part that required it to foster collaboration among federal agencies. In addition, some of those we interviewed said that federal program staff had focused largely on their own requirements and funding streams rather than on collaborative approaches to addressing homelessness. In 1994, the Interagency Council issued an interagency plan to address homelessness that called for federal agencies to streamline and consolidate programs, when appropriate, and introduced the concept of a Continuum of Care, but did not include any longer-term mechanism to promote interagency collaboration, such as joint funding of programs. Following issuance of this plan, the Interagency Council did not again receive funding until 2001, although it did undertake some joint activities including coordinating and funding a survey of service providers and persons experiencing homelessness. In 2002, an executive director was appointed and, according to some of those involved with the Interagency Council, the council turned its attention largely to helping communities draw up 10-year plans to end chronic homelessness. In the HEARTH Act, Congress called on the Interagency Council to develop a strategic plan to end homelessness that would be updated annually, and in November 2009, a new executive director took office. In preparation for the strategic plan and in response to new staffing and funding at the Interagency Council and elsewhere, agencies and the Council appear more focused on interagency coordination. The Interagency Council issued its strategic plan on June 22, 2010. The plan says that it is designed to neither embrace nor negate any definition of homelessness being used by a program. Federal agencies have also not collaborated effectively outside the Interagency Council. 
Those we interviewed noted that agencies have focused on their own funding streams and have not coordinated dates for applying for grants that could be combined to provide housing and services for those experiencing homelessness. Service providers must apply for grants at different times, and grants run for different periods and have different probabilities of being continued. A provider might receive funding to build permanent housing but might not receive funding needed for certain support services, or vice versa. One group knowledgeable about an array of housing programs said that recently an HHS grant tried to link its funding to HUD's, but a lack of full collaboration between the agencies created confusion and discouraged some service providers from applying for the HHS grant. The HHS grant required that applicants have an executed grant from HUD when they applied for the HHS grant. However, HHS applications were due before HUD had executed any of its grants. HHS officials then relaxed their grant criteria, saying that they would evaluate the lack of an executed grant contract with HUD on a case-by-case basis. HUD officials said that the grant criteria were relaxed to include recognition of HUD's conditional grant award letters. Two groups with whom we spoke also noted that funding from multiple agencies often focused on demonstration projects and that grant processes for these also were not well coordinated and funding ended abruptly. Officials at HUD noted that lack of coordination on grants across agencies is likely the result of the statutes that authorize programs and agency regulations that implement them. Some of the service providers, advocates, and government officials we interviewed cited specific examples of successful programmatic collaboration, such as the HUD-VASH program, and federal agency officials directed us to a number of initiatives that illustrate a greater emphasis on interagency collaboration. 
HUD officials noted that they have been partnering with HHS and VA to improve and align their data collection and reporting requirements for federally-funded programs addressing homelessness. For example, HUD and HHS announced in December 2009 plans to move toward requiring that HHS’s PATH program use HMIS for data collection and reporting for street outreach programs. They noted that the agencies had agreed to align reporting requirements by establishing common outputs and performance outcomes. The plan called for HHS to begin providing technical assistance and training activities for PATH programs on individual-level data collection and reporting and alignment with HMIS in 2010, and to seek approval for a revised annual report to include HMIS data in 2011. In February 2010, officials from HUD, HHS, and Education—key agencies for addressing homelessness for nonveterans—outlined proposals on homelessness included in the proposed FY 2011 budget. These included a demonstration program that combines 4,000 HUD housing vouchers with HHS supportive services and another program that calls for HUD, HHS, and Education to be more fully engaged in stabilizing families. The latter proposal calls for HUD to provide 6,000 housing vouchers on a competitive basis. We also found that federal agency staff did not effectively collaborate within their agencies. For example, in January 2010, staff at one of HUD’s field offices told us that while collaboration between those involved in the Homeless programs and those involved in Public Housing programs would be beneficial, any coordination between these two HUD programs was “haphazard.” In February 2010, the Assistant Secretaries for the Offices of Public and Indian Housing and Community Planning and Development, which includes homeless programs, reported that they are meeting weekly and looking for ways to better coordinate programs. 
In another example, staff at HHS who developed the National Youth in Transition Database, which includes looking at experiences with homelessness, had not consulted with staff in the Family and Youth Services Bureau, who administer the Runaway and Homeless Youth Programs and generally were recognized as having some expertise on youths experiencing homelessness. Finally, we observed that while coordination has been limited, it was more likely to occur between those parts of agencies that were using a common vocabulary. For example, state McKinney-Vento education coordinators and local education liaisons are required to coordinate with housing officials and providers in a number of ways; however, the McKinney-Vento Homeless Education Program coordinator in one of the states we visited said that while she has coordinated locally with staff from Head Start, an HHS program that also uses the McKinney-Vento Children and Youth definition of homelessness, she has found it very hard to coordinate with local HUD staff that use a different definition of homelessness, because they did not see how the education activities relate to their programs. In addition, those agencies that have agreed on a definition of chronic homelessness—HUD, HHS, DOL, and VA—have engaged in some coordinated efforts to address the needs of those that met the definition. For many years, the federal government has attempted to determine the extent and nature of homelessness. As part of this effort, Education, HHS, and HUD have systems in place that require service providers involved in the homelessness programs they administer to collect data on those experiencing homelessness and report these data in various ways to the agencies. 
However, while the data currently being collected and reported can provide some useful information on those experiencing homelessness, because of difficulties in counting this transient population and changes in methodologies over time, they are not adequate for fully understanding the extent and nature of homelessness. In addition, the data do not track family composition well or contribute to an understanding of how family formation and dissolution relate to homelessness. Further, because of serious shortcomings and methodologies that change over time, the biennial point-in-time counts have not adequately tracked changes in homelessness over time. While these data systems have improved, it still is difficult for agencies to use them to understand the full extent and nature of homelessness, and addressing their shortcomings could be costly. For example, one shortcoming of HUD’s point-in-time count is that it relies on volunteer enumerators who may lack experience with the population, but training and utilizing professionals would be very costly. In part because of data limitations, researchers have collected data on narrowly defined samples that may not be useful for understanding homelessness more generally or do not often consider structural factors, such as area poverty rates, which may be important in explaining the prevalence and causes of homelessness. In addition, because complete and accurate data that track individuals and families over time do not exist, researchers generally have not been able to explain why certain people experience homelessness and others do not, and why some are homeless for a single, short period and others have multiple episodes of homelessness or remain homeless for a long time. However, those who have experienced or might experience homelessness frequently come in contact with mainstream programs that are collecting data about the recipients of their services. 
While homelessness is not the primary focus of these programs, if they routinely collected more detailed and accurate data on housing status, agencies and service providers could better assess the needs of program recipients and could use these data to help improve the government’s understanding of the extent and nature of homelessness. Researchers also could potentially use these data to better define the factors associated with becoming homeless or to better understand the path of homelessness over time. Collecting these data in existing or new systems might not be easy, and agencies would incur costs in developing questions and providing incentives for accurate data to be collected. Collecting such data may be easier for those programs that already collect some housing data on individuals, families, and youths who use the programs and report those data on an individual or aggregate basis to a federal agency, such as HHS’s Substance Abuse Treatment and Prevention Block Grant program or Head Start. For those mainstream programs that do not currently report such data, collecting it may be a state or local responsibility, and the willingness of states to collect the data may vary across locations. For example, HHS has reported that about half of the states that do collect homelessness data do not consider it burdensome to do so through their TANF and Medicaid applications, and would be willing to provide data extracts to HHS for research purposes. States or localities and researchers could find these data useful even if they are not collected on a federal or national level. However, concerns exist about resource constraints and data reliability. Therefore, the benefits of collecting data on housing status for various programs would need to be weighed against the costs. Federal efforts to determine the extent and nature of homelessness and develop effective programs to address homelessness have been hindered by the lack of a common vocabulary. 
For programs to collect additional data on housing status or homelessness or make the best use of that data to better understand the nature of homelessness, agencies would need to agree on a common vocabulary and terminology for these data. Not only would this common vocabulary allow agencies to collect consistent data that agencies or researchers could compile to better understand the nature of homelessness, it also would allow agencies to communicate and collaborate more effectively. As identified in 2011 budget proposals, Education, HHS, and HUD are the key agencies that would need to collaborate to address homelessness, but other agencies that also belong to the Interagency Council—a venue for federal collaborative efforts—such as DOL and DOJ might need to be involved as well. However, agency staff may find it difficult to communicate at a federal or local level when they have been using the same terms to mean different things. For example, agencies might want to avoid using the term homelessness itself because of its multiple meanings or the stigma attached to it. Instead, they might want to list a set of housing situations explicitly. The agencies could begin to consider this as part of the proceedings Congress has mandated that the Interagency Council convene after this report is issued. Once agencies have developed a common vocabulary, they might be able to develop a common understanding of how to target services to those who are most in need and for whom services will be most effective. In addition, with a common vocabulary, local communities could more easily develop cohesive plans to address the housing needs of their communities. To improve their understanding of homelessness and to help mitigate the barriers posed by having differences in definitions of homelessness and related terminology, we recommend that the Secretaries of Education, HHS, and HUD—working through the U.S. Interagency Council on Homelessness—take the following two actions: 1. 
Develop joint federal guidance that establishes a common vocabulary for discussing homelessness and related terms. Such guidance may allow these and other agencies on the Interagency Council on Homelessness to collaborate more effectively to provide coordinated services to those experiencing homelessness. 2. Determine whether the benefits of using this common vocabulary to develop and implement guidance for collecting consistent federal data on housing status for targeted homelessness programs, as well as mainstream programs that address the needs of low-income populations, would exceed the costs. We provided a draft of this report to the Departments of Education, Health and Human Services, Housing and Urban Development, Labor, and Justice and the Executive Director of the Interagency Council for their review and comment. We received comments from the Assistant Secretary of the Office of Elementary and Secondary Education at the Department of Education; the Assistant Secretary for Legislation at the Department of Health and Human Services; the Assistant Secretary of Community Planning and Development at the Department of Housing and Urban Development; and the Executive Director of the Interagency Council. These comments are reprinted in Appendixes III through VI of this report, respectively. The Departments of Labor and Justice did not provide formal comments. Education, HUD, and the Executive Director of the Interagency Council explicitly agreed with our first recommendation that Education, HHS, and HUD—working through the Interagency Council—develop federal guidance that establishes a common vocabulary for discussing homelessness and related terms. HHS did not explicitly agree or disagree with this recommendation. Instead, HHS commented extensively on the advantages of having multiple definitions of homelessness. 
While we discuss the challenges posed by, and the advantages of, having multiple definitions of homelessness in this report, our report recommends a common vocabulary rather than either a single or multiple definitions. In their interagency strategic plan to prevent and end homelessness issued on June 22, 2010, the agencies acknowledge the need for a common vocabulary or language when they say that a common language is necessary for the interagency plan to be understandable and consistent and that this language does not negate or embrace the definitions used by different agencies. Education explicitly addressed our second recommendation that agencies consider the costs and benefits of using a common vocabulary to develop and implement guidance for collecting consistent federal data on housing status for targeted homelessness and mainstream programs in their written response. Education wrote that a discussion of such costs and benefits of using more of a common vocabulary, as it relates to data collection, should be an agenda item for the Interagency Council. The Executive Director of the Interagency Council also supported further exploration of how to accurately and consistently report housing status in mainstream programs. Although we recommend that the agencies work through the council to address this recommendation, decisions about individual program data collection will necessarily be made by the agency overseeing the program. Although HHS did not comment explicitly on our second recommendation, they did provide comments on data collection. They commented that GAO appears to assume that programs identify people who are homeless only to have a total count of the homeless population. We do not make that assumption. We recognize that programs collect data specifically for the program’s use; however, data collected for programs also can contribute to a broader understanding of the extent and nature of homelessness. 
For example, while HMIS has certain shortcomings described in the report, service providers collect HMIS data in some cases to better manage their programs, and HUD also uses those data to attempt to understand the extent and nature of homelessness. HHS also noted that homelessness data systems are costly and complicated to develop and linking them presents challenges. We acknowledge that while collecting more consistent data on housing status for targeted and mainstream programs would have benefits, there would be implementation costs as well. Additionally, HHS, HUD, and the Executive Director of the Interagency Council raised other concerns about this report that did not relate directly to the two recommendations. HHS commented on the history of the National Youth in Transition Database, developed in response to the Chafee Foster Care Independence Act of 1999. HUD commented that the report did not present a complete view of HUD’s data collection and reporting efforts and did not recognize the strides that have been made in this area, the value of the data currently being collected and reported, or that their Annual Homeless Assessment Report is the only national report to use longitudinal data. The Executive Director of the Interagency Council also wrote that the report did not adequately recognize what is possible today that was not possible 5 years ago. The objective of this report was to determine the availability and completeness of data that currently are collected on those experiencing homelessness, not on the extent to which these data have improved over time. In addition, HUD’s data are not longitudinal in that they do not follow specific individuals over time; rather HUD collects aggregated data that track numbers of homeless over time. Nonetheless, in the report we discuss actions that HUD has taken to improve its homelessness data over time and note the inherent difficulties of collecting these data. 
The report also notes that HUD’s point-in-time count represents the only effort by a federal agency to count all of those who are experiencing homelessness, rather than just those utilizing federally-funded programs. HUD made a number of other comments related to their data and the definition of homelessness. HUD commented that the report did not recognize that data collection is driven by statutory definitions or that HUD’s point-in-time and HMIS systems are in some sense complementary. We have addressed this comment in the final report by making it clearer that data collected necessarily reflect the definitions included in the statutes that mandate data collection. We also added a footnote to show that while point-in-time counts focus on those who are homeless for long periods of time, HMIS may capture those who are homeless for shorter periods of time or move in and out of homelessness. HUD also commented that the report did not adequately describe the statutory history of homelessness definitions. We do not agree; the report describes the statutory history to the extent needed to address our objectives. Additionally, HUD commented that the report did not provide proper context about HMIS development and implementation at the local level, adding that a community’s success in using HMIS to meet local needs depends on a variety of factors, such as staff experience and the quality of software selected. We revised the report to acknowledge that a community’s success in using HMIS depends on these other factors. Further, the report acknowledges that in setting HMIS data standards, HUD allowed communities to adapt locally developed data systems or to choose from many other HMIS systems that meet HUD’s standards. Finally, HUD wrote that we attribute the lack of collaboration among federal agencies solely to differences in definitions. 
Similarly, the Executive Director of the Interagency Council wrote that many greater obstacles to effective collaboration exist than the definitional issue—such as “siloed” departmental and agency structures, uncoordinated incentives and measures of effectiveness, difficulties communicating across very large bureaucracies, and different program rules for releasing and administering funds. The report does not attribute the lack of collaboration solely to the differences in definitions. Instead we note that agencies have not collaborated and that having a common vocabulary could improve collaboration. The report focuses on definitional differences, in part, because it was a key objective of our work and an issue frequently raised in discussions of barriers to effectively providing services to those experiencing homelessness. Education, HHS, and HUD also provided technical comments which we addressed as appropriate. We are sending copies of this report to the Secretaries of Education, Health and Human Services, Housing and Urban Development, Labor, and Justice; the Executive Director of the U.S. Interagency Council on Homelessness; and relevant congressional committees. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. 
The objectives of our report were to (1) assess the availability, completeness, and usefulness of data on homelessness collected by federal programs; (2) assess the extent to which research identifies factors associated with homelessness; and (3) analyze how differences in the definitions of homelessness and other factors, such as the level of agency collaboration, may impact the effectiveness of programs serving those experiencing homelessness. To address all of our objectives, we reviewed relevant laws such as the McKinney-Vento Homeless Assistance Act, as amended, and the HEARTH Act, as well as a range of prior GAO reports that addressed homelessness or related issues such as reviews of the Social Security Administration's Supplemental Security Income (SSI) and Social Security Disability Insurance (SSDI) programs. We also reviewed regulations and government reports across a number of programs specifically targeted to address issues related to homelessness as well as mainstream programs, such as Temporary Assistance for Needy Families (TANF), Head Start, and Public Housing, that often provide services to people experiencing homelessness. Finally, we reviewed research on homelessness retrieved during a wide-ranging search of the literature. During our review, we conducted interviews with at least 60 entities, including officials of six federal government agencies, representatives of at least 15 state and local government entities, staff and officials at 27 service providers, 11 researchers, and officials at 10 groups that advocated for positions related to homelessness. These sum to more than the 60 interviews because some entities fall into more than one category. Specifically, we interviewed officials at the Departments of Education (Education), Health and Human Services (HHS), Housing and Urban Development (HUD), Justice (DOJ), and Labor (DOL), and the U.S. Interagency Council on Homelessness (Interagency Council). 
We also conducted in-depth interviews with advocates and researchers, as well as service providers, state and local government officials, and HUD field staff that had extensive experience with homeless programs. Many of our interviews were conducted as part of four site visits to large and medium-sized urban areas that were geographically distributed across the United States. We visited these locations to determine the extent to which views on homelessness were specific to particular locations or regions because of local laws, population concentration, or weather. We chose locations to represent each of the major regions of the United States—the Midwest, Northeast, South, and West—and to reflect differences in population concentration and weather. We chose specific urban areas in part because they had reported recent large changes in homelessness among families—two had seen a marked increase, while a third had noted a decrease. In the fourth location, homelessness had been relatively stable. Using these factors, we chose cities in California, Illinois, Massachusetts, and South Carolina. Generally, we did not consider issues specific to rural areas because Congress had mandated a separate study of them. We chose the specific organizations we interviewed to include a range of activities and views, but did not seek to interview a given number of agencies or individuals in each area or to develop a sample from which we could generalize our findings. We also undertook a number of activities specific to each objective: To address the first objective on the availability, completeness, and usefulness of data on homelessness collected by federal programs, we reviewed statutes, regulations, guidance, technical standards, and reports on federal data from targeted homelessness programs. 
We focused our review of federal data on HUD’s Homeless Management Information System (HMIS) and point-in-time counts, HHS’s Runaway and Homeless Youth Management Information System (RHYMIS), and data submitted to Education through Consolidated State Performance Reports. We interviewed selected service providers to learn about the data systems they use to collect and store information on the homeless populations they serve, the procedures they use to ensure data reliability, and the usefulness of existing data systems for program management and administrative purposes. In addition, we interviewed selected federal, state, and local officials to identify the data used in their oversight of programs for families and individuals who are experiencing homelessness, the procedures they use to verify data reliability, and the extent to which existing data provide sufficient information for program management. Further, we spoke with researchers, individuals with special expertise in federal data systems, and government contractors to determine the reliability and usefulness of existing data sources on the homeless, as well as to identify potential areas for improvement in data on the homeless. We also analyzed estimates of the extent of homelessness that were derived from federal data systems. In determining the reliability of the data for this report, we identified several limitations, which are noted in the report: persons experiencing homelessness are hard to identify and count; other than the point-in-time count, the three federal data sources for targeted homelessness programs primarily capture data on program participants; and duplication can exist because the population is mobile and dynamic.
Nevertheless, because these are the only available data and the relevant departments use them to understand the extent and nature of homelessness, we present the data with their limitations. We also reviewed two HHS reports on homelessness and housing status data collected from federal mainstream programs to determine the availability of such data. We reviewed research that estimated the size of the population that is doubled up with family and friends. We used data from the 2008 American Community Survey to develop our own estimate of the number of people who were experiencing severe to moderate economic hardship and living with an extended family or nonfamily member in 2008. The survey is conducted annually by the U.S. Census Bureau and asks respondents to provide housing and employment income information for their households. We made several assumptions about what constitutes severe or moderate economic hardship. Severe economic hardship was assumed to mean that a household had housing costs of at least 50 percent of household income and household income below 50 percent of the federal poverty line. Moderate economic hardship was assumed to mean that a household had housing costs of at least 30 percent of household income and household income below the federal poverty line. We also made assumptions about what constitutes extended family: we assumed that extended family households were those in which some people in the household were not part of the head of household’s immediate family, and we included spouses, live-in partners, children, grandparents, and grandchildren in our definition of immediate family members.
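The hardship thresholds above amount to a simple classification rule. The sketch below illustrates that rule in Python; the function name, variable names, and dollar figures are ours for illustration and are not ACS variables, and the treatment of zero-income households is our own assumption.

```python
def classify_hardship(housing_cost, income, poverty_line):
    """Classify a household's economic hardship under the report's assumptions.

    All amounts are annual dollars. Returns 'severe', 'moderate', or None.
    """
    if income <= 0:
        # Our assumption: any positive housing cost overwhelms a
        # household with no income at all.
        return "severe" if housing_cost > 0 else None
    # Severe: housing costs at least 50% of income AND income below
    # 50% of the federal poverty line.
    if housing_cost >= 0.5 * income and income < 0.5 * poverty_line:
        return "severe"
    # Moderate: housing costs at least 30% of income AND income below
    # the federal poverty line.
    if housing_cost >= 0.3 * income and income < poverty_line:
        return "moderate"
    return None
```

Because the severe criteria imply the moderate criteria (50 percent exceeds 30 percent, and half the poverty line is below the poverty line), the severe test must be applied first.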
We cannot determine from the available data whether the individuals who are living with extended family or nonfamily members and experiencing severe or moderate economic hardship would meet the McKinney-Vento Children and Youth definition of homelessness, which requires that individuals be doubled up because of economic hardship. As a result, we also cannot determine whether the people in our estimate would be eligible for benefits if the McKinney-Vento Individual definition of homelessness were expanded to include those doubled up because of economic hardship. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95-percent confidence interval, that is, the interval that would contain the actual population values for 95 percent of the samples we could have drawn. To address the second objective, we conducted a literature review to identify research studies that considered factors associated with the likelihood that families, youths, and individuals would experience homelessness. We used various Internet search databases (including EconLit, ERIC, Medline, and ProQuest) to identify studies published or issued after 1998. We chose 1998 as a starting point because welfare reform, which affected some homeless families, had been implemented by that date and may have affected research findings. We also sought to identify additional studies through the persons we interviewed (that is, government officials, researchers, and advocacy groups) and from the bibliographies of the studies themselves. In this initially broad search, we identified more than 600 studies, although we cannot be certain that we captured all relevant research that met our screening criteria.
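For a proportion estimated from a random sample, a 95-percent confidence interval of the kind described above is commonly computed with the normal approximation. The sketch below illustrates the arithmetic; the sample counts in the usage note are illustrative, not the survey's actual figures.

```python
import math

def proportion_ci_95(successes, n):
    """Normal-approximation 95% confidence interval for a sample proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    margin = 1.96 * se               # 1.96: 97.5th percentile of the standard normal
    return p - margin, p + margin
```

For example, if 200 of 2,000 sampled households met a criterion, the point estimate would be 10 percent with an interval of roughly 8.7 to 11.3 percent; any estimate inside that interval is statistically indistinguishable from the point estimate at the 95-percent level.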
We screened the papers we identified using a multilevel process to gauge their relevance and evaluate their methodology. We excluded papers that did not specifically focus on our objective, were published or issued before 1998, lacked quantitative analysis, had a target population sample size of less than 25, did not conduct some form of statistical testing, did not use a comparison or control group or some other means (such as regression analysis) of comparing the target population (the group of persons to whom the research hopes to generalize its findings) with others, focused on homeless populations outside of the United States, or were dissertations. We retained 45 studies after screening and reviewed their methodologies, findings, and limitations. Nine GAO staff (four analysts and five methodologists) were involved in the systematic review of each of the 45 selected studies, which were determined to be sufficiently relevant and methodologically rigorous. More specifically, two staff members (one analyst and one methodologist) reviewed each study and reached agreement on the information entered in the database. As noted in this report, many of these studies are subject to certain methodological limitations, which may limit the extent to which their results can be generalized to larger populations. In some cases, studies did not discuss correlation among the factors and are thus limited in their ability to explain which factors might lead to homelessness. In addition, at least four studies used data that were more than 10 years old at the date of publication. Findings based on such data may be limited in explaining the characteristics and dynamics of current homeless populations.
Further, collecting comparable information from individuals who have not been homeless (a comparison group) is important in determining which variables distinguish those experiencing homelessness from those who are not, and is essential in determining why certain at-risk individuals and families experience homelessness while others do not. Although we generally excluded studies that did not use a comparison or control group to test their hypotheses, several studies in our literature review used a comparison group that was another homeless population rather than a nonhomeless control group. In addition to the literature review, we gathered opinions from researchers, advocates, service providers, and government officials on the factors associated with the likelihood of experiencing homelessness. To address the third objective, we took several steps to develop a list of potential barriers to providing services for those experiencing homelessness. First, we reviewed our prior work on barriers facing those experiencing homelessness. Second, we held initial interviews with researchers, service providers, and government officials in our Massachusetts location, where potential barriers were raised. Third, in conjunction with a methodologist, we developed a list of potential barriers. The list, which included affordable housing, differences in the definitions of homelessness used by various federal agencies, eligibility criteria other than income for accessing mainstream programs, the complexities of applying for grants, and lack of collaboration among federal agencies, as well as a number of other potential barriers, was included in a structured data collection instrument to be used in the remaining interviews. We asked those we interviewed to select the three most important barriers from the list but did not ask them to rank their selections. Interviewees were also able to choose barriers not on the list.
To ensure that interviewees were interpreting the items on the list in the same way that we were, we had interviewees describe the reasons for their choices. We determined the relative importance of the barriers chosen by summing the number of times an item was selected as one of the three most important barriers. When those we interviewed did not choose differences in definitions of homelessness as one of the three main barriers, we asked them for their views on definitional issues, and we asked all those we interviewed about the advantages of having multiple definitions of homelessness. Similarly, for collaboration among federal agencies, we asked those we interviewed about the agencies they worked with and, if they worked with multiple agencies, about their experiences. We also asked for examples of successful interagency collaboration. As previously noted, lack of interagency collaboration was also on the list of barriers. In addition, we interviewed the acting and newly appointed executive directors of the Interagency Council on Homelessness and reviewed certain documents related to their activities; interviewed agency officials at Education, HUD, HHS, DOL, and DOJ; and reviewed agency planning and performance documents to identify coordination with other agencies. We conducted this performance audit from May 2009 to June 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted a review of 45 research studies that analyzed factors associated with homelessness.
Most of the studies we reviewed examined factors associated with the likelihood of entering an episode of homelessness or the rates of homelessness in a given area, while a few examined factors associated with the duration of homelessness. Twenty-nine studies examined adult individuals, 14 studies examined families, and 7 studies examined only youths. To assess factors associated with homelessness, studies used a range of analytical techniques, including measures of the association or correlation between single factors and homelessness, and methods that accounted for some of the interrelationships among factors. The 45 studies are listed below:

Allgood, Sam, and Ronald S. Warren, Jr. “The Duration of Homelessness: Evidence from a National Survey.” Journal of Housing Economics 12 (2003): 273-290.

Anderson, Debra Gay, and M. K. Rayens. “Factors Influencing Homelessness in Women.” Public Health Nursing 21, no. 1 (2004): 12-23.

Bassuk, Ellen L., Jennifer N. Perloff, and Ree Dawson. “Multiply Homeless Families: The Insidious Impact of Violence.” Housing Policy Debate 12 (2001): 299-320.

Bendheim-Thoman Center for Research on Child Wellbeing and Columbia Population Research Center. “Predictors of Homelessness and Doubling-up Among At-risk Families.” Fragile Families Research Brief, no. 43 (August 2008).

Caton, Carol L. M., Boanerges Dominguez, Bella Schanzer, et al. “Risk Factors for Long-Term Homelessness: Findings from a Longitudinal Study of First-Time Homeless Single Adults.” American Journal of Public Health 95 (2005): 1753-1759.

Caton, Carol L. M., Deborah Hasin, Patrick E. Shrout, et al. “Risk Factors for Homelessness among Indigent Urban Adults with No History of Psychotic Illness: A Case-Control Study.” American Journal of Public Health 90 (2000): 258-263.

Collins, Cyleste C., Claudia J. Coulton, and Seok-Joo Kim. Family Homelessness in Cuyahoga County.
White paper published for the Sisters of Charity Foundation, Center on Urban Poverty and Community Development. Cleveland, Ohio: Case Western Reserve University, 2009.

Cousineau, Michael R. “Comparing Adults in Los Angeles County Who Have and Have Not Been Homeless.” Journal of Community Psychology 29, no. 6 (2001): 693-701.

Culhane, Dennis P., and Stephen Metraux. “One-Year Rates of Public Shelter Utilization by Race/Ethnicity, Age, Sex and Poverty Status for New York City (1990 and 1995) and Philadelphia (1995).” Population Research and Policy Review (1999): 219-236.

Culhane, Dennis P., Stephen Metraux, Stephen R. Poulin, and Lorlene M. Hoyt. “The Impact of Welfare Reform on Public Shelter Utilization in Philadelphia: A Time-Series Analysis.” Cityscape: A Journal of Policy Development and Research, U.S. Department of Housing and Urban Development, Office of Policy Development and Research 6, no. 2 (2003): 173-185.

Early, Dirk W. “An Empirical Investigation of the Determinants of Street Homelessness.” Journal of Housing Economics 14 (2005): 27-47.

Early, Dirk W. “The Determinants of Homelessness and the Targeting of Housing Assistance.” Journal of Urban Economics 55 (2004): 195-214.

Early, Dirk W. “The Role of Subsidized Housing in Reducing Homelessness: An Empirical Investigation Using Micro-Data.” Journal of Policy Analysis and Management 17, no. 4 (1998): 687-696.

Eyrich-Garg, Karin M., John S. Cacciola, Deni Carise, et al. “Individual Characteristics of the Literally Homeless, Marginally Housed, and Impoverished in a U.S. Substance Abuse Treatment-Seeking Sample.” Social Psychiatry and Psychiatric Epidemiology 43 (2008): 831-842.

Eyrich-Garg, Karin M., Catina Callahan O’Leary, and Linda B. Cottler. “Subjective Versus Objective Definitions of Homelessness: Are There Differences in Risk Factors among Heavy-Drinking Women?” Gender Issues 25 (2008): 173-192.

Fertig, Angela R., and David A. Reingold.
“Homelessness among at-Risk Families with Children in Twenty American Cities.” Social Service Review 82, no. 3 (2008): 485-510.

Fitzgerald, Scott T., Mack C. Shelley II, and Paula W. Dail. “Research on Homelessness: Sources and Implications of Uncertainty.” American Behavioral Scientist 45, no. 1 (2001): 121-148.

Folsom, David P., William Hawthorne, Laurie Lindamer, et al. “Prevalence and Risk Factors for Homelessness and Utilization of Mental Health Services Among 10,340 Patients with Serious Mental Illness in a Large Public Mental Health System.” American Journal of Psychiatry 162, no. 2 (2005): 370-376.

Greenberg, Greg A., and Robert A. Rosenheck. “Homelessness in the State and Federal Prison Population.” Criminal Behaviour and Mental Health 18, no. 2 (2008): 88-103.

Gubits, Daniel, Jill Khadduri, and Jennifer Turnham. Housing Patterns of Low Income Families with Children: Further Analysis of Data from the Study of the Effects of Housing Vouchers on Welfare Families. Joint Center for Housing Studies of Harvard University, 2009.

Ji, Eun-Gu. “A Study of the Structural Risk Factors of Homelessness in 52 Metropolitan Areas in the United States.” International Social Work 49, no. 1 (2006): 107-117.

Johnson, Timothy P., and Michael Fendrich. “Homelessness and Drug Use: Evidence from a Community Sample.” American Journal of Preventive Medicine 32 (2007): S211-S218.

Kingree, J. B., Torrance Stephens, Ronald Braithwaite, and James Griffin. “Predictors of Homelessness among Participants in a Substance Abuse Treatment Program.” American Journal of Orthopsychiatry 69, no. 2 (1999): 261-266.

Kuhn, Randall, and Dennis P. Culhane. “Applying Cluster Analysis to Test a Typology of Homelessness by Pattern of Shelter Utilization: Results from the Analysis of Administrative Data.” American Journal of Community Psychology 26 (1998): 207-232.

Leal, Daniel, Marc Galanter, Helen Dermatis, and Laurence Westreich.
“Correlates of Protracted Homelessness in a Sample of Dually Diagnosed Psychiatric Inpatients.” Journal of Substance Abuse Treatment 16, no. 2 (1999): 143-147.

Lee, Barrett A., Townsand Price-Spratlen, and James W. Kanan. “Determinants of Homelessness in Metropolitan Areas.” Journal of Urban Affairs 25 (2003): 335-355.

Lehmann, Erika R., Philip H. Kass, Christiana M. Drake, and Sara B. Nichols. “Risk Factors for First-Time Homelessness in Low-Income Women.” American Journal of Orthopsychiatry 77, no. 1 (2007): 20-28.

Metraux, Stephen, and Dennis P. Culhane. “Family Dynamics, Housing, and Recurring Homelessness among Women in New York City Homeless Shelters.” Journal of Family Issues 20, no. 3 (1999): 371-396.

Molino, Alma C. “Characteristics of Help-Seeking Street Youth and Non-Street Youth.” 2007 National Symposium on Homelessness Research, 2007.

O’Flaherty, Brendan, and Ting Wu. “Fewer Subsidized Exits and a Recession: How New York City’s Family Homeless Shelter Population Became Immense.” Journal of Housing Economics (2006): 99-125.

Olsen, Edgar O., and Dirk W. Early. “Subsidized Housing, Emergency Shelters, and Homelessness: An Empirical Investigation Using Data from the 1990 Census.” Advances in Economic Analysis & Policy 2, no. 1 (2002).

Orwin, Robert G., Chris K. Scott, and Carlos Arieira. “Transitions through Homelessness and Factors That Predict Them: Three-Year Treatment Outcomes.” Journal of Substance Abuse Treatment 28 (2005): S23-S39.

Park, Jung Min, Stephen Metraux, and Dennis P. Culhane. “Childhood Out-of-Home Placement and Dynamics of Public Shelter Utilization among Young Homeless Adults.” Children and Youth Services Review 27, no. 5 (2005): 533-546.

Quigley, John M., Steven Raphael, and Eugene Smolensky. “Homeless in America, Homeless in California.” The Review of Economics and Statistics 83, no. 1 (2001): 37-51.

Rog, Debra J., C. Scott Holupka, and Lisa C. Patton. Characteristics and Dynamics of Homeless Families with Children.
Final report to the Office of the Assistant Secretary for Planning and Evaluation, Office of Human Services Policy, U.S. Department of Health and Human Services. Rockville, Md.: Fall 2007.

Shelton, Katherine H., Pamela J. Taylor, Adrian Bonner, and Marianne van den Bree. “Risk Factors for Homelessness: Evidence from a Population-Based Study.” Psychiatric Services 60, no. 4 (2009): 465-472.

Shinn, Marybeth, Beth C. Weitzman, Daniela Stojanovic, and James R. Knickman. “Predictors of Homelessness among Families in New York City: From Shelter Request to Housing Stability.” American Journal of Public Health 88, no. 11 (1998): 1651-1657.

Slesnick, Natasha, Suzanne Bartle-Haring, Pushpanjali Dashora, et al. “Predictors of Homelessness among Street Living Youth.” Journal of Youth and Adolescence 37 (2008): 465-474.

Stein, Judith A., Michelle Burden Leslie, and Adeline Nyamathi. “Relative Contributions of Parent Substance Use and Childhood Maltreatment to Chronic Homelessness, Depression, and Substance Abuse Problems among Homeless Women: Mediating Roles of Self-Esteem and Abuse in Adulthood.” Child Abuse & Neglect 26, no. 10 (2002): 1011-1027.

Sullivan, G., A. Burnam, and P. Koegel. “Pathways to Homelessness among the Mentally Ill.” Social Psychiatry and Psychiatric Epidemiology 35 (2000): 444-450.

Tyler, Kimberly A., and Bianca E. Bersani. “A Longitudinal Study of Early Adolescent Precursors to Running Away.” Journal of Early Adolescence 28, no. 2 (2008): 230-251.

The Urban Institute, Martha R. Burt, Laudan Y. Aron, et al. Homelessness: Programs and the People They Serve: Findings of the National Survey of Homeless Assistance Providers and Clients. 1999.

Vera Institute of Justice, Nancy Smith, Zaire Dinzey Flores, et al. Understanding Family Homelessness in New York City: An In-Depth Study of Families’ Experiences Before and After Shelter. 2005.

Whaley, Arthur L.
“Demographic and Clinical Correlates of Homelessness among African Americans with Severe Mental Illness.” Community Mental Health Journal 38, no. 4 (2002): 327-338.

Yoder, Kevin A., Les B. Whitbeck, and Dan R. Hoyt. “Event History Analysis of Antecedents to Running Away from Home and Being on the Street.” American Behavioral Scientist 45, no. 1 (2001): 51-65.

In addition to the individual named above, Paul Schmidt, Assistant Director; Nancy S. Barry; Katie Boggs; Russell Burnett; William Chatlos; Kimberly Cutright; Marc Molino; Barbara Roesmann; Paul Thompson; Monique Williams; and Bryan Woliner made major contributions to this report.
Multiple federal programs provide homelessness assistance through programs targeted to those experiencing homelessness or through mainstream programs that broadly assist low-income populations. Programs' definitions of homelessness range from including primarily people in homeless shelters or on the street to also including those living with others because of economic hardship. GAO was asked to address (1) the availability, completeness, and usefulness of federal data on homelessness, (2) the extent to which research identifies factors associated with experiencing homelessness, and (3) how differences in definitions and other factors impact the effectiveness of programs serving those experiencing homelessness. GAO reviewed laws, agency regulations, performance and planning documents, and data as well as literature on homelessness, and spoke with stakeholders, such as government officials and service providers, about potential barriers. Federal agencies, including the Departments of Education (Education), Health and Human Services (HHS), and Housing and Urban Development (HUD), collect data on homelessness. However, these data are incomplete, do not track certain demographic information well over time, and are not always timely. HUD collects data and estimates the number of people who are homeless on a given night during the year and the number who use shelters over the course of the year; these estimates include the people who meet the definition of homelessness for HUD's programs, but do not include all of those who meet broader definitions of homelessness used by some other agencies' programs. For example, HUD's counts would not include families living with others as a result of economic hardship, who are considered homeless by Education. 
Data from federally funded mainstream programs such as HHS's Temporary Assistance for Needy Families could improve agencies' understanding of homelessness, but these programs have not consistently collected or analyzed information on housing status because doing so is not their primary purpose. Because the research studies GAO reviewed often used different definitions of homelessness, relied on data collected at a point in time, and focused narrowly on unique populations over limited geographical areas, the studies cannot be compared or compiled to further an understanding of which factors are associated with experiencing homelessness. Furthermore, although researchers GAO interviewed noted the importance of structural factors such as area poverty rates, and although the studies that analyzed these factors found them to be important, few studies considered them. Most of the studies analyzed only the association of individual-level factors such as demographic characteristics, but these studies often did not consider the same individual-level factors or agree on their importance. Many of the government officials, service providers, advocates, and researchers GAO interviewed stated that narrow or multiple definitions of homelessness have posed challenges to providing services for those experiencing homelessness, and some said that having different definitions made collaborating more difficult. For example, some said that persons in need of services might not be eligible for programs under narrower definitions of homelessness or might not receive services for which they were eligible because of confusion created by multiple definitions. Different definitions of homelessness and different terminology to address homelessness have made it difficult for communities to plan strategically for housing needs and for federal agencies such as Education, HHS, and HUD to collaborate effectively to provide comprehensive services.
As long as agencies use differing terms to address issues related to homelessness, their efforts to collaborate will be impeded, and this in turn will limit the development of more efficient and effective programs. Commenting on a draft of this report, HHS and HUD raised concerns about its treatment of homelessness data. We characterize and respond to those comments within the report. GAO recommends that Education, HHS, and HUD (1) develop a common vocabulary for homelessness; and (2) determine if the benefits of collecting data on housing status in targeted and mainstream programs would exceed the costs. To the extent that the agencies explicitly addressed the recommendations in their comments, they agreed with them.
Executive Order 12291, which was issued by President Reagan in 1981, authorized the Office of Management and Budget (OMB) to review all proposed and final federal regulations, except those of independent regulatory agencies. The order also required OMB to monitor agencies’ compliance with the order’s requirements and to coordinate its implementation. Reviews by OMB’s Office of Information and Regulatory Affairs (OIRA) under this order were highly controversial, with critics contending that OIRA exerted too much control over the development of rules and that decisions were being made without appropriate public scrutiny. Executive Order 12866 revoked Executive Order 12291 but continued the basic framework of the regulatory review process. It also reaffirmed the legitimacy of OIRA’s centralized review function and its responsibility for providing guidance to the agencies. However, Executive Order 12866 also made changes to address criticisms of the regulatory program under Executive Order 12291. In its recent draft report to Congress on the costs and benefits of federal regulations, OMB said that one of these changes was “to increase the openness and accountability of the review process.” Specifically, section 6 of Executive Order 12866 requires OIRA to “make available to the public all documents exchanged between OIRA and the agency during the review by OIRA under this section.” Section 6 of the order also requires agencies to (1) “[i]dentify for the public, in a complete, clear, and simple manner, the substantive changes between the draft submitted to OIRA for review and the action subsequently announced” and (2) “[i]dentify for the public those changes in the regulatory action that were made at the suggestion or recommendation of OIRA.” The order does not require agencies to document when no changes are made during OIRA’s review or at the suggestion or recommendation of OIRA. In October 1993, the OIRA Administrator issued guidance to the heads of executive departments and agencies regarding the implementation of Executive Order 12866.
The section of that guidance on “Openness and Public Accountability” that discussed the order’s transparency requirements essentially repeated those requirements without elaboration. Testifying at a September 1996 congressional hearing, the OIRA Administrator stated: “Executive Order 12866 created a more open and accountable review process. The order called for more public involvement, and it specifically delineated who is responsible for what and when, so that interested parties would know the status and results of the Executive review. I have since heard no complaints about accountability and transparency—and I take that as a success.” However, in response to our testimony at the same hearing that EPA and DOT frequently had not documented the changes made to their rules that had been suggested or recommended by OIRA, the Administrator acknowledged that agencies had not “been scrupulously attentive” to that requirement in the order. S. 981, the proposed “Regulatory Improvement Act of 1997,” includes several provisions to strengthen and clarify the executive order’s requirements for public disclosure of and access to information on regulatory review actions. One section of the bill requires agencies to include in the rulemaking record (1) a document identifying, in a complete, clear, and simple manner, the substantive changes between the draft submitted to OIRA and the rule subsequently announced; (2) a document identifying changes in the rule made at the suggestion or recommendation of OIRA; and (3) all written communications exchanged between OIRA and agencies during the review. The bill differs from the order in that it requires (1) agencies (not OIRA) to include in the rulemaking record all written communications (not “documents”) exchanged between OIRA and the agencies and (2) agencies to identify changes made to rules while they were at OIRA and changes made at the suggestion of OIRA in a single document.
Our first two objectives were to determine whether EPA, DOT, HUD, and OSHA had (1) identified for the public the substantive changes between the draft submitted to OIRA for review and the regulatory action subsequently announced and (2) identified for the public those changes in the regulatory action that were made at the suggestion or recommendation of OIRA. Our third objective was to determine whether OIRA had made available to the public all documents exchanged between OIRA and the agency during the review process. We included in our review all of the four agencies’ regulations that were reviewed by OIRA before publication as final rules between January 1, 1996, and March 1, 1997. We obtained a list of all such rules and any related notices of proposed rulemaking from the Regulatory Information Service Center (RISC). We deleted from the list all rules that were withdrawn by the agencies and all proposed rules that were reviewed by OIRA before Executive Order 12866 was issued on September 30, 1993. The proposed rules and final rules constituted the universe of regulatory actions that we reviewed. Table 1 shows the number of proposed rules, final rules, and the total number of regulatory actions that we examined in each agency. We asked officials in each agency how to locate the information that is required by Executive Order 12866 for these rules. In almost all cases, the agencies said that the information was in their public rulemaking dockets. We then reviewed those dockets and other agency files to determine the extent to which documentation of changes made while under review at OIRA met the requirements of the executive order. The order says that the agencies must “identify for the public, in a complete, clear, and simple manner, the substantive changes made” between the draft submitted to OIRA for review and the regulatory action subsequently announced.
However, the order does not define these terms or provide criteria for determining whether agencies have complied with these provisions. To describe differences in the extent of documentation available to the public in the agencies’ files, we coded each regulatory action into one of the following three categories: (1) complete documentation, which could be a “redline/strikeout” version of the rule showing all changes made during the review, a memorandum to the file listing all of the changes, or a memorandum indicating that there were no such changes; (2) some documentation, which means we found indications of changes that had been made during OIRA’s review (e.g., memorandums or redline/strikeout versions), but the files did not indicate whether all such changes had been documented; and (3) no documentation, which means that there were no changes made during the review or that changes were made, but were not documented. The last two descriptive categories do not necessarily indicate whether the agencies have complied with the executive order. However, the categories do provide a relative sense of how transparent a regulatory review is to the interested public. If the agencies’ files indicated that all changes had been documented, we did not verify that assertion. We followed the same general procedure to describe the extent to which the agencies had documented for the public the changes made to the regulatory actions at the suggestion or recommendation of OIRA. The OIRA Administrator’s October 1993 guidance on the implementation of Executive Order 12866 indicated that the changes made to a regulatory action at the suggestion or recommendation of OIRA were a subset of changes made during the period of OIRA’s review. However, in this review we examined the implementation of these requirements separately because changes made at OIRA’s suggestion or recommendation are not necessarily a subset of changes made during the period of OIRA’s review. 
Both OIRA and agency officials have said that OIRA frequently comments on draft rules before they are formally submitted for review. Changes made to rules as a result of those comments would not be the same as changes made “between the draft submitted to OIRA for review and the regulatory action subsequently announced.” Therefore, in this part of the review we looked for documentation of changes that were made at the suggestion or recommendation of OIRA whenever they occurred. We also noted the extent to which the agencies’ documents for the regulatory actions that we reviewed were actually accessible to the public in the agencies’ public dockets or elsewhere. Both EPA and DOT had a number of public dockets, generally corresponding to different subunits in the agencies. For example, within DOT we examined files in the dockets of eight departmental units: the Federal Aviation Administration (FAA), the Federal Railroad Administration (FRA), the Federal Highway Administration (FHWA), the Maritime Administration (MARAD), the National Highway Traffic Safety Administration (NHTSA), the Research and Special Programs Administration (RSPA), the United States Coast Guard (USCG), and the Office of the Secretary of Transportation (OST). In HUD, there was one public docket covering all of the rules in our review. In OSHA, the information for our review was not part of the public docket, but was provided by agency officials. As part of the second objective, we also examined the steps that EPA and DOT took after September 1996 to document changes suggested by OIRA. In our September 1996 testimony, we reported that EPA and DOT frequently had not documented changes to their rules that OIRA had suggested or recommended. As a result of that testimony, both EPA and DOT issued guidance to certain employees emphasizing the executive order’s requirement for documenting such changes. 
We examined EPA and DOT actions after the hearing to determine whether the agencies’ staff were better documenting OIRA-suggested changes. We also determined whether OIRA had taken any actions after the hearing to require agencies to document changes made at OIRA’s suggestion. Regarding the third objective, which was to determine whether OIRA had made available to the public all of the documents exchanged between OIRA and the agencies during the reviews by OIRA, we first noted any evidence in the agencies’ files that documents had been exchanged between the agencies and OIRA during the rulemaking process. In this review, we defined “documents” to include not only drafts of the rule sent to OIRA, but also letters, faxes, memorandums of telephone conversations, and decision memorandums. We then examined OIRA’s public files for each final action for which the agencies’ files indicated documents had been exchanged. In addition, we reviewed OIRA’s files for selected other final actions for which the agencies’ files did not indicate that documents had been exchanged. These actions were selected to obtain dispersion across the agencies and, when combined with the files we were already examining, to review at least one-half of the 82 final regulatory actions. We did not examine OIRA files for any of the related proposed rules because OIRA had already sent most of these older files out to be archived. We coded each of the actions on the basis of whether (1) OIRA and agency files had the same documents, (2) OIRA files did not have documents that we found in the agency files, (3) OIRA files had documents that were not in the agency files, or (4) both OIRA and the agency had documents not found in the other’s files. We conducted this review between March and December 1997 in the Washington, D.C., headquarters offices of each of the four regulatory agencies and OIRA in accordance with generally accepted government auditing standards. 
We provided a draft of this report to the Director of OMB and the Secretaries of HUD, Labor, and DOT, and the Administrator of EPA for their review and comment. Their comments are reflected in the agency comments section of this report. Executive Order 12866 directs agencies to “identify for the public, in a complete, clear, and simple manner, the substantive changes between the draft submitted to OIRA for review and the action subsequently announced.” The 4 agencies had complete documentation of the changes for about 26 percent of the 122 regulatory actions that we reviewed. The agencies had some documentation of changes made during OIRA’s review for another 30 percent of the actions, but the files did not indicate whether all such changes had been documented. The remaining 44 percent of the actions had no documentation available to the public indicating whether changes were made to the draft rule submitted to OIRA. We considered agencies to have completely documented the changes made to the rules during OIRA’s review if the docket included memorandums to the file listing all of the changes made, drafts of the rules indicating all changes that had been made, or agency certifications that no changes had been made. For example, OSHA’s records for its two final rules contained a memorandum to the file that summarized telephone contacts and a meeting between OSHA and OIRA during the review, and all of the changes that were made to the rule resulting from these contacts. The memorandum also indicated whether the changes were in the body of the regulation or in its preamble, and identified some changes as simply minor word adjustments. Some of the DOT dockets, particularly those in FAA and OST, contained a certification signed by a senior agency official indicating that no changes had been made to the rules. 
The dockets for the regulatory actions that had only some documentation contained memorandums and other records in the files indicating that certain changes had been made to the rules in question, but it was unclear whether all of the changes made during OIRA’s review had been recorded. For example, in EPA’s Air and Radiation docket three documents identified changes that had been made to one of the final rules as a result of communications with OIRA at different phases of the review process. However, it was not clear whether these three documents reflected all of the changes that had been made to the rule during OIRA’s review, or whether other changes had been made but not documented. The dockets for other regulatory actions had no documentation of changes made during OIRA’s review. For example, NHTSA’s public rulemaking docket contained a great deal of information related to the development of the four NHTSA rules included in our review. However, the docket did not contain any documents indicating that the rules had been submitted to OIRA, or that changes had been made during or as a result of OIRA’s review. Some of the HUD files contained documents that had been submitted to OIRA, but did not indicate whether any changes were made to the rules. As figure 1 shows, some differences existed among the four agencies in the degree to which they had documented changes made to rules during OIRA’s review. Although all four of the agencies had at least some documentation for over one-half of their regulatory actions, the agencies differed in the degree to which the documentation was complete. Two of the agencies (DOT and EPA) had no documentation for about one-half of their regulatory actions. The remaining agencies (HUD and OSHA) had no documentation for about one-third of their regulatory actions. Executive Order 12866 also directs agencies to identify changes to each regulatory action made at the suggestion or recommendation of OIRA. 
About 24 percent of the regulatory actions that we examined in the four agencies had complete documentation of these changes. Another 17 percent of the actions had some documentation of changes that had been made to the rules, but the files did not indicate whether all such changes had been made at OIRA’s suggestion or whether all OIRA-suggested changes had been documented. The remaining 59 percent of the regulatory actions had no documentation of changes that had been suggested or recommended by OIRA. The manner in which the agencies completely documented the changes made to the rules at OIRA’s suggestion included memorandums to the file listing all such changes, drafts of the rules indicating all changes made because of OIRA, and agency certifications that no OIRA-directed changes had been made. For example, the public docket for the two FRA actions that we examined had memorandums to the file detailing changes made to the draft “pursuant to meetings of appropriate OMB staff and FRA staff.” One of the HUD regulatory actions had a clear, simple memorandum to the file documenting not only the changes the agency made to the rule at the suggestion of OIRA, but also OIRA-suggested changes that the agency decided not to make. For an FAA final regulatory action, changes were noted in a redline/strikeout copy of the rule that identified them as “OMB changes.” An accompanying certification form indicated that all information required by the order was included, so we considered the documentation to be complete. Agencies’ public rulemaking dockets for other regulatory actions had some documentation of OIRA-suggested changes. For some of these actions, the dockets indicated that changes had been made to the rules in question, but it was unclear which specific changes could be traced to OIRA. In other cases, it was unclear whether all OIRA-suggested changes had been documented. 
For example, OSHA’s file for its methylene chloride final rule contained 10 documents indicating that a number of issues had been raised during the months that the rule had been reviewed at OIRA. Some of the documents indicated that specific changes had been made to the rule at OIRA’s suggestion, but the files did not indicate whether these documents reflected all of the changes that OIRA had suggested or recommended. Other documents indicated that OIRA had suggested certain changes to the rule, but it was unclear whether those changes had been made. As figure 2 shows, the agencies differed somewhat in the degree to which they documented OIRA-suggested changes. Also, a comparison of figures 1 and 2 indicates that HUD, DOT, and EPA were less likely to have any documentation of OIRA-suggested changes than documentation of changes made during OIRA’s review. In our September 1996 testimony on the implementation of Executive Order 12866, we reported that only a few of the rules that we examined at EPA and DOT had information in the agencies’ public rulemaking dockets that clearly indicated what changes had been made to the rules at the suggestion or recommendation of OIRA. As a result of our review, both agencies sent guidance to certain staff instructing them to better document OIRA-suggested changes to their rules. In September 1996, EPA’s Director of Regulatory Management and Information sent a memorandum to the agency’s steering committee representatives and regional regulatory contacts instructing them to ensure that the order’s transparency requirements were satisfied for all rules then under development. He suggested using redline/strikeout versions of the draft rule to satisfy these requirements. 
In a November 1996 memorandum, DOT’s Assistant General Counsel for Regulation and Enforcement reminded regulatory officers throughout the Department of their responsibilities under section 6 of Executive Order 12866 to identify for the public the drafts of rulemaking actions provided to OIRA and the substantive changes between the draft submitted to OIRA for review and the action subsequently announced. The memorandum also said that a signed, standard form certifying that these executive order requirements had been met would have to accompany any rules accepted in the rulemaking docket from OST. Those completing the form were required to indicate that the rule was not reviewed by OIRA, that no substantive changes had been made after the rule was submitted, or that the required information was attached. Although the certification form was required only for OST rules, the Assistant General Counsel suggested that other units within DOT use the same form. We examined documentation for EPA and DOT regulatory actions both before and after the September 1996 hearing to determine whether the agencies had better complied with Executive Order 12866 requirements on documenting OIRA-suggested changes. From January through September 1996, OIRA reviewed 18 EPA and 15 DOT final rules. In the period between October 1996 and March 1997, OIRA reviewed 12 EPA and 12 DOT final rules. As shown in table 2, the percentage of EPA rules with no documentation in the rulemaking dockets decreased in the later period, and the percentage of rules with some (but not complete) documentation increased. The percentage of DOT rules with no documentation also decreased, but the percentage with complete documentation increased. Although DOT did not issue its guidance until November 1996, use of the certification form suggested in that guidance resulted in more rules with complete documentation in the later period. 
The rulemaking docket for all but one of the six FAA rules that we reviewed had the certification form that the DOT Assistant General Counsel for Regulation and Enforcement had suggested to indicate compliance with Executive Order 12866. However, in each case these certifications were added to the rulemaking dockets the day of or the day before our review of those dockets. Therefore, it appeared that the certifications were added for our benefit, not as a result of DOT’s guidance. As of December 1, 1997, OIRA had not taken any action since the September 1996 hearing to require agencies to document changes made pursuant to OIRA’s suggestion or recommendation. In fact, the OIRA Administrator has indicated that she does not support this transparency requirement. For example, in testimony before the Senate Governmental Affairs Committee on September 12, 1997, the OIRA Administrator said she opposed including a similar provision in S. 981 because “[b]ased on my four-and-a-half years overseeing the regulatory review process, I strongly believe that this provision is counterproductive to everything we have sought to achieve in carrying out meaningful review.” She also said that “in our review process, it is very often not entirely clear who suggests or recommends a change in a regulation” and that having this requirement may “result in resistance to change lest the ‘record’ reflect a series of ‘gotchas’ by OMB.” One purpose of Executive Order 12866’s transparency requirements is to make information about the rulemaking process available to the public. However, some documents clearly identifying changes made during the OIRA review or at the suggestion of OIRA were not in the public rulemaking dockets. More frequently, documents describing changes to the rules were in the public dockets, but they were difficult to locate because the dockets either did not have indexes or had indexes that were difficult to use without special expertise. 
Some documents identifying changes made during OIRA’s review or at the suggestion of OIRA existed, but they were not available to the public. For example, two of the seven USCG regulatory actions that were included in our review had some documentation of changes made during OIRA’s review or at the suggestion or recommendation of OIRA in the agency’s rulemaking docket. However, it was unclear whether all such changes had been documented. Although USCG had prepared detailed summaries for agency decisionmakers of all of the changes made during OIRA’s review, USCG officials said these summaries were internal communications and, therefore, not available to the public. OSHA had complete documentation of the changes made at the suggestion of OIRA for one of its three regulatory actions included in our review, and had some documentation for another action. However, OSHA maintained the information in files separate from the public rulemaking docket to ensure that it did not become part of the official rulemaking record and, therefore, subject to litigation. OSHA officials said that they would make the documentation available to the public upon request. However, for individuals to request the information, they must first know that the documents exist. The agencies’ public rulemaking dockets varied in the degree to which they could be easily used to find the information about changes in regulatory actions that Executive Order 12866 requires be identified for the public. The information in the dockets for some of the rules was quite voluminous, with numerous documents added to the files during the years in which the rules were being developed. Furthermore, many of the dockets did not have an index to the documents in the files, making it difficult to locate the information mandated by the order within those files. For example, the docket for one rule at FRA contained 19 folders of material related to the development of the rule, some of which were nearly a foot thick. 
FRA did not have a public index to this or other files in its docket, although the agency did have an internal listing of the contents of these folders. FRA officials said the agency’s internal index would become a part of a public, electronic index when the agency moves onto a DOT automated system that was under development at the time of our review. Several other dockets, including those at FAA, USCG, and HUD, did not have indexes for their rulemaking records. Even in the dockets that had indexes to the documents in the files, the indexes were not always very useful in identifying documents related to the OIRA review. Some of the agencies’ indexes (e.g., NHTSA’s index) were simply chronological lists of documents in the files. Although a chronological list is better than no list at all, these lists were often extensive, and the documents in the list were not always clearly identified. For example, in EPA’s Toxic Substances Control Act (TSCA) docket, the file index for one of the rules in our review identified communications between EPA and OIRA staff by the name of the OIRA staff member who was responsible for the review (e.g., “Memo from John Doe”). Therefore, a user of this index would have to know that “John Doe” was the name of an OIRA staff member to use the index to identify the documents reflecting OIRA-suggested changes. In contrast, other agencies have implemented procedures and practices that make locating and using information in their dockets much easier for the public. For example, EPA’s Air and Radiation docket had a consistently structured index for all of its rules, with specific sections in which information related to OIRA’s reviews could be found. OST also had a consistently structured index for rules in its dockets and had automated its docket so that both the indexes and the full text of many documents on the rulemaking process could be accessed electronically. 
Using the automated index greatly facilitated our access to information about documents in the rulemaking dockets. DOT officials told us that the automated system will eventually be extended across the entire Department and that all DOT dockets will be available on the Internet. EPA has also taken some steps to automate its dockets. Executive Order 12866 requires OIRA, at the conclusion of each regulatory action, to make available to the public all documents exchanged between OIRA and regulatory agencies during the review process. To determine whether OIRA had complied with this requirement, we first had to determine what documents had been exchanged between OIRA and the agencies during the review. Therefore, during our examination of the regulatory agencies’ files in relation to the first two transparency requirements, we also noted any evidence of documents that had been exchanged. Relatively few of the agencies’ files contained any indication that documents had been exchanged between the agencies and OIRA. This could indicate that documents are usually not exchanged during the review process or that documents are exchanged, but they are often not recorded in the agency files (because the order does not require the agencies to do so). OIRA officials said that there are relatively few documents exchanged during the review process, other than the rules themselves and any related economic analyses. Officials in one agency told us that most of OIRA’s interactions with the agency during the review process are by telephone or in face-to-face meetings, not by exchanging documents. Because we could not be sure that we had identified all of the documents that had been exchanged between the agencies and OIRA during the regulatory review process, we could not conclusively determine the extent to which OIRA had made such documents available to the public. 
However, the agencies’ files seemed to support OIRA officials’ observations that the documents exchanged are most commonly the draft rules and draft economic analyses. Other documents that the agencies’ files indicated had been exchanged included letters, faxes, and memorandums documenting telephone calls or meetings with OIRA staff summarizing the issues discussed, questions raised, and positions taken by the agencies and OIRA. We examined OIRA’s public files for (1) each final action for which the agencies’ files indicated documents had been exchanged and (2) selected other final actions for which the agencies’ files did not indicate that documents had been exchanged. In total, we reviewed the files for 42 of the 82 final regulatory actions that we examined in the agencies. For 25 of these 42 actions, the OIRA files had the same documents that the agencies’ files indicated had been exchanged or had more documents that had been exchanged than the agencies’ files had indicated. For 17 of the 42 actions, the OIRA files did not have certain documents that the agencies’ files said had been exchanged (although in 7 of these cases, the OIRA files also had documents that were not in the agencies’ files). The OIRA files nearly always contained the draft rules that the agencies’ files indicated had been exchanged. However, OIRA less frequently had the other types of documents that the agencies’ files indicated had been exchanged (e.g., letters, faxes, and memorandums). Executive Order 12866 requires federal agencies to make the regulatory review process more transparent by identifying for the public “in a complete, clear, and simple manner” the substantive changes made to regulatory actions while under review at OIRA, and to identify the changes made at the suggestion or recommendation of OIRA. 
We believe that these public disclosure requirements, combined with the administration’s assertion of their effectiveness, can result in a public perception that information on changes made to regulations while at OIRA and at the suggestion of OIRA is readily available. However, our review of the information available to the public at four agencies indicated that this was usually not the case. The public rulemaking dockets for many of the 122 regulatory actions that we examined did not contain complete documentation of the changes made during OIRA’s review or at OIRA’s suggestion. Some of the files in those dockets indicated that certain OIRA-suggested changes had been made to the rules in question, but these files did not indicate whether all such changes had been documented. Other files contained no documentation of changes made during OIRA’s review or at OIRA’s suggestion. It was unclear whether this absence of documentation meant that no changes had been made to the rules or that the changes were made, but they had not been documented. Some agencies had prepared or collected documentation of these changes, but the documents were not in the public rulemaking dockets. Some of the files in the dockets were extremely voluminous, and, without indexes to the documents in those files, it was difficult to locate the information that the order requires be made available to the public. On the other hand, about 26 percent of the 122 regulatory actions that we reviewed in the 4 agencies had complete documentation available to the public of the changes made to rules while at OIRA, and about 24 percent had documentation of changes made at OIRA’s suggestion. Some of the dockets were well-organized, with clear indexes indicating where changes made during OIRA’s review and at OIRA’s suggestion could be found. Several agencies had begun to automate their dockets so that both indexes and eventually the entire rulemaking record could be accessed electronically by the public. 
These best practices illustrate both how agencies can satisfy the order’s transparency requirements and how they can organize their dockets to facilitate public access and disclosure. As the agency charged with providing guidance and central review of the regulatory process, OIRA is in a position to tell the regulatory agencies how to improve the transparency of the regulatory review process. However, OIRA’s October 1993 guidance on this issue essentially repeated the requirements of the executive order. OIRA did not issue any further guidance on this issue after we noted in September 1996 that EPA and DOT frequently had not documented changes made to rules at OIRA’s suggestion. One resource that OIRA could draw on in developing additional guidance on the order’s transparency requirements would be the best practices that we found in some of the agencies that we reviewed. OIRA’s October 1993 guidance indicated that the changes made at the suggestion or recommendation of OIRA are a subset of the changes made during the period of OIRA’s formal review. However, OIRA frequently comments on draft rules before they are formally submitted for review. Under OIRA’s current guidance, any changes made to the rules as a result of these comments would not need to be documented for the public. S. 981 contains public disclosure requirements that, if enacted into law, would provide a statutory foundation for the public’s right to regulatory review information. We believe that the bill’s requirement that rule changes be described in a single document is a good idea because it would make understanding regulatory changes much easier for the public. However, even if a statute is not enacted, the agencies would still benefit from guidance on how to improve the transparency of the regulatory review process under the order. We recommend that the Administrator of OIRA provide the agencies with guidance on how to implement Executive Order 12866 transparency requirements. 
The guidance should require agencies to include a single document in the public docket for each regulatory action that (1) identifies all substantive changes made during OIRA’s review and at the suggestion or recommendation of OIRA or (2) states that no changes were made during OIRA’s review or at OIRA’s suggestion or recommendation. The guidance should also indicate that agencies should document changes made at OIRA’s suggestion whenever they occur, not just during the period of OIRA’s formal review. Finally, the guidance should point to best practices in some agencies to suggest how other agencies can organize their dockets to best facilitate public access and disclosure. We sent a draft of this report for review and comment to the Director of OMB; the Secretaries of HUD, Labor, and DOT; and the Administrator of EPA. HUD officials said they had no comments on the draft report. The other agencies provided the following comments. On October 30, 1997, EPA’s Director of the Office of Regulatory Management and Evaluation told us that he believed the draft report was factually correct for EPA rules and highlighted the need for improved agency compliance with the docketing requirements of Executive Order 12866. He said that EPA will re-examine the content and implementation of its initial guidance and will issue new guidance or implementation methods to improve compliance with the requirements. Also on October 30, 1997, we discussed the draft report with DOT officials, including the Assistant General Counsel for Regulation and Enforcement. The Assistant General Counsel suggested that we make several changes in the final report. 
Specifically, he said the report should more clearly state that Executive Order 12866 does not require agencies to document instances where no changes were made to rules during OIRA review, and that the absence of documentation of changes made to a rule does not mean that an agency had not complied with the order’s transparency requirements; note that the November 1996 guidance he issued regarding certification of compliance with the executive order applied only to OST and was suggested guidance for the rest of DOT; and reflect the extent of DOT’s efforts to develop best practices for improving transparency of regulatory decisionmaking, particularly in the area of automation. DOT officials also suggested that we modify our recommendation to state that the OIRA guidance should specifically require agencies to document for the record when no changes were made during the OIRA review or at the suggestion or recommendation of OIRA. We agreed with all of these suggestions and made the appropriate changes in this report. On November 4, 1997, we met with OSHA officials, including OSHA’s Director of Regulatory Analysis, to discuss the draft report. We noted that we had changed the draft to address a question raised earlier by an OSHA official. This official had pointed out that one of the OSHA-proposed rules included in our review had been reviewed by OIRA before the issuance of Executive Order 12866 and should not have been subject to the requirements of the order. We deleted this proposed rule from our analysis, thereby reducing the number of OSHA regulatory actions from four to three. We also deleted 6 proposed rules in DOT that had been reviewed by OIRA before the issuance of the order, thereby reducing the number of DOT regulatory actions from 45 to 39. We then recalculated all related statistics and figures to account for these changes. (None of the HUD or EPA proposed rules was reviewed by OIRA before the issuance of the order.) 
OSHA officials also suggested that we provide additional clarification regarding our criteria for distinguishing between actions characterized as having “complete documentation” and those having “some documentation.” In this report, we clarified the definition of “some documentation,” emphasizing that the term referred to those agencies’ files that did not indicate whether all changes had been documented. Finally, OSHA officials said that we should make our recommendation more specific to indicate that the OIRA guidance should require agencies to document all changes made during the OIRA review and at the suggestion or recommendation of OIRA in a single, summary memorandum to the file. We agreed with this suggestion and made appropriate changes to the recommendation. On November 10, 1997, we received a letter commenting on the draft report from the Administrator of OIRA. (See app. I for a reprint of those comments.) The Administrator said that OIRA staff were not surprised by and agreed with several of the issues raised in the draft report. For example, she said that she was not surprised that there may not be a “one-to-one equivalence” between agencies’ files and OIRA files because of differences in Executive Order 12866 requirements between the agencies and OIRA. She said that OIRA’s database indicated that no changes were made to about 40 percent of the regulatory actions in the four agencies, which she suggested was why documentation did not exist in about 40 percent of the actions that we reviewed (because, as she confirmed, the order itself does not require documentation of no changes). She also said that providing an interested individual with a copy of the draft rule submitted for review and the draft on which OIRA concluded its review was an effective way to permit that individual to identify changes made to the draft rule. However, the Administrator also indicated that OIRA disagreed with the draft report in at least three respects. 
First, she said that OIRA disagreed with the draft report’s recommendation that OIRA issue guidance to agencies on how to organize their rulemaking dockets to best facilitate public access and disclosure. She said agencies have developed their own methods of organizing their rulemaking dockets and “it is not the role of OMB to advise other agencies on general matters of administrative practice.” Second, she said that OMB interprets some of the transparency requirements in the order differently than we did. We believe that the order requires agencies to document OIRA-suggested changes whenever they occur. The Administrator said that the order requires agencies to document only OIRA-suggested changes made during the formal period of OIRA’s review, not any changes made at OIRA’s suggestion before that period. Third, the Administrator said that she believes that the requirement that agencies document the changes made at the suggestion of OIRA is “counterproductive,” and that it is irrelevant who gets the “credit” for suggesting changes.

The OIRA Administrator’s statement, in response to our recommendation, that it is not OMB’s role to advise agencies on general matters of administrative practice seems to run counter to the requirements placed on the agency in Executive Order 12866. Section 2(b) of the order states that “[t]o the extent permitted by law, OMB shall provide guidance to agencies . . . ,” and that OIRA “is the repository of expertise concerning regulatory issues, including methodologies and procedures that affect more than one agency . . . .” Furthermore, as the Administrator pointed out, OIRA has already provided agencies with general guidance on the implementation of the order, including the transparency requirements.
Therefore, we retained our recommendation, and, as suggested by DOT and OSHA, made it more specific by suggesting that the guidance require agencies to include a single document in the public docket for each regulatory action that (1) identifies all substantive changes made during OIRA’s review and at the suggestion or recommendation of OIRA or (2) states that no changes were made during OIRA’s review or at OIRA’s suggestion or recommendation. Executive Order 12866 requires that agencies identify for the public (1) the substantive changes that are made to rules while under review at OIRA and (2) the changes that are made at the suggestion or recommendation of OIRA. We believe the Administrator’s view that the second of these transparency requirements only applies to suggestions or recommendations made during the period of OIRA’s formal review reflects a narrow interpretation of the order, and is inconsistent with the intent of the order’s transparency requirements. In her letter, the Administrator said that OIRA tries to consult with agencies “early and often” in the rulemaking process because OIRA can become “deeply” involved in important agency rules “before an agency has become invested in its decision.” However, her interpretation of the order that agencies do not have to document any changes made at OIRA’s suggestion or recommendation during this period would result in agencies’ failing to document OIRA’s early involvement in the rulemaking process. The transparency requirements were included in the order because of concerns during previous administrations that the public could not determine what changes OIRA was making to agencies’ rules. 
Limiting the disclosure of OIRA-suggested or OIRA-recommended changes only to those made during the relatively narrow window of OIRA’s formal review, and specifically excluding changes made during a period in which the Administrator said OIRA can have its greatest impact, is not consistent with the order’s transparency objective. Furthermore, the OIRA Administrator’s comment that “an interested individual” can identify changes made to a draft rule by comparing drafts of the rule seems to change the focus of responsibility as it is stated in Executive Order 12866. The order requires agencies to identify for the public changes made to draft rules. It does not place the responsibility on the public to identify changes made to agency rules. Also, comparison of a draft rule submitted for review with the draft on which OIRA concluded review would not indicate which of the changes were made at OIRA’s suggestion, which is a specific requirement of the order. Finally, the Administrator’s position that the order’s requirement that agencies document the changes made at OIRA’s suggestion is “counterproductive” is unpersuasive for several reasons. First, this transparency requirement was put in place because of criticisms that OIRA exerted too much control over the development of rules and that decisions were being made without appropriate public scrutiny. Therefore, the purpose of this requirement is to allow the public to be able to understand why certain changes were made during the rulemaking process; it has nothing to do with who gets the “credit” for those changes. Second, the Administrator cites no evidence of a counterproductive effect of the requirement that agencies document OIRA-suggested changes. Even if evidence of negative effects were presented, those effects would need to be weighed against the transparency and public disclosure that the requirement permits. 
Finally, if the Administrator believes that this requirement is counterproductive and will result in resistance to change, she could recommend that the President revise the executive order and delete this requirement. Four years after the issuance of the order and the imposition of this requirement, the Administrator has not done so. In response to this comment, we clarified our interpretation of the order’s requirements in the body of the report and specified that OIRA’s guidance should indicate that agencies should document changes made at OIRA’s suggestion whenever they occur. We are sending copies of this report to the Director of OMB; the Secretaries of HUD, Labor, and DOT; and the Administrator of EPA. We are also sending this report to the Chairmen and Ranking Minority Members of (1) the House Committee on Government Reform and Oversight; (2) that Committee’s Subcommittee on National Economic Growth, Natural Resources, and Regulatory Affairs; and (3) the House Committee on the Judiciary, Subcommittee on Commercial and Administrative Law. We will make copies available to others on request. Major contributors to this report are listed in appendix II. Please contact me on (202) 512-8676 if you or your staff have any questions concerning this report.

Alan Belkin, Assistant General Counsel
Susan Michal-Smith, Senior Attorney
Pursuant to a congressional request, GAO reviewed the regulatory review process, focusing on the Office of Information and Regulatory Affairs (OIRA) and four regulatory agencies, the Department of Housing and Urban Development (HUD) and Department of Transportation (DOT), the Department of Labor's Occupational Safety and Health Administration (OSHA), and the Environmental Protection Agency (EPA), and on whether: (1) the regulatory agencies had identified for the public the substantive changes made to their regulations between the draft they submitted to OIRA and the regulatory actions they subsequently announced; (2) the regulatory agencies identified for the public the changes made to their regulations at the suggestion or recommendation of OIRA; and (3) OIRA had made available to the public all documents exchanged between OIRA and the selected agencies during OIRA's review. GAO noted that: (1) EPA, DOT, HUD, and OSHA had complete documentation available to the public of all of the substantive changes made to their rules between the draft submitted to OIRA and the actions subsequently announced for about 26 percent of the 122 regulatory actions that GAO reviewed; (2) for about 30 percent of the regulatory actions, the agencies had some documentation available to the public indicating that changes had been made to the rules while at OIRA, but the information did not indicate whether all such changes had been documented; (3) for the remaining 44 percent of the regulatory actions, the agencies had no documentation available to the public of changes made during OIRA's review; (4) because Executive Order 12866 does not specifically require agencies to document that no changes were made to rules while they were under review at OIRA, the absence of documentation does not necessarily mean that the agencies were not complying with the order; (5) however, it was unclear whether the absence of documentation meant that no changes had been made to the rules or whether changes 
had been made but they had not been recorded; (6) the agencies had complete documentation available to the public of all of the changes that OIRA had suggested or recommended for about 24 percent of the 122 regulatory actions; (7) for about 17 percent of the regulatory actions, the agencies had some documentation available to the public indicating that OIRA had suggested changes to the rules, but the information did not indicate whether all such changes had been documented; (8) for the remaining 59 percent of the actions, the agencies had no documentation available to the public indicating whether changes had been made at the suggestion or recommendation of OIRA; (9) for some of these actions, the agencies had documentation available indicating that changes had been made to the rules during the rulemaking process, but it was unclear whether any of the changes were at OIRA’s suggestion; (10) even for those rules for which the agencies had complete documentation of all changes made while they were at OIRA and at the suggestion of OIRA, the documents were not always available to the public or easy to locate; and (11) GAO could not identify all of the documents that had been exchanged between the agencies and OIRA during the regulatory review process, so it could not be determined whether OIRA had made all such documents available to the public.
Transplants are performed for organs such as kidney, liver, heart, intestine, pancreas, heart-lung, and kidney-pancreas. However, the kidney, liver, and heart are the most commonly transplanted organs. In 2000, doctors performed 13,333 kidney, 4,950 liver, and 2,197 heart transplants. Of these, children made up 617 of the kidney recipients, 569 of the liver recipients, and 274 of the heart recipients. In 1998, organ transplants were performed at 261 centers, each of which had one or more organ-specific transplant programs. Some of these centers accept both adults and children, and others are for children only. In 1998, pediatric kidney transplants were performed at 129 of the 241 centers that performed kidney transplants; pediatric liver transplants were performed at 77 of the 116 centers that transplanted livers; and pediatric heart transplants were performed at 54 of the 134 centers that transplanted hearts. In 1984, Congress enacted the National Organ Transplant Act (P.L. 98-507), which requires HHS to establish the OPTN. In 1986, HHS awarded the OPTN contract to UNOS, which operates the network under HRSA’s oversight. The OPTN develops national transplantation policy, maintains the list of patients waiting for transplants, and fosters efforts to increase the nation’s organ supply. OPTN members include all transplant centers, organ procurement organizations, and tissue-typing laboratories. Only a small fraction of those who die are considered for organ donation. Most cadaveric organs derive from donors who have been pronounced brain-dead as a result of a motor vehicle collision, stroke, violence, suicide, or severe head injury. When an organ becomes available, staff from the local organ procurement organization typically identify potential recipients from the OPTN computerized waiting list.
Patients are ranked on the OPTN waiting list according to points assigned on the basis of time waiting, medical urgency, organ size, and the quality of the tissue-type match between the donor and the potential recipient, as determined by antigen matching. The criteria that determine the order of candidates on the list are applied or defined differently for each type of organ and for pediatric versus adult patients. With certain limitations, organs from pediatric donors can be transplanted into adults, and vice versa. The UNOS computer matches each patient in the OPTN database against a donor’s characteristics and then generates a different ranked list of potential recipients for each transplantable organ from the donor. Organs are generally allocated first to patients waiting in the local organ procurement organization’s service area, with priority based on a patient’s severity of illness. If a matching recipient is not found locally, the organ is offered regionally and then nationally. Organ allocation policies are revised from time to time to reflect advancements in medical science and technology. Title XXI of the Children’s Health Act of 2000 (P.L. 106-310, October 17, 2000) requires the OPTN to recognize the differences in organ transplantation needs between children and adults and adopt criteria, policies, and procedures that address the unique health care needs of children. In addition, the OPTN is to carry out studies and demonstration projects for improving procedures for organ procurement and allocation, including projects to examine and to increase transplantation among populations with special needs, such as children and racial or ethnic minority groups. 
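The point-based ranking and local-first offer sequence described above can be sketched in simplified form. This is an illustration only: the OPTN's actual algorithms differ by organ and are far more detailed, and the field names, point weights, and scoring here are hypothetical.

```python
# Illustrative sketch of point-based organ allocation ranking.
# All fields and weights are hypothetical, not actual OPTN policy.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    days_waiting: int
    medical_urgency: int      # hypothetical scale: higher = more urgent
    antigen_mismatches: int   # 0-6; fewer mismatches = better tissue-type match
    local: bool               # in the donor's local procurement area?

def score(c: Candidate) -> float:
    """Assign points for waiting time, urgency, and tissue match (hypothetical weights)."""
    return (c.days_waiting / 365.0          # time on the waiting list
            + 2.0 * c.medical_urgency       # severity of illness
            + (6 - c.antigen_mismatches))   # quality of the tissue-type match

def rank(candidates: list[Candidate]) -> list[Candidate]:
    """Offer locally first, then beyond the local area, each group by descending score."""
    local = [c for c in candidates if c.local]
    wider = [c for c in candidates if not c.local]
    by_points = lambda c: -score(c)
    return sorted(local, key=by_points) + sorted(wider, key=by_points)
```

Note how the local-first rule dominates: a lower-scoring local candidate is still offered the organ before a higher-scoring candidate elsewhere, mirroring the local-then-regional-then-national sequence the report describes.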
Finally, the act requires the Secretary of HHS to conduct a study and make recommendations regarding the (1) special growth and developmental issues that children have before and after transplant; (2) extent of denials by medical examiners and coroners to allow donation of organs; (3) other special health and transplantation needs of children; and (4) costs of the immunosuppressive drugs that children must take after receiving a transplant and the extent of their coverage by health plans and insurers. (For a discussion of children’s access to these necessary medications, see app. I.) The Secretary must report to the Congress by December 31, 2001. Pediatric patients in need of an organ transplant continue to face a shortage of donated organs. From 1991 through 2000, the number of pediatric organ donors each year has remained relatively constant, even though the number of potential pediatric donors decreased. The number of adult donors has increased significantly during the same period, in large part because donor eligibility criteria have been expanded to include older donors and donors with certain diseases that were not accepted in the past. Simultaneously, the demand for organs for pediatric patients has grown substantially, with the number of children on waiting lists for organ transplants more than doubling. However, compared to adults, children account for a small number of transplant candidates. Several factors can prevent the recovery of organs from a potential donor. Refusal by the family to give consent for donation is the primary reason for nonrecovery of an organ, but failure by health professionals to identify potential donors or approach families and refusal by medical examiners and coroners to release the body also account for significant losses of transplantable organs. Nonetheless, organs are recovered from a higher proportion of potential pediatric donors than potential adult donors. 
The number of pediatric donors has held relatively steady despite a drop in the number of potential donors. Our analysis of 1989 through 1997 mortality data for children showed a 20-percent decline in deaths of the kinds that are most likely to result in organ donation, such as those resulting from head trauma, motor vehicle collisions, and violence. (See app. II for a complete list of these causes of death.) Mortality for potential donors up to age 19 years declined from 24,069 deaths in 1989 to 19,327 in 1997, the latest data available at the time of our analysis (see table 1). OPTN data show that from 1991 through 2000, while the number of pediatric donors remained relatively constant, the number of adult donors increased 45 percent (see fig. 1). The large increase in the number of adult donors is primarily due to changes in the criteria for accepting organs from a donor. At one time, organs were accepted only from someone who had been declared brain-dead and was relatively young and free from diseases that could affect organ quality. However, because of the continuing shortage of transplantable organs, transplant professionals have gradually expanded the criteria for acceptable organs. Older individuals and persons with certain medical conditions who previously would have been excluded from donating organs can now be donors. From 1991 through 2000, the number of cadaveric donors aged 50 to 64 increased 108 percent, and the number of cadaveric donors aged 65 or older increased 272 percent. The number of children waiting for a transplant has increased over time, but not as much as for adults (see fig. 2). OPTN data show that the number of pediatric patients awaiting transplants increased from 1,010 in 1991 to 2,299 in 2000, a 128-percent increase. The number of adults on the waiting list has increased even faster, from 23,709 in 1991 to 77,047 in 2000, a 225-percent increase.
These increases have been spurred by advances in medical science and technology, which have made transplantation a more acceptable medical procedure; improvements in immunosuppressive medications, which have increased survival rates; and an increase in the incidence of certain diseases that lead to end stage organ failure. Despite these increases, the proportion of patients awaiting transplant who are children has remained fairly constant from 1991 through 2000, at between 3 and 4 percent overall. Several factors can prevent the recovery of organs from potential pediatric and adult donors and thus contribute to the continuing shortage of transplantable organs for both children and adults. For example, for many potential donors, families refuse to give consent for organ donation. For others, health care professionals may fail to offer the families the opportunity to donate. Further, some medical examiners and coroners believe that the need to preserve forensic evidence in certain types of cases, such as suspected child abuse and sudden infant death syndrome, makes it impossible for them to allow organ donation to proceed. The Association of Organ Procurement Organizations (AOPO) recently conducted a study at 31 organ procurement organizations on the reasons why potential adult and pediatric donors do not become organ donors. The study found that consent was not given for 39 percent of potential donors and that only 41 percent of suitable individuals actually became organ donors. AOPO provided us with the survey data from the referral, request, and organ recovery processes for the pediatric patients. As our analysis shows in table 2, of the 2,420 potential pediatric donors, organs were recovered in 1,230 cases, or about 51 percent of pediatric cases, a rate higher than the overall donation rate.
Family refusal (25 percent) was the most common obstacle to organ recovery, but this occurred less frequently for potential pediatric donors than for the entire group of potential donors. Most pediatric organs are transplanted into adults because adults make up the vast majority of patients waiting for an organ transplant and therefore are more likely to be at a higher status on local organ waiting lists than children. However, the degree to which pediatric organs are transplanted into adults varies by organ. In particular, adult patients receive more pediatric kidneys than pediatric patients do, partly because of the importance of tissue-type matching criteria in the allocation of kidneys. While most pediatric kidneys are transplanted into adults, adult kidneys are sometimes transplanted into children. The situation is different for livers and hearts, where organ size is an important determinant of suitability. Livers and hearts from children under 10 are usually transplanted into pediatric patients, whereas those from children aged 11 to 17 are usually transplanted into adults. Figure 3 shows the distribution of pediatric kidneys, livers, and hearts to pediatric and adult recipients. (See app. III for a detailed listing of the distribution of kidneys, livers, and hearts by age of donor and recipient.) Pediatric livers and hearts that are given to adults have sometimes been refused beforehand for a pediatric patient by the patient’s physician for various medical or logistical reasons. Adult organs are also transplanted into children, but in much smaller numbers. Although the majority of pediatric kidneys are transplanted into adults, some adult kidneys are transplanted into children. From 1994 through 1999, adult donors provided 81 percent of the kidneys procured and pediatric donors provided 19 percent (see table 4 in app. III). Of the adult kidneys, 4 percent were transplanted into children. Of the pediatric kidneys, 93 percent were transplanted into adults.
Figure 4 shows the distribution of pediatric kidneys by age of donor and recipient. During that period, 32 percent of the kidneys given to pediatric recipients came from children, and 68 percent came from adults. Kidneys from pediatric donors are most often transplanted into adults because children make up only a small portion of the kidney waiting list and because of the importance of antigen matching as a ranking factor for this organ. Also, the matching criteria for kidneys generally do not include the size (weight and height) of the donor and recipient. When kidneys from small children are given to adults, they are typically transplanted en bloc, meaning that both kidneys are transplanted into the recipient. Transplant center representatives told us that adult kidneys are often preferred for children because of the larger kidney mass. If complications occur, the larger kidney is more apt to continue functioning than a small, pediatric kidney. For liver transplants, the sizes of the donor and the recipient are factors that are considered to obtain an organ of compatible size. From 1994 through 1999, adult donors provided 78 percent of the livers procured and pediatric donors provided 22 percent (see table 5 in app. III). Of the adult livers, 4 percent were transplanted into children. Of the pediatric livers, 63 percent were transplanted into adults, but this varied greatly by age of the donor. Most livers (81 percent) from donors aged 5 years or younger went to recipients in the same age group, and 4 percent went to adults. For the 6- to 10-year-old donors, 47 percent of the livers went to adult recipients, and for the 11- to 17-year-old donors, 89 percent of the livers went to adult recipients. Figure 5 shows the distribution of pediatric livers by age of donor and recipient from 1994 through 1999. During that period, 72 percent of the livers given to pediatric recipients came from children, and 28 percent came from adults.
Unlike kidneys and hearts, livers can be reduced in size or split to accommodate the size of the recipient. A reduced-size liver from an adult donor can be transplanted into a pediatric patient. A split liver can yield a portion for an adult and a portion for a child. However, the number of livers that are either reduced or split is small. From 1994 through 1999, fewer than 2 percent of donor livers were reduced for transplantation, and about 1 percent were split for transplantation. Although using reduced or split livers can provide a needed transplant for children, initial studies found that survival rates were lower for pediatric recipients of these types of liver transplants. However, a recent OPTN analysis of 1997-99 transplants has shown similar 1-year survival rates for whole and split-liver transplants. Sometimes an organ from a pediatric donor is transplanted into an adult even though there is a higher-ranking pediatric patient waiting. This only occurs if the transplant center refuses the organ for the higher-ranked patient. According to OPTN data, 1,122 liver transplants occurred during 1997 and 1998 in which an adult recipient received a pediatric organ. Of these, 222 livers were each refused for at least one potential pediatric recipient who was ranked higher on the waiting list than the adult recipient. The most common reasons for refusing the pediatric liver for a pediatric patient involved administrative reasons (e.g., medical judgment, transportation, logistics, and distance concerns) (33 percent), donor size and/or weight (26 percent), and poor donor quality (18 percent). From 1994 through 1999, adult donors provided 75 percent of the hearts procured and pediatric donors provided 25 percent (see table 6 in app. III). Of the adult hearts, 3 percent were transplanted into children. Of the pediatric hearts, 39 percent were transplanted into adults, but this varied greatly by age of donor. 
For heart transplants, organ size is critically important both to proper functioning and to proper fit into the chest cavity. Hearts from small children, aged 5 years or younger, are therefore likely to be transplanted into children of the same age group. Of the hearts recovered from donors aged 5 years and younger, 93 percent were transplanted into recipients in the same age group, and 1 percent went to adults. During the same period, adults received about 24 percent of the hearts from donors aged 6 through 10 years, and 89 percent from donors aged 11 through 17 years. Figure 6 shows the distribution of pediatric hearts by age of donor and recipient. During that period, 83 percent of the hearts given to pediatric recipients came from children, and 17 percent came from adults. OPTN data indicate that 664 heart transplants occurred during 1997 and 1998 in which a pediatric organ was transplanted into an adult. Of these, 75 hearts were each refused for at least one pediatric patient who was ranked higher on the waiting list than the adult recipient. In these instances, the most common refusal reasons were donor quality (17 percent), donor size and/or weight (17 percent), administrative reasons (14 percent), and abnormal echocardiogram (14 percent). Although the patterns vary by organ and present a complex picture, pediatric patients appear to be faring as well as or better than adult patients, both while on the waiting list and after transplantation. Data from the OPTN and HHS on four key measures—time on the waiting list, deaths while waiting for a transplant, and 1- and 5-year post-transplant survival— show that children appear to fare as well as or better than adults, with some exceptions for very young patients and heart transplant patients. Other measures of importance for pediatric patients, such as growth and development, are not routinely part of the current OPTN data collection. Pediatric patients wait fewer days on average than adults for transplants. 
With the exception of infants under 1 year of age and heart transplant patients, death rates for pediatric patients on the waiting list are lower than those for adults. Again with the exception of infants under 1 year old, post-transplant survival rates for children generally appear to be equivalent to or better than those for adult patients at the 1- and 5-year post-transplant points. However, because the number of pediatric patients is small, variation across time by even a few pediatric patients on any of these measures could result in relatively large changes in the percentages. We report on the most current data available. In general, pediatric patients wait fewer days than adults for transplants (see fig. 7). Adults are likely to wait about twice as long as children for a kidney transplant. For patients added to the waiting list for a transplant in 1997, the median waiting time for pediatric kidney recipients ranged from 389 days for 6- to 10-year-olds to 548 days for 11- to 17-year-olds, while for adults the range was from 1,044 days for 18- to 34-year-olds to 1,150 days for 50- to 64-year-olds. For livers and hearts, the median waiting time for adult candidates was two to three times as long as it was for children. For livers, median waiting times for patients added to the waiting list in 1999 ranged across age subgroups from 182 to 318 days for children through age 10. For children aged 11 to 17 years, however, the waiting time was similar to waiting times for adults. Candidates aged 11 through 17 years waited 746 days, whereas adult waiting times ranged across age subgroups from 636 to 795 days. Across all age groups, waiting times for hearts were much shorter than they were for kidneys and livers because survival is lower without a transplant. Among heart transplant candidates added to the waiting list in 2000, median waiting times for children ranged from 52 to 86 days and for adults from 137 to 242 days across the different age subgroups. 
The death rates for pediatric patients on the waiting list vary considerably by organ, with pediatric patients having slightly lower rates than adults for kidneys and livers, but higher rates than adults for hearts (see fig. 8). In 2000, death rates for children waiting for a kidney transplant ranged from 0 to 92 per 1,000 patient risk years (i.e., years on the waiting list), whereas for adults they ranged from 36 to 104. Infants under 1 year old who were awaiting liver or heart transplants had considerably higher death rates than other pediatric or adult age groups; however, pediatric patients aged 1 year or older waiting for a liver transplant had lower death rates than adults. For patients waiting for a heart transplant, pediatric patients of all age groups had higher death rates than did adults. With the exception of infants under 1 year old, post-transplant survival rates (i.e., the percentage of patients alive at 1 and 5 years after transplant) for children generally appear to be as good as or better than those for adults (see figs. 9 and 10). In general, 1-year survival rates vary more by type of organ than they do by age group, with kidney transplant recipients having the highest survival rates and heart transplant patients having the lowest survival rates. Overall, survival rates for children at 5 years after transplant are better than adult survival for kidneys and livers. Children 5 years old and younger have lower 5-year survival rates for heart transplants. Organ allocation policies provide a number of protections for children awaiting transplants. The organ transplant community has recognized the distinctive needs of children waiting for a transplant, and the OPTN has revised organ allocation policies over time to consider the pediatric patient. The priority a child receives takes into account differences between children and adults in the progression and treatment of end stage organ disease. 
Prolonged waiting times can be more harmful for children than for adults because disease progression in children can be faster and their growth and development can be compromised without timely transplantation. The policies differ for each organ. For example, waiting time requirements for kidney transplants are less stringent for pediatric patients than for adult patients because of the unique problems children experience with end stage renal disease, including difficulties with dialysis. For livers, research showing better survival for pediatric patients who received a pediatric liver led to a policy change giving priority for pediatric livers to pediatric patients. For hearts, medical urgency status is determined differently for pediatric patients because pretransplant treatments appropriate for adults, such as heart assist devices, cannot always be used for children who are waiting for transplants. Current kidney allocation policy provides several protections for pediatric kidney patients because of the unique problems they experience in association with end stage renal disease. These problems include dialysis difficulties and disruption of growth and development due to renal failure. Early transplantation can avoid or ameliorate many of the effects of end stage renal disease experienced by pediatric patients. One advantage the allocation policy gives to pediatric patients concerns waiting time, one factor in determining priority for obtaining a transplant. Waiting time for children is measured from when they are placed on the waiting list, whereas, since changes to the adult kidney allocation policy in January 1998, waiting time for adults begins when they reach a certain stage of disease. Therefore pediatric patients can begin moving up in priority on the waiting list at an earlier point in their disease progression than can adult patients. 
In addition, pediatric patients receive higher priority for kidney allocation at the time of listing and until they reach 18 years of age, based on their age at listing. The criteria for granting this priority were first implemented by the OPTN in 1990 and have been altered several times, most recently in November 1998. Kidney transplant candidates less than 11 years of age at listing are assigned four additional points, and candidates aged 11 through 17 years are assigned three additional points. Another advantage was introduced by the OPTN in November 1998. It provides that patients who are less than 18 years old at listing, and have not received a transplant within a specified amount of time, must be the first in line to receive available kidneys, except for those that must be allocated to a patient with a perfect antigen match, to a patient needing a kidney plus a nonrenal organ, or to a patient whose immune system makes it difficult to receive organs. These specified times are within 6 months of listing for candidates up to and including 5 years of age, 12 months for those from 6 to 10 years, and 18 months for those from 11 to 17 years. The liver allocation policy for pediatric patients has been revised several times since 1994 to address conditions and challenges unique to pediatric patients. Children with chronic liver disease may deteriorate rapidly and unpredictably. Their growth and development may also be affected. The policy revisions redefine medical urgency criteria, focus on disease progression in children, and recognize factors distinctive to pediatric liver candidates. In June 2000, the OPTN approved a policy to give pediatric liver transplant patients preference over adult patients for livers from pediatric donors. Prior to the implementation of this change, the age of the donor was not a factor. Now, a pediatric liver is offered to a pediatric patient before an adult patient with the same medical urgency within the same organ distribution area. 
If no local matches occur in a given medical urgency category, the pediatric liver will be offered to a pediatric patient before an adult patient with the same medical urgency at the regional level. This change was made in response to the finding that pediatric liver transplant recipients have higher survival rates and better graft survival if they are transplanted with a pediatric liver rather than an adult liver. A study showed that pediatric patients receiving livers from pediatric donors during 1992 through 1997 had a 3-year graft survival rate of 81 percent, compared to 63 percent for children receiving an adult liver. Adults, however, had similar 3-year graft survival rates regardless of donor age. The OPTN policy also provides an advantage for pediatric patients with chronic liver failure. The policy places these patients at the highest medical urgency level when their condition worsens, a provision that is not in place for adult patients with chronic liver failure. Moving pediatric patients to the highest category provides the advantage of access to donated organs locally and regionally before all patients in lower categories. The heart allocation criteria have also been revised recently to reflect differences in treatment and progression of heart disease between children and adults. Before these revisions, the use of certain mechanical assist devices or other monitoring and treatment therapies was required for any patient to be included in the highest medical urgency categories. However, because some of these devices and therapies are generally not used with pediatric patients, the OPTN removed this requirement for pediatric patients in January 1999. The OPTN implemented two further revisions in May 2000. One change allows pediatric patients on the waiting list for a heart to retain their medical urgency status when they turn 18 rather than being subject to adult criteria. 
Another revision gives priority to pediatric heart transplant candidates, within each medical urgency category, for hearts recovered from 11- to 17-year-old donors. Children constitute a small proportion of patients in need of an organ transplant, but organ allocation policies have been designed to provide this vulnerable population with some special protections. Our examination of transplantation patterns across age groups and recent data on waiting times and death and survival rates indicates that pediatric patients do not appear to be at a disadvantage in the competition for scarce organs. These data show comparable or better outcomes for pediatric patients even before the most recent policy changes, such as the change to prioritize pediatric livers for pediatric recipients. We provided HHS with the opportunity to comment on a draft of this report. HHS provided technical comments, which we have incorporated where appropriate. We also provided a draft of the report to UNOS, and it provided technical comments, which we have incorporated where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to others who are interested and make copies available to others who request them. If you or your staffs have any questions about this report, please call me at (202) 512-7119. Another contact and key contributors to this report are listed in appendix IV. Coverage for immunosuppressive medications may be extended to children under Medicare, Medicaid, and private insurance. Pediatric patients may also gain access to prescription drug coverage through special state insurance programs for children. However, both adults and children may have difficulty in obtaining and retaining insurance coverage for the expensive immunosuppressive medications necessary for survival following transplantation. 
Further, gaps in coverage may occur during a transition from one type of insurance to another. For example, if a parent loses Medicaid eligibility, a child’s eligibility status could also be affected. In addition, coverage problems can arise for both Medicaid- and private-insurance-covered pediatric patients when they reach adulthood. Transplant recipients covered by Medicaid as children may become ineligible for continued coverage if they are able to obtain employment as they reach adulthood. Children covered by private insurance under a parent’s policy may be unable to afford coverage, given their expensive preexisting medical condition, when they grow too old to be covered by a parent’s policy. Data on the costs of immunosuppressive medications, actual payments, and patient cost-sharing by the various insurers are not readily available, so the level of the coverage cannot be specified with certainty. The proportion of transplant patients covered by different insurance programs can be used to derive an indication of coverage for immunosuppressive medications. Data from the Organ Procurement and Transplantation Network (OPTN) on the expected sources of payment for the pediatric transplants performed from 1997 through 1999 may serve as a general estimate of the share of immunosuppressive medications for children paid for by Medicare, Medicaid, and private insurance. OPTN data show that 4,835 transplants were performed on children up to age 17 from 1997 to 1999. Of these, 2,775 transplants were for livers and hearts, and 2,060 were for kidneys. As figure 11 shows, private insurance paid for almost half of the pediatric transplants for these three organs performed from 1997 through 1999, while Medicaid paid for 25 percent and Medicare paid for 14 percent of these transplants. 
For the same period, Medicare paid for an estimated 30 percent of pediatric kidney transplants because of its special coverage for kidney patients under the End-Stage Renal Disease (ESRD) program. Medicare coverage for transplants and the associated medications is provided to children either under a special entitlement to the Medicare program created by the Congress for those diagnosed with ESRD or by virtue of a parent’s enrollment as an eligible Medicare beneficiary. The Medicare program has special entitlement rules for patients with ESRD, the stage of kidney impairment that is considered irreversible and requires either regular dialysis or a kidney transplant to maintain life. To be eligible for Medicare entitlement as an ESRD patient, the patient generally must have been on dialysis for 3 months and must be (1) entitled to a monthly insurance benefit under title II of the Social Security Act (or an annuity under the Railroad Retirement Act), (2) fully or currently insured under Social Security, or (3) the spouse or dependent child of a person who meets at least the first 2 requirements. Currently, ESRD patients’ entitlement to Medicare—and thus coverage for immunosuppressive medications—ends 36 months after a transplant is performed. In contrast, individuals who are eligible for Medicare under other entitlement rules—that is, age 65 or disabled, and eligible for Social Security or Railroad Retirement benefits—currently receive unlimited coverage for immunosuppressive medications for the life of the transplant under Part B. Originally, Medicare limited immunosuppressive drug coverage to 1 year. However, the Omnibus Budget Reconciliation Act of 1993 (P.L. 103-66) expanded this coverage with a series of annual 6-month increases beginning in 1995. As a result, by 1998, Medicare patients received immunosuppressive medication coverage for 36 months after a transplant operation. 
In 1999, the Medicare, Medicaid, and SCHIP Balanced Budget Refinement Act of 1999 (P.L. 106-113) extended this immunosuppressive drug coverage benefit for an additional 8 months. Most recently, the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (P.L. 106-554) eliminated all time limits for immunosuppressive drug coverage under Part B of Medicare. Medicaid is a joint federal/state entitlement that annually finances health care coverage for more than 40 million low-income individuals, over half of whom are children. Medicaid coverage for children is comprehensive, offering a wide range of medical services and mandating coverage based upon family income in relation to the federal poverty level (FPL). Federal law requires states to cover children up to age 6 from families with incomes up to 133 percent of the FPL, and children ages 6 to 15 for incomes up to 100 percent of the FPL. Medicaid benefits are particularly important for children because of Medicaid’s Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) Program. EPSDT, which is mandatory for categorically needy children, provides comprehensive, periodic evaluations of health and developmental history, as well as vision, hearing, and dental screening services to most Medicaid-eligible children. Under EPSDT, states are required to cover any service or item that is medically necessary to correct or ameliorate a condition detected through an EPSDT screening, regardless of whether the service is otherwise covered under a state Medicaid program. This would include immunosuppressive drugs. Private insurance, such as employer-sponsored health plans, generally covers all aspects of organ transplants, including follow-up care and necessary medications. Information is not readily available on private insurance coverage specifically for immunosuppressive medications. 
However, according to a 1998 national survey of employer-sponsored health plans, nearly all employers that offer health benefits include benefits for outpatient prescription drugs. In addition, a Kaiser Family Foundation survey of employer health benefits found that 96 percent of all firms with conventional fee-for-service plans and 99 percent of those with managed care plans cover prescription drugs. Privately insured organ transplant patients most likely will incur additional expenses for medications, however, such as out-of-pocket expenses for deductibles and copayments, because of limits on coverage. A recent survey of employers with 1,000 or more employees on strategies to control prescription drug expenditures found that 6 percent of employers cap annual benefits and 10 percent are considering doing so. The study also found that 41 percent of employers limit the quantities of prescription drugs and 7 percent are considering it. Moreover, 40 percent of employers now require higher copayments than previously, and 39 percent are considering it. The causes and circumstances of death that could reasonably result in a declaration of brain death and from which organ donation might be possible are listed in table 3. We used the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) codes to classify deaths by causes and circumstances. The following tables show the distribution of kidneys, livers, and hearts procured from all donors from 1994 to 1999, by age of donor and recipient. In addition to the above, Donna Bulvin, Charles Davenport, Roy Hogberg, Behn Miller, and Roseanne Price made key contributions to this report.
Pediatric patients in need of an organ transplant face a shortage of donated organs. The number of pediatric organ donors has remained relatively constant from 1991 to 2000, despite a drop in potential donors. The number of adult donors rose 45 percent during the same period, in large part because donor eligibility criteria have been expanded to include older donors and donors with diseases that have been prohibited in the past. Organ waiting lists for pediatric patients have more than doubled. Compared to adults, however, children account for a small number of transplant candidates. The degree to which pediatric organs are transplanted into adults varies by organ. Pediatric patients appear to be faring as well as or better than adult patients, both while on the waiting list and after transplantation. Allocation policies for kidneys, livers, and hearts provide several protections for children awaiting transplants. The priority a child receives takes into account differences between children and adults in the progression and treatment of end stage organ disease, with the policies differing for each organ.
The purposes of the Single Audit Act are to promote sound financial management, including effective internal controls, with respect to federal awards administered by nonfederal entities; promote the efficient and effective use of audit resources; and ensure that federal departments and agencies, to the maximum extent practicable, rely upon and use single audit work. For the audit period of this report, the Single Audit Act, as implemented by OMB Circular No. A-133, requires nonfederal entities that expend $500,000 or more in federal awards in a fiscal year to have either a single audit or program-specific audit conducted. Federal awards include grants, loans, loan guarantees, property, cooperative agreements, interest subsidies, insurance, food commodities, direct appropriations, other assistance, and federal cost-reimbursement contracts. A single audit is an audit of both the entity’s financial statements and expenditures of federal awards. The Single Audit Act mandates that federal agencies assume oversight responsibility for the funds that they award to nonfederal entities. OMB Circular No. A-133 requires that federal awarding agencies ensure that award recipients complete and submit single audit reports within the earlier of 30 days after the receipt of the auditor’s report or 9 months of the award recipient’s fiscal year-end, unless a longer period for audit is agreed to in advance by the cognizant or oversight agency for audit. Moreover, OMB Circular No. A-133 directs federal agencies to take appropriate action using sanctions in cases where an award recipient is unable or unwilling to have an audit conducted in accordance with the circular’s requirements. The sanctions may include withholding a percentage of federal awards until the audit is completed satisfactorily, withholding or disallowing overhead costs, suspending federal awards until the audit is conducted, or terminating the federal award. OMB Circular No. 
A-133 also requires each award recipient to submit an audit reporting package to the FAC for distribution to each federal agency responsible for programs for which the audit report identifies a single audit finding and for archival purposes. The reporting package is to include (1) the award recipient’s financial statements and schedule of expenditures of federal awards; (2) a summary schedule of prior audit findings, including the status of all single audit findings included in the prior audit’s schedule of findings and questioned costs for federal awards; (3) the auditor’s report (including an opinion on the award recipient’s financial statements and schedule of expenditures of federal awards, reports on internal control and compliance with laws, regulations, and provisions of contracts or grant agreements, and a schedule of findings and questioned costs); and (4) a corrective action plan. In addition, OMB Circular No. A-133 requires federal awarding agencies to review award recipients’ plans to correct single audit report findings and issue a written management decision on those plans within 6 months of receipt of the single audit report for each single audit finding. A management decision entails the federal awarding agency or pass-through entity evaluating the single audit report findings and the award recipient’s corrective action plan and issuing a written decision on whether the federal awarding agency is satisfied with the corrective action plan and, if not, what corrective action is necessary. Federal award recipients are required to submit their single audit reporting packages to the FAC within the shorter of 30 days after receipt of the single audit report from the auditor or 9 months after the end of the award recipient’s fiscal year. Federal awarding agencies are to ensure that each award recipient submits its single audit reporting package to the FAC in accordance with these requirements. 
Under the time frames provided by the Single Audit Act and OMB Circular No. A-133, it may take up to 15 months for the award recipient to initiate corrective action for single audit findings. For example, if the auditor’s single audit report identifies a single audit finding for an entity that has a June 30, 2013 fiscal year-end, the award recipient must submit the single audit report along with its corrective action plan to the FAC, at the latest, within 9 months of the fiscal year-end or no later than March 31, 2014. The federal awarding agency would then have 6 months or until September 30, 2014, from receipt of the single audit report to communicate a written management decision to the award recipient, and the award recipient is to initiate corrective action. Issuance of a timely management decision is critical because OMB Circular No. A-133 allows award recipients to consider a single audit finding as no longer valid and to take no further action if all the following have occurred: a management decision was not issued, 2 years have passed since the audit report that contained the single audit finding was submitted to the FAC, and the federal agency or pass-through entity is not currently following up with the award recipient on the single audit finding. In some cases, action to correct the single audit finding is not started until the award recipient receives the management decision. As a result, it may take up to 15 months after the end of the fiscal year in which the single audit finding was initially identified before corrective action has begun. Award recipients report their progress in a schedule of prior single audit findings where the status of the single audit finding is reported as either corrected (closed), partially corrected, or not corrected (open). The auditor then reviews this schedule and includes it in the next single audit report. 
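The up-to-15-month window is simple calendar arithmetic. The sketch below is our own illustration in Python; the end-of-month counting convention is an assumption made so the computed dates match the report's example (June 30, 2013 fiscal year-end, report due March 31, 2014, management decision due September 30, 2014).

```python
from datetime import date
import calendar

def end_of_month_offset(d, months):
    """Last day of the month that falls `months` calendar months after d.
    Assumes deadlines run to the end of the month, matching the report's
    example (June 30, 2013 + 9 months -> March 31, 2014)."""
    idx = d.month - 1 + months
    year, month = d.year + idx // 12, idx % 12 + 1
    return date(year, month, calendar.monthrange(year, month)[1])

fy_end = date(2013, 6, 30)                         # award recipient's fiscal year-end

# Single audit report (with corrective action plan) due within 9 months.
report_due = end_of_month_offset(fy_end, 9)        # March 31, 2014

# Agency management decision due within 6 months of receiving the report
# (assuming, for illustration, the report arrives on its due date).
decision_due = end_of_month_offset(report_due, 6)  # September 30, 2014

# Corrective action may therefore not begin until up to 15 months after
# the fiscal year-end in which the finding was identified.
months_elapsed = (decision_due.year - fy_end.year) * 12 \
    + (decision_due.month - fy_end.month)
```

The same arithmetic explains why a delayed management decision matters: every month the agency adds beyond the 6-month decision window pushes the start of corrective action correspondingly later.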
When the awarding agency delays issuing a management decision to the award recipient, initiation of corrective action may also be delayed. As a result, the single audit finding may remain open. Figure 1 provides an illustrative timeline for single audit reports with single audit findings. In December 2013, OMB issued the Uniform Guidance to supersede and streamline existing requirements for federal awards. The Uniform Guidance required federal agencies to update their regulations and policies for federal awards. Portions of the Uniform Guidance are aimed at strengthening federal agency oversight of single audits. The Uniform Guidance required that federal agencies implement the policies and procedures applicable to federal awards by issuing a regulation to be effective by December 26, 2014, unless different provisions are required by statute or approved by OMB. On December 19, 2014, OMB and all federal award-making agencies issued a joint interim final rule to implement the Uniform Guidance, fulfilling the requirement contained in the Uniform Guidance. We previously reported that all federal grant- awarding agencies issued regulations implementing this guidance. Both selected subagencies in Agriculture and HHS and a selected subagency in Transportation (the Federal Transit Administration) did not effectively design policies and procedures to reasonably assure that award recipients submitted their single audit reports within the time frames required by OMB Circular No. A-133. In addition, both selected subagencies in Education and HUD and the other selected subagency in Transportation (the Federal Highway Administration) effectively designed policies and procedures to reflect that requirement. OMB Circular No. 
A-133 requires that federal awarding agencies ensure that award recipients submit single audit reports within the shorter of 30 days after receipt of the report from the auditor or 9 months of the award recipient’s fiscal year-end unless a longer period is agreed to in advance by the cognizant or oversight agency for audit. Standards for Internal Control in the Federal Government provides the overall framework for establishing and maintaining internal control and states that control activities should be effective and efficient in accomplishing the agency’s control objectives. The standards further state that management should clearly document internal control in a manner that allows the documentation to be readily available for examination. The five components of internal control are as follows:

Control Environment: Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management.

Risk Assessment: Internal control should provide for an assessment of the risks the agency faces from both external and internal sources.

Control Activities: Internal control activities help ensure that management’s directives are carried out. The control activities should be effective and efficient in accomplishing the agency’s control objectives.

Information and Communications: Information should be recorded and communicated to management and others within the entity who need it and in a form and within a time frame that enables them to carry out their internal control and other responsibilities.

Monitoring: Internal control monitoring should assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved.

To implement the internal control standards, each agency and subagency is to develop detailed policies and procedures (control activities) to execute its responsibilities under OMB Circular No. 
A-133. Without effectively designed policies and procedures for determining whether single audit reports have been submitted timely, agencies are hampered in their ability to take timely and effective action to reasonably assure that award recipients correct all single audit findings. Effective federal agency oversight would help to reasonably assure effective program operations and minimize improper payments. We identified four key steps relating to the design of single audit policies and procedures that would assist federal awarding agencies in fulfilling their responsibilities under OMB Circular No. A-133 for reasonably assuring that award recipients submit single audit reports timely. To that end, designing policies, procedures, and mechanisms that include the following steps would help provide reasonable assurance that agencies can fulfill these responsibilities: (1) identify award recipients that should have submitted single audit reports, (2) verify that the award recipients submitted single audit reports, (3) determine whether the single audit reports were submitted within the required time frames, and (4) take action to obtain the single audit reports when award recipients did not submit the reports within the required time frames. Figure 2 provides an overview of our assessment of the selected subagencies’ design of policies and procedures relating to the award recipients’ timeliness of single audit report submission for fiscal year 2013. Agriculture’s two selected subagencies, Food and Nutrition Service (FNS) and Rural Development (RD), did not effectively design policies and procedures to reasonably assure that award recipients submitted their single audit reports timely. Agriculture’s approach for single audit oversight is decentralized, and the Office of the Chief Financial Officer (OCFO) is responsible for providing departmental policy to the subagencies. 
Agriculture’s subagencies, such as FNS, are responsible for obtaining single audit reports and resolving single audit findings. FNS did not effectively design policies and procedures to reasonably assure that award recipients submitted single audit reports timely. FNS had policies and procedures that reference OMB Circular No. A-133 single audit guidance, but these policies and procedures were not effectively designed to identify award recipients that should have submitted single audit reports, verify that the award recipients that were required to submit single audit reports actually submitted the reports, determine whether the single audit reports were submitted within the required time frames, and take action to obtain the single audit reports when award recipients did not submit the reports within the required time frames. According to FNS’s policies and procedures included in the FNS Audit Manual, the Agriculture OCFO receives a copy of single audit reports with findings related to Agriculture’s programs. If the OCFO determines that the single audit reports include award recipients that have received awards from FNS, the OCFO forwards those single audit reports to FNS. In addition, FNS’s policies and procedures state that the regional offices should periodically check the FAC database to determine if single audit reports have been submitted. Specifically, according to FNS officials, the single audit liaison checks the FAC for single audit reports and notifies the regional offices that single audits are available for download from the FAC. While these policies and procedures allow FNS to identify the award recipients that have submitted single audit reports, including the date when each single audit report was submitted, the policies and procedures do not allow FNS to reasonably assure that award recipients that have not submitted their single audit reports do so by the required due dates. 
An FNS official stated that the agency does not maintain a list of award recipients that should have submitted single audit reports to the FAC but did not. In addition, the FNS official stated that its award recipients are primarily state agencies and that FNS receives a statewide single audit report once annual statewide audits are completed. As a result, according to an FNS official, FNS would be aware when a state has not submitted its single audit report. FNS also makes awards to Indian tribal organizations. We reviewed data in the FAC database and found that 110 Indian tribal organizations received over $500,000 in federal awards from FNS and that all of them had submitted single audit reports for their 2013 fiscal year-end. With award recipients from up to 50 states and numerous Indian tribal organizations, it is unclear how FNS officials would reasonably assure that award recipients have submitted each single audit report in the time frame required without a tracking system or list that identifies the award recipients, the fiscal year-end for each, and required due dates for single audit report submission. Thus, FNS’s policies and procedures were not effectively designed to identify award recipients that should have submitted single audit reports, verify that the award recipients that were required to submit single audit reports actually submitted them, determine whether the award recipients that were required to submit single audit reports did so within the required time frames, and take action to obtain single audit reports when award recipients did not submit the single audit reports within the required time frames. An FNS official stated that it is the award recipient’s responsibility, not FNS’s, to ensure that it files a single audit report. However, OMB Circular No. 
A-133 requires award recipients to ensure that the audits are performed and submitted when due, and federal awarding agencies to ensure that single audit reports are received timely. An Agriculture official acknowledged that independently verifying if single audit reports are missing from the FAC database would be a major challenge for the subagencies because of their reliance on award recipients to self-report. Nonetheless, OMB Circular No. A-133 requires that federal awarding agencies ensure that award recipients complete and submit single audit reports within the required time frames. RD did not effectively design policies and procedures to reasonably assure that award recipients submitted single audit reports timely. RD had policies and procedures that made reference to OMB Circular No. A-133; however, we found that RD’s policies and procedures were not effectively designed to identify award recipients that should have submitted single audit reports, verify that the award recipients that were required to submit single audit reports actually submitted the reports, determine whether the award recipients that were required to submit single audit reports submitted the reports within the required time frames, and take action to obtain single audit reports when award recipients did not submit the reports within the required time frames. RD’s policies and procedures included in its Financial Management OMB Circular A-133 Standard Operating Procedures call for RD to run queries on information stored in the FAC database to determine which single audit reports have been submitted; however, they did not contain policies and procedures for identifying the award recipients that met the threshold and were required to submit single audit reports. An RD official stated that RD is aware of the dollar amount of funds that it provides to award recipients and whether those funds met the threshold. 
Officials in RD’s Financial Management Division reported that one of RD’s program divisions relies on award recipients to undergo single audits and submit the single audit reports to the FAC when total expenditures of federal awards exceed the threshold. However, similar to FNS, while these policies and procedures allow RD to identify award recipients that have submitted single audit reports, they do not reasonably assure that RD identifies award recipients that should have submitted single audit reports by the required due dates but did not. An Agriculture official acknowledged that independently verifying that single audit reports are missing from the FAC would be a major challenge for the subagencies because they rely on the award recipients to self-report. Nonetheless, OMB Circular No. A-133 requires that federal awarding agencies ensure that award recipients complete and submit single audit reports within the required time frames. Education effectively designed policies and procedures to reasonably assure that award recipients completed and submitted single audit reports timely. Education has a centralized organizational structure for overseeing single audits. As a result, the policies and procedures designed to reasonably assure that award recipients submitted single audit reports on time at the two selected subagencies—the Office of Elementary and Secondary Education (OESE) and the Office of Special Education and Rehabilitative Services (OSERS)—are the same. Education’s Post Audit Group (PAG) under the OCFO has department- wide policies and procedures for single audit oversight and audit resolution. PAG has effectively designed policies and procedures to identify and follow up on single audit reports that have not been submitted to the FAC within the required time frames. PAG also processes and distributes single audit reports to Education’s principal offices for resolution. 
Education’s Office of the Chief Financial Officer Guidance Memorandum calls for PAG to take steps to help program offices identify award recipients that have not submitted their single audit reports in a timely manner. The guidance memorandum requires that in September of each fiscal year, Education’s Risk Management Service prepare a list of all of Education’s award recipients that have expended funds over the threshold dollar amount in a prior year. The guidance memorandum requires the Risk Management Service to compare the list of award recipients against the FAC database to determine whether audit reports were completed and submitted to the FAC within 9 months. According to its policies and procedures, the Risk Management Service is to then create a list of award recipients that did not submit the required single audit reports and send the list to PAG to identify the award recipients that are to be notified. PAG’s procedures then call for PAG to send a notification letter requesting that the award recipient submit the required single audit report or provide information as to why it did not submit the report or is not required to do so. If there is no response or no single audit report is provided within 30 business days, in accordance with PAG’s procedures, PAG is to send a second letter to the award recipient. If the award recipient still fails to respond by submitting the single audit report or explaining why it is not required to do so, PAG’s procedures call for program offices to make decisions about follow-up action. These actions may include imposing special conditions that direct the award recipient to submit a single audit report by a certain date or its plan to obtain an audit by a certain date, declaring an award recipient as “high risk” and imposing special conditions that place the recipient on a cost-reimbursement payment basis until it submits a single audit report or provides a plan for obtaining an audit, or withholding funds from the award recipient.
In addition, audit follow-up reflects a key component of Education’s risk management strategy in the U.S. Department of Education Strategic Plan for Fiscal Years 2014–2018. According to Education’s OCFO officials, Education performed these procedures annually in fiscal year 2013 but began performing them quarterly at the time of our audit. HHS’s two selected subagencies, the Audit Resolution Division (ARD) and the Centers for Medicare and Medicaid Services (CMS), did not effectively design policies and procedures to reasonably assure that award recipients submitted single audit reports timely. HHS has a decentralized approach in which the Office of the Assistant Secretary for Financial Resources sets policy for single audit oversight, which is carried out by ARD, HHS’s operating divisions (such as CMS), and the Office of Inspector General (OIG). While ARD has a key role in HHS’s single audit oversight process, it does not award federal funds to nonfederal entities. ARD did not effectively design policies and procedures to reasonably assure that award recipients submitted single audit reports timely. ARD had detailed policies and procedures for conducting single audit oversight; however, ARD’s policies and procedures were not effectively designed to identify award recipients that should have submitted single audit reports, verify that the award recipients that were required to submit single audit reports actually submitted the reports, and determine whether the award recipients that were required to submit single audit reports submitted the reports within the required time frames. Based on a review of ARD’s policies and procedures, we determined that ARD is not responsible for taking action to obtain single audit reports when award recipients did not submit the single audit reports within the required time frames, as shown in figure 2.
ARD is to provide each HHS operating division with a list of the award recipients that are potentially delinquent in submitting single audit reports to the FAC. Each operating division is responsible for following up with its award recipients and updating ARD quarterly on the status of the potentially delinquent single audit reports. ARD did not effectively design policies and procedures to identify award recipients that should have submitted single audit reports. ARD’s policies and procedures include the Audit Resolution Division Manual (ARD Manual) and the 2011 Annual Delinquent Audit Process Detailed Instructions. These documents outline ARD’s responsibility for annually performing a delinquent audit review to identify award recipients that should have submitted single audit reports as required by OMB Circular No. A-133 but did not. However, these policies and procedures do not reasonably assure that the delinquent audit review is performed annually. We found that as of May 2016, ARD had not performed the fiscal year 2013 delinquent audit review. In addition, ARD had not updated the 2011 Annual Delinquent Audit Process Detailed Instructions for the fiscal year 2013 delinquent audit review. In May 2016, an ARD official stated that the reason the delinquent audit review for fiscal year 2013 had not been completed was that ARD found errors with the information for identifying award recipients that received over $500,000 in federal awards during fiscal year 2013. The ARD Manual states that the Program Support Center is to query the Payment Management System to identify award recipients that received $500,000 or more in federal awards and compare the results of the query against the FAC to verify whether award recipients submitted single audit reports. However, the ARD Manual did not contain a requirement to ensure that the query was complete and accurate.
As a result, when the query was run, the list that identified award recipients for fiscal year 2013 that received $500,000 or more in federal awards did not include CMS award recipients because, according to an ARD official, the query contained CMS’s prior name, the Health Care Financing Administration. CMS awarded the largest dollar amount of HHS’s reported grant outlays to states and local governments for fiscal year 2013. Once ARD noticed that the data were inaccurate, ARD worked with the Program Support Center to change the query to include CMS’s name in future queries. An HHS official stated that the policies and procedures to incorporate changes related to the query had not been updated as of May 2016 and that HHS was working on revising the policies and procedures in this area. However, it is unclear whether the revised policies and procedures relating to single audits would be effectively designed to address our audit finding. In addition, the ARD Manual did not contain policies and procedures for determining whether the award recipients that were required to submit single audit reports submitted the reports within the required time frames. Thus, ARD’s policies and procedures were not effectively designed to identify award recipients that should have submitted single audit reports, verify that the award recipients that were required to submit single audit reports actually submitted the reports, and determine whether the award recipients that were required to submit single audit reports did so within the required time frames. CMS did not effectively design policies and procedures to reasonably assure that it takes action when award recipients did not submit single audit reports within the required time frames.
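The kind of reconciliation and completeness check described above can be illustrated with a minimal sketch. All recipient names, operating divisions, and dollar amounts below are invented for illustration; this is not ARD's actual query or system.

```python
# Hypothetical sketch (all names and figures invented) of a delinquent
# audit review: identify recipients at or above the $500,000 threshold,
# check them against FAC submissions, and, as the missing control in the
# ARD Manual would have required, verify that every operating division
# is represented in the query results.

THRESHOLD = 500_000

# Simulated query output from a payment system:
# (recipient, operating division, federal awards received)
payment_records = [
    ("State A", "CDC", 1_200_000),
    ("State B", "NIH", 750_000),
    ("County C", "CDC", 300_000),
    # Note: no CMS rows, mirroring the outdated-name query error.
]

expected_divisions = {"CDC", "NIH", "CMS"}

# Completeness check: flag any division missing from the query output.
divisions_in_query = {division for _, division, _ in payment_records}
missing_divisions = expected_divisions - divisions_in_query

# Recipients required to undergo a single audit under the threshold.
required = {r for r, _, awards in payment_records if awards >= THRESHOLD}

# Simulated FAC submissions on file.
fac_submissions = {"State A"}

# Recipients that should have submitted reports but have not.
delinquent = required - fac_submissions

print(sorted(missing_divisions))  # divisions the query silently dropped
print(sorted(delinquent))         # recipients with no FAC submission
```

In this sketch the completeness check surfaces the dropped division before the delinquency list is trusted, which is the gap the report identifies in the ARD Manual.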
We found that ARD is responsible for identifying award recipients for each HHS operating division that should have submitted single audit reports, verifying that the award recipients that were required to submit single audit reports actually submitted the reports, and determining whether the award recipients that were required to submit single audit reports did so in a timely manner. As a result, we did not assess the design of CMS’s policies and procedures for those activities, as shown in figure 2. We found that CMS’s policies and procedures for single audit oversight consisted of older manuals. Specifically, the policies and procedures that CMS officials provided consisted of documents such as the Grants Administration Manual and the 1982 Health Care Financing Administration Audit Resolution Manual—both of which were issued before the 1984 enactment of the Single Audit Act. Accordingly, the manuals do not reflect requirements related to the Single Audit Act, OMB Circular No. A-133, or HHS’s Grants Policy Directive 4.01. CMS officials acknowledged that their policies and procedures in this area were old. In addition, HHS officials acknowledged that one of the most significant challenges with single audit oversight has been the lack of standardization among the operating divisions’ policies and procedures. In May 2016, during our audit, HHS officials stated that they were working on updating single audit policies and procedures. However, it is unclear whether the updated single audit policies and procedures would address the specific findings related to CMS’s policies and procedures that we identified during our audit. HUD’s two selected subagencies, the Office of Community Planning and Development (CPD) and the Office of Public and Indian Housing (PIH), effectively designed policies and procedures to reasonably assure that award recipients completed and submitted single audit reports timely.
HUD’s single audit oversight process is decentralized, and HUD’s OCFO provides general departmental policies and procedures to the department’s program offices, such as CPD and PIH. HUD has policies and procedures that describe responsibilities of and processes for program offices in reviewing single audit reports. Each HUD program office is responsible for ensuring that award recipients submit single audit reports according to the time frames outlined in OMB Circular No. A-133. CPD effectively designed policies and procedures to reasonably assure that award recipients completed and submitted single audit reports timely. According to CPD’s Clarifying Guidance to CPD Field Offices on Single Audit Act Requirements, field offices are expected to maintain a list of award recipients. CPD’s policies and procedures call for each field office to use its list to identify award recipients that are required to submit single audit reports based on the $500,000 expenditure threshold and to compare the list to the FAC database to determine whether single audit reports have been completed and submitted within the required time frame. According to CPD’s guidance, if an audit has not been submitted, the field office is required to follow up in writing with the award recipient to determine the date of anticipated audit completion and submission of the needed information to the FAC database. CPD provided an example of a tracking log from fiscal year 2013 that it used to document missing audit reports, including the dates that it contacted award recipients about missing reports. PIH effectively designed policies and procedures to reasonably assure that award recipients completed and submitted single audit reports timely. PIH developed standard operating procedures for the field offices to help ensure that single audit reports are submitted.
These procedures include (1) monitoring the PIH Real Estate Assessment Center’s (REAC) website, which is designed to allow award recipients to provide their financial statements to PIH prior to submission deadlines, and determining whether the expenditures exceeded the single audit threshold; (2) contacting the award recipients to help ensure that single audit reports are submitted; and (3) checking the FAC to determine whether the required reports have been submitted. The REAC established a system wherein award recipients electronically submit information such as financial statements, audit reports, and audit findings into a data warehouse. In addition, PIH’s IPA Assessment Review states that PIH is to perform on-site reviews of the field offices’ internal controls over single audits, and it provides a checklist for these on-site reviews. PIH’s IPA Assessment Review included steps for reviewing the field offices’ internal controls (tracking logs) for ensuring submission of single audit reports. One of Transportation’s two selected subagencies, the Federal Highway Administration (FHWA), effectively designed policies and procedures to reasonably assure that award recipients completed and submitted single audit reports timely. However, the other selected subagency at Transportation, the Federal Transit Administration (FTA), did not effectively design policies and procedures to reasonably assure that award recipients completed and submitted single audit reports timely. Transportation’s single audit oversight process is decentralized, with oversight responsibilities divided among the Assistant Secretary for Administration; the OIG; and each operating authority, such as FHWA and FTA. Transportation’s Assistant Secretary for Administration is responsible for issuing guidance, including guidance related to single audits. Each operating authority is responsible for ensuring that award recipients submit single audit reports according to the time frames outlined in OMB Circular No.
A-133 and Transportation’s department-wide guidance. FHWA effectively designed policies and procedures to reasonably assure that award recipients completed and submitted single audit reports timely. Transportation’s Office of the Senior Procurement Executive’s Financial Assistance Guidance Manual states that each operating authority, such as FHWA, is responsible for ensuring that award recipients undergo single audits and distribute the audit reports timely. The guidance manual also requires each operating authority to maintain tracking mechanisms for recording the receipt of audit reports. In addition, FHWA’s Financial Integrity Review and Evaluation (FIRE) Program Tool Kit (FIRE Tool Kit) states that FHWA officials are to determine when an audit report is due and use the FAC to ensure that the audit has been submitted. The FIRE Tool Kit includes a list of the states and their respective single audit reporting due dates. If an award recipient does not submit its single audit report by the due date, the guidance manual outlines the sanctions that can be taken until the report is received. FTA did not effectively design policies and procedures to reasonably assure that award recipients submitted single audit reports timely. FTA had policies and procedures that reference OMB Circular No. A-133 single audit guidance, but these policies and procedures were not effectively designed to identify award recipients that should have submitted single audit reports, verify that the award recipients that were required to submit single audit reports actually submitted the reports, determine whether the award recipients that were required to submit single audit reports submitted the reports within the required time frame, and take action to obtain single audit reports when award recipients did not submit the reports within the required time frame.
FTA’s Grants A to Z Standard Operating Procedures states that each FTA regional office is responsible for ensuring that single audit reports are submitted timely and for identifying award recipients that are required to submit single audit reports, either through the review of the FAC database or by requesting report copies directly from award recipients. FTA regional offices are also required to follow up in writing with award recipients that have not provided single audit reports as required. While FTA’s policies and procedures allow it to ascertain which award recipients have submitted single audit reports and the dates of submission, the standard operating procedures document does not provide procedures on how regional offices should identify and track award recipients that should have submitted single audit reports by the required due dates but did not. According to an FTA official, FTA provides (1) additional guidance to its regional offices by holding bimonthly single audit training and guidance teleconferences and (2) customer support on a one-on-one basis when requested. Transportation officials acknowledged that single audits are important and that there are a number of opportunities to improve the department’s oversight. The selected subagencies in Agriculture, HHS, HUD, and Transportation did not effectively design policies and procedures to reasonably assure that management decisions were prepared with the required content and issued within 6 months of receipt of the single audit report for each single audit finding, as required by OMB Circular No. A-133. Both of the selected subagencies in Education effectively designed policies and procedures to reasonably assure that management decisions were prepared with the required content and issued within 6 months of receipt of the single audit report for each single audit finding. OMB Circular No. 
A-133 requires federal agencies to issue written management decisions on each of the single audit findings that clearly state (1) whether the agency sustains the audit finding; (2) the reasons for the decision; (3) the expected award recipient action to repay disallowed costs, make financial adjustments, or take other action; (4) a timetable for follow-up if the award recipient has not completed corrective action; (5) the appeals process available to the award recipient on the federal awarding agency’s management decision; and (6) the reference numbers the auditor assigned to each single audit finding. OMB Circular No. A-133 also requires the management decision to be issued within 6 months of receipt of a recipient’s single audit report. To reasonably ensure that award recipients take action to correct single audit findings, management decisions are to describe the corrective actions that federal agencies consider necessary based on their evaluation of the single audit findings and the award recipient’s plan to correct the single audit findings, according to OMB Circular No. A-133. Since federal agencies are responsible for ensuring that the award recipients take appropriate and timely corrective action, it is important for agency management to clearly communicate the agency’s expectations and time frames for action through management decisions. Management decision letters provide award recipients with written notification of the agency’s position. Without timely notification that contains all of the OMB requirements for written management decisions, award recipients may be unclear about the agency’s position on the single audit findings and what corrective actions, if any, they need to take to address the single audit findings. In turn, single audit findings may not be corrected, thus hampering oversight of federal awards.
Figure 3 provides an overview of our assessment of the design of selected subagencies’ policies and procedures to reasonably assure that management decisions met the requirements in OMB Circular No. A-133 and were issued timely. Agriculture’s two selected subagencies, FNS and RD, did not effectively design policies and procedures to reasonably assure that they prepared management decisions with the required content for each single audit finding and issued such management decisions within 6 months of receipt of the single audit reports, as required by OMB Circular No. A-133. Agriculture’s single audit oversight is decentralized, and the OCFO is responsible for providing departmental policy to the subagencies. Each subagency is responsible for issuing management decisions. FNS did not effectively design policies and procedures to reasonably assure that it prepared management decisions with the required content for each single audit finding and issued such management decisions within 6 months of receipt of the single audit reports, as required by OMB Circular No. A-133. The FNS Audit Manual requires regional offices to (1) evaluate single audit findings and the award recipients’ plans to correct single audit findings, (2) develop and communicate written management decisions on award recipients’ corrective action plans, and (3) issue management decisions within 6 months of receipt of the single audit reports. However, the audit manual does not require management decisions to include all of the criteria for the content of written management decisions identified in OMB Circular No. A-133, such as a clear statement of whether a single audit finding is sustained, the reasons for the management decision, and a description of the appeals process available to the award recipient on the federal agency’s decision.
FNS has policies and procedures for issuing a management decision within 6 months after a single audit report is received; however, the policies and procedures were not effectively designed to reasonably assure that the management decisions are issued as required. Specifically, FNS’s policies and procedures describe its Automated Tracking System, where FNS personnel enter single audit report tracking information, such as open and closed audits, disallowed costs, and the dates management decisions were issued. The Automated Tracking System generates a report that identifies audit reports for which management decisions have not been issued more than 120 days (about 4 months) after issuance of the audit report. However, the FNS Audit Manual does not include policies and procedures to reasonably assure that action is taken on the information in the Automated Tracking System so that management decisions are issued within 6 months of receipt of the audit reports. An FNS official stated that other than its audit manual, FNS has not provided the regional offices with additional guidance about management decisions. The official also stated that single audit reports are not very helpful to FNS because while some of these reports are very comprehensive, the single audit findings are typically general in nature and very rarely specifically address FNS programs. According to an FNS official, the regional offices are expected to resolve FNS-specific single audit findings. However, according to FNS officials, there is no standard procedure for how the regional offices are to contact the states to resolve the issues. An FNS official stated that the regional offices conduct Management Evaluations and Financial Management Reviews of award recipients. These reviews provide periodic assessments to help ensure that award recipients operate audit management programs that ensure compliance with applicable federal requirements.
The FNS official further stated that FNS finds these assessment tools more useful for oversight purposes than single audit reports because they help FNS to identify specific programs that may need additional oversight. FNS officials also stated that single audits have not been the top priority for FNS because the quality and usefulness of the single audit reports are inconsistent, and therefore these reports are not beneficial oversight tools. However, the Single Audit Act requires federal awarding agencies to use single audits to conduct oversight of the federal funds that they award and provides for single audits to play a key role in achieving federal accountability objectives and ensuring that funds are used for authorized purposes and that risks of fraud, waste, and abuse are mitigated. Without effectively designed policies and procedures for issuing management decisions related to single audit findings, FNS cannot reasonably assure that the management decisions are issued timely and contain all of the required elements. For example, we reviewed a nongeneralizable sample of 16 FNS single audit findings and found that FNS issued management decision letters for 3 of the 16 single audit findings and that none of the letters contained all of the required elements listed in OMB Circular No. A-133, nor were any issued within 6 months of receipt of the single audit reports. The 3 management decisions were issued from 8 to 11 months after receipt of the single audit reports.
RD had policies and procedures in its Instruction A-2012 Manual that call for its regional offices to (1) evaluate single audit findings and the award recipients’ plans to correct single audit findings and (2) develop and communicate written management decisions on the award recipients’ corrective action plans. In addition, RD’s Financial Management OMB Circular A-133 Standard Operating Procedures describe its Automated Reports Tracking System, where RD personnel enter single audit report information for tracking, such as the single audit report number, audit finding code, action the auditee will take to correct the single audit findings, and the management decision date as defined by RD. However, its policies and procedures did not require all of the content listed in OMB Circular No. A-133 for management decisions, including whether the single audit finding was sustained, the reasons for the decision, a description of the appeals process available to the award recipient on the federal agency’s decision, and the reference numbers the auditor assigns to each single audit finding. In addition, the policies and procedures do not include a requirement that management decisions are to be issued within 6 months after the single audit reports are received. Officials in RD’s Financial Management Division said that the state offices are in contact with the award recipients periodically, but do not usually send management decision letters to award recipients. In addition, RD’s policies and procedures use a different definition for management decisions than stated in OMB Circular No. A-133. RD’s policies and procedures state that RD provides a transmittal letter to the responsible party and that the responsible party is to provide all necessary documentation that verifies that corrective actions have been taken to address the auditor’s findings.
RD’s policies and procedures describe the management decision date as the date that the award recipient responds to RD’s transmittal letter. OMB Circular No. A-133, however, provides that a management decision is the evaluation by the federal awarding agency of the single audit finding and corrective action plan and the issuance of a written decision as to what corrective action is necessary, and also states that the entity responsible for issuing a management decision shall do so within 6 months of receipt of the single audit report. We reviewed supporting documentation for a nongeneralizable sample of 12 single audit findings and found that RD issued transmittal letters for all 12 of the single audit findings. However, these transmittal letters did not provide evaluations of the single audit findings and corrective action plans and written decisions as to what corrective actions are necessary, as required by OMB Circular No. A-133. Without effectively designed policies and procedures for issuing management decisions related to single audit findings, RD cannot reasonably assure that the management decisions are issued timely and contain all required elements. Education effectively designed policies and procedures to reasonably assure that it prepared management decisions with the required content for each single audit finding and issued such management decisions within 6 months of receipt of the single audit reports, as required by OMB Circular No. A-133. Education’s action officials are responsible for determining the action to be taken in issuing management decision letters within the 6-month time frame required by OMB Circular No. A-133. The policies and procedures related to management decisions on the award recipients’ plans to correct single audit findings at the two selected subagencies—OESE and OSERS—are the same. Education’s Handbook for the Post Audit Process provides elements for the management decision letters that met the OMB Circular No. 
A-133 content requirements. These elements include (1) whether Education agrees with the audit finding; (2) the reasons for the decision; and (3) the expected auditee action to repay disallowed costs, make financial adjustments, or take other action. In addition, Education’s policies and procedures require its Audit Accountability and Resolution Tracking System to mark as overdue any management decision letter not issued within 6 months. According to the handbook, tracking reports generated from the database that identify overdue or potentially overdue audit reports are sent to the action officials at the end of any quarter following the end of the 6-month resolution period. According to Education officials, at the time of our audit PAG was responsible for completing and updating monthly audit dashboards that include a metric related to the percentage of audits that are resolved timely. The dashboards include a variety of metrics for single audit reports and are to be distributed to all of Education’s program offices and reviewed by Education’s senior management. Despite these policies and procedures, Education officials stated that for fiscal year 2013 single audit reports, some management decisions for OSERS and OESE were not issued timely. We reviewed a nongeneralizable sample of 42 Education single audit findings from fiscal year 2013 and found that all of the written management decisions included the content required by OMB Circular No. A-133. However, we found that most of the letters were issued from 7 to 18 months after receipt of the single audit reports. According to an Education official, in fiscal year 2013, Education did not have mechanisms to hold the program offices accountable for not issuing management decisions timely, but subsequently Education developed an organizational performance review that includes timely audit resolution in its metrics, which has served to increase accountability.
In addition, Education’s fiscal year 2015 agency financial report stated that Education has strengthened controls over audit follow-up to ensure more timely resolution, correction, and closure of audit findings; continues to show significant improvements in timely audit resolution; and remains focused on working cooperatively with award recipients to address the most complex and repeat findings. Audit follow-up also reflects a key component of Education’s risk management strategy in the U.S. Department of Education Strategic Plan for Fiscal Years 2014–2018. HHS’s two selected subagencies, ARD and CMS, did not effectively design policies and procedures to reasonably assure that they prepared management decisions with the required content for each single audit finding and issued such management decisions within 6 months of receipt of the single audit reports, as required by OMB Circular No. A-133. According to an official from the HHS OIG, the OIG reviews single audit reports and assigns single audit findings to operating divisions, such as CMS, for resolution. Operating divisions are responsible for issuing management decisions for single audit findings assigned by the OIG. The OIG also assigns single audit findings that affect more than one operating division—crosscutting findings—to ARD for resolution, and ARD is to work with applicable operating divisions to issue management decisions. An HHS official acknowledged that one of the most significant challenges with single audit oversight has been the lack of standardization and consistent interpretation of the policies and procedures. An HHS official further stated that some operating divisions were interpreting the management decision guidance differently.
Specifically, the official said that some operating divisions thought that the management decision was only sent when the audit resolution official determined that the corrective action was complete and acceptable, while others sent management decision letters as soon as they had evaluated the nature of the finding and decided what the corrective action would be, with the goal of accomplishing this within 6 months. In other cases, the award recipient had already provided evidence that the single audit finding was resolved. HHS officials stated that the operating divisions have the same understanding now and that it was agreed that the decisions would be based on the initial opinion of the single audit finding and planned corrective action, and that these decisions would need to be issued within 6 months. In addition, an HHS official stated that HHS’s Single Audit Resolution work group recently developed guidance for issuing and handling management decisions for single audit findings that will apply to the entire department. At the time of our audit, the Deputy Assistant Secretary and the Deputy Chief Financial Officer were reviewing this guidance. In addition, ARD officials told us that HHS plans to transition single audit finding activities from the OIG to ARD. They also stated that ARD has established an overall plan to streamline processes that will enhance its audit resolution processes and reporting and that the implementation of an enterprise-wide audit resolution system that will automate audit resolution processes will be a major process enhancement. However, it is unclear whether the updated single audit guidance or processes will address our audit finding by ensuring that policies and procedures are effectively designed to reasonably assure that management decisions with the required content for each single audit finding are prepared and issued within 6 months of receipt of the single audit reports, as required by OMB guidance.
ARD did not effectively design policies and procedures to reasonably assure that it prepared management decisions with the required content for each single audit finding and issued such management decisions within 6 months of receipt of the single audit reports, as required by OMB Circular No. A-133. The ARD Manual contains ARD’s policies and procedures for issuing management decisions and states that staff should issue management decisions after they have completed their review of single audit reports and the award recipients’ corrective action plans. In addition, the manual provides an example of a management decision that staff should use when communicating the results of ARD’s review of the single audit finding and the corrective action plan to the award recipient. However, neither the ARD Manual nor the management decision example contains all of the required OMB Circular No. A-133 elements, such as the reason for the decision, a timetable for follow-up that should be performed if the award recipient has not completed corrective action, and a description of any appeal process available to the award recipient. In addition, the manual does not specifically state that the management decision letter should be issued within 6 months of receipt of the single audit report as required. According to an HHS official, ARD issues management decisions after the HHS OIG sends ARD a letter of single audit findings for resolution. OMB Circular No. A-133 states that the awarding agency shall issue a management decision on single audit findings within 6 months after the receipt of the audit report. Without effectively designed policies and procedures for issuing management decisions, ARD cannot reasonably assure that the management decisions are issued timely and contain all required elements. 
For example, we reviewed a nongeneralizable sample of 10 ARD single audit findings from fiscal year 2013 and found that ARD issued management decisions for all 10 single audit findings, but they did not contain the required elements listed in OMB Circular No. A-133. We also found that ARD issued 6 of the 10 management decisions within the required 6 months of the single audit report being available in the FAC. ARD issued the other 4 management decisions from 7 to 9 months after the reports were available in the FAC. According to an HHS official, there can be delays between the time the OIG obtains the reports from the FAC and assigns the single audit findings to ARD and other operating divisions. For the 4 management decisions, we reviewed the time period between the date that the OIG assigned the single audit finding to ARD and the date ARD issued the management decision. ARD issued the management decisions from 3 to 5 months after the OIG assigned the single audit findings to ARD. ARD officials stated that ARD was currently in a transition period, as it is taking on some of the activities that were performed by the OIG and is revising its audit resolution policies and procedures to accommodate the transition. CMS did not effectively design policies and procedures to reasonably assure that it prepared management decisions with the required content for each single audit finding and issued such management decisions within 6 months of receipt of the single audit reports, as required by OMB Circular No. A-133. According to a CMS official, CMS’s policies and procedures relating to single audit oversight include its 1982 Health Care Financing Administration Manual. This manual contains the policies and procedures for resolving CMS single audit findings on grants, contracts, and cooperative agreements, and for controlling the audit resolution process and CMS’s departmental audit resolution policies and requirements. 
According to a CMS official, CMS issued its policies and procedures for the resolution of audit findings, which contained elements found in OMB Circular No. A-73, Audit of Federal Operations and Programs. However, OMB Circular No. A-73 was rescinded in 1995, and CMS had not updated its policies and procedures for management decisions as of May 2016. Without effectively designed policies and procedures for issuing management decisions, CMS cannot reasonably assure that the management decisions are issued timely and contain all required elements. For example, we reviewed a nongeneralizable sample of 18 CMS single audit findings from fiscal year 2013 and found that CMS issued a management decision for only 1 of the 18 single audit findings that we reviewed. This management decision contained the content required by OMB Circular No. A-133 and was issued within 6 months of receipt of the single audit report. CMS officials told us that in the past, CMS has communicated management decisions through telephone conversations or via e-mail to the award recipients. CMS provided us with e-mail correspondence between CMS and the award recipients for 13 of the 18 single audit findings. However, the e-mails related to requesting additional information from the award recipients and did not contain the content elements required by OMB Circular No. A-133 for management decisions; accordingly, they do not constitute management decisions. According to CMS officials, at the time of our audit CMS was collaborating with the other HHS operating divisions to update their policies and procedures. However, it is unclear whether the updated single audit guidance would address our audit finding by ensuring that policies and procedures are effectively designed to reasonably assure that management decisions with the required content for each single audit finding are prepared and issued within 6 months of receipt of the single audit reports, as required by OMB guidance. 
HUD’s two selected subagencies, CPD and PIH, did not effectively design policies and procedures to reasonably assure that they prepared management decisions with the required content for each single audit finding and issued such management decisions within 6 months of receipt of the single audit reports, as required by OMB Circular No. A-133. HUD’s OCFO provides general departmental policies and procedures to its program offices, such as CPD and PIH. HUD’s OCFO developed the Audits Management System Handbook that describes procedures and responsibilities for HUD’s program offices to review single audit reports and issue management decisions. Each HUD program office is responsible for ensuring that single audit findings are resolved and management decisions are issued as set forth in OMB Circular No. A-133. HUD’s senior accountable official for single audits stated that the decentralized approach for single audit oversight that HUD currently uses is a consequence of the HUD OIG transferring audit resolution activities to HUD program offices. The official further stated that the program offices did not have the processes and procedures or experience needed to take on responsibility for audit resolution. The official also stated that HUD is working on a proposal to move to a centralized approach for single audit oversight to better manage HUD’s compliance with OMB Circular No. A-133 but will need additional staffing to implement the proposed approach. CPD did not effectively design policies and procedures to reasonably assure that it prepared management decisions with the required elements for each single audit finding and issued such management decisions within 6 months of receipt of the single audit reports, as required by OMB Circular No. A-133. 
HUD/CPD’s Clarifying Guidance to CPD Field Offices on Single Audit Act Requirements requires that written management decisions on all CPD-specific single audit findings be communicated to the award recipients within 6 months of receipt of the single audit. It also lists all the management decision content requirements of OMB Circular No. A-133. However, CPD’s and HUD’s policies and procedures were not effectively designed to reasonably assure that management decisions are actually prepared with the required content and issued timely, because they did not include steps to help ensure that the policies and procedures were actually carried out. For example, neither CPD’s nor HUD’s policies and procedures identify steps, such as reviews by management at the functional or activity levels, to help reasonably assure that management decisions are issued timely. We reviewed a nongeneralizable sample of 18 CPD single audit findings from fiscal year 2013 and found that CPD issued management decisions for all 18 single audit findings, but only 4 of the management decisions contained all of the required content elements of OMB Circular No. A-133. Furthermore, management decisions for 10 of the 18 single audit findings were not issued within 6 months of receipt of the single audit reports; they were issued from 7 to 19 months after receipt of the single audit reports. Without effectively designed policies and procedures for issuing management decisions, CPD cannot reasonably assure that the decisions are issued timely and contain all required elements. PIH did not effectively design policies and procedures to reasonably assure that it prepared management decisions with the required content for each single audit finding and issued such management decisions within 6 months of receipt of the single audit reports, as required by OMB Circular No. A-133. 
PIH’s Memorandum for PIH Hub Directors and Program Coordinators (Field Guidance on Single Audit Act Requirements) did not include all of the management decision letter content required by OMB Circular No. A-133, such as the requirement that the management decision describe the appeals process available to the recipient or include the auditor’s assigned reference number for the single audit finding. In addition, while PIH’s policies and procedures call for management decisions to be issued within 6 months of receipt of the single audit report, such policies and procedures did not include steps to help ensure that they are actually carried out. For example, the policies and procedures do not identify steps, such as reviews by management at the functional or activity levels, to help reasonably assure that management decisions are issued timely. We reviewed a nongeneralizable sample of 19 PIH single audit findings from fiscal year 2013 and found that PIH issued management decision letters for 14 of the 19 single audit findings, but none of them contained all of the content required for management decisions, as specified in OMB Circular No. A-133. For 4 of the 14 single audit findings, the management decisions were not issued within 6 months of receipt of the single audit reports; they were issued about 11 months after receipt. Without effectively designed policies and procedures for issuing management decisions, PIH cannot reasonably assure that the management decisions are issued timely and contain all required elements. Transportation’s two selected subagencies, FHWA and FTA, did not effectively design policies and procedures to reasonably assure that they prepared management decisions with the required content for each single audit finding and issued such management decisions within 6 months of receipt of the single audit reports, as required by OMB Circular No. A-133. 
Each operating administration is responsible for issuing management decisions and ensuring that award recipients take action to correct single audit findings. Accordingly, each operating administration may have policies and procedures related to issuing management decisions that differ from those included in Transportation’s Financial Assistance Guidance Manual, which provides procedural guidance to be followed in the department’s award and monitoring of financial assistance. The Office of the Deputy Secretary for Administration recognized the need to strengthen Transportation’s actions relating to single audit recommendations. However, as discussed below, we found deficiencies relating to the management decision requirements in OMB Circular No. A-133 at the two selected subagencies—FHWA and FTA. FHWA did not effectively design policies and procedures to reasonably assure that it prepared management decisions with the required elements for each single audit finding and issued such management decisions within 6 months of receipt of the single audit reports, as required by OMB Circular No. A-133. Transportation had policies and procedures relating to single audit findings, and FHWA had policies and procedures requiring the issuance of a management decision within 6 months of receipt of the single audit report, as required by OMB Circular No. A-133. While FHWA’s policies and procedures include an example of a management decision, they did not specifically require the elements of management decisions specified in OMB Circular No. A-133. FHWA’s FIRE Tool Kit states that division offices are responsible for developing and issuing management decision letters and contains an example of a management decision. This example includes information related to the single audit report, a reference to the single audit finding, and FHWA’s assessment of the corrective action proposed by the award recipient. 
However, neither the FIRE Tool Kit nor the example of the management decision describes the elements that OMB Circular No. A-133 requires be included in the management decision, such as a statement indicating whether FHWA agrees with the single audit finding, the reasons for its decision, and a timetable for actions if the award recipient has not completed corrective action. In addition, FHWA’s policies and procedures listed in the FIRE Tool Kit were not effectively designed to reasonably assure that management decisions are actually prepared with the required content and issued timely, because such policies and procedures did not include steps to help ensure that they are actually carried out. For example, the policies and procedures did not include steps, such as reviews by management, to reasonably assure that management decisions contain the required content and are issued within 6 months of the date the single audit report is received. We reviewed a nongeneralizable sample of 22 FHWA single audit findings from fiscal year 2013 and found that FHWA issued management decisions for 9 of the 22 single audit findings. Of these 9 management decisions, 3 contained the elements required by OMB Circular No. A-133 and were issued within 6 months of receipt of the single audit reports. FHWA officials stated that the management decisions that were not issued were inadvertently missed and that the FHWA division offices responsible for issuing the management decisions did not adhere to FHWA’s single audit procedures. FHWA officials further stated that the responsible division offices worked with the award recipients to resolve these single audit findings. However, without effectively designed policies and procedures for issuing management decisions, FHWA cannot reasonably assure that management decisions are prepared with the required content and issued timely. 
FTA did not effectively design policies and procedures to reasonably assure that it prepared management decisions with the required content for each single audit finding and issued such management decisions within 6 months of receipt of the single audit reports, as required by OMB Circular No. A-133. Transportation had policies and procedures relating to single audit findings. In addition, FTA’s Grants A to Z Standard Operating Procedures states that FTA’s staff is responsible for issuing management decisions, staying up-to-date on OMB Circular No. A-133 requirements, and ensuring that management decisions are issued on FTA single audit findings within 6 months of receiving the award recipients’ single audit reports. However, FTA’s Grants A to Z Standard Operating Procedures does not state the OMB Circular No. A-133 content requirements that should be included in the management decisions or describe how the agency will reasonably assure that management decisions are issued timely. FTA officials stated that they issued management decisions via e-mail. However, we reviewed the e-mails provided to us by FTA officials and found that none of them contained all of the elements required in OMB Circular No. A-133. FTA officials also stated that they are working on updating the standard operating procedures to reflect the changes they are implementing for issuing management decisions and have started a review of their management decision process. Without effectively designed policies and procedures for issuing management decisions, FTA cannot reasonably assure that the management decisions contain the required elements and are issued timely. We reviewed a nongeneralizable sample of 15 FTA single audit findings from fiscal year 2013 and found that FTA issued management decision letters for 3 of the 15 single audit findings. In addition, none of the 3 management decisions contained all of the required elements or were issued within 6 months, as required by OMB Circular No. A-133. 
For the remaining 12 single audit findings, FTA provided us with e-mail correspondence; however, the e-mails primarily related to requesting additional information from the award recipients and did not contain the management decision content elements required by OMB Circular No. A-133. FTA officials stated that they ensured that the award recipients took steps to complete the appropriate corrective actions. Two of the selected subagencies, OESE and OSERS in Education, had policies and procedures designed to use a risk-based approach to identify and manage both high-risk and recurring single audit findings. Education uses a centralized approach to identify and manage high-risk and recurring single audit findings for these subagencies. None of the selected subagencies in Agriculture, HHS, and HUD had policies and procedures for using a risk-based approach to identify and manage high-risk and recurring single audit findings. Two of the selected subagencies in Transportation had policies and procedures for using a risk-based approach to identify and manage high-risk but not recurring single audit findings. We previously reported that federal agencies do not systematically use audit findings to identify and understand emerging and persistent issues related to grant programs and award recipients’ use of funds. Identifying and categorizing certain single audit findings as high risk—that is, those that, if not corrected in a timely manner, may be seriously detrimental to federal programs—can assist federal agencies in understanding emerging and persistent issues related to award recipients’ use of funds and allow federal agencies to prioritize their resources to help ensure that award recipients timely address these findings. 
Among other things, high-risk single audit findings could result in program failure; abuse; mismanagement; misuse of federal funds; improper payments; significantly impaired service; significantly reduced program efficiency and effectiveness; unreliable data for decision making; and unauthorized disclosure, manipulation, or misuse of sensitive information. Recurring single audit findings are also of concern because they indicate deficiencies that have persisted and may need more resources or attention from the agency to address. Consequently, both high-risk and recurring single audit findings pose increased risks to federal programs, including the risk of improper payments. Risk management is a strategy for helping program managers and stakeholders make decisions about assessing risk, allocating resources, and taking actions under conditions of uncertainty. Risk management can be applied to an entire organization; to different levels of the organization; or to specific functions, projects, and activities. Leading risk management practices include that an organization develop, implement, and continuously improve a process for managing risk and integrate it into the organization’s overall governance, strategy, policies, planning, management, and reporting processes. While risk management does not provide absolute assurance of achieving an organization’s objectives, an effective risk management strategy over high-risk and recurring single audit findings can be particularly useful in helping management identify potential problems and reasonably allocate resources to address them. In addition, Standards for Internal Control in the Federal Government states that internal control should provide for an assessment of the risks the agency faces from both external and internal sources. FAC officials stated that award recipients submitted over 40,000 single audit reports to the FAC for fiscal year 2013. 
Given the number of single audit reports and single audit findings, as well as constraints in federal resources for conducting oversight of single audits, identifying and managing high-risk and recurring single audit findings using a risk-based approach can assist in identifying problem areas and addressing priorities. Figure 4 provides an overview of our assessment of the selected subagencies’ policies and procedures for using a risk-based approach for identifying and managing high-risk and recurring single audit findings. Agriculture’s two subagencies, FNS and RD, did not have policies and procedures for using a risk-based approach to identify and manage high-risk and recurring single audit findings. FNS did not have policies and procedures for using a risk-based approach to identify and manage high-risk and recurring single audit findings. Instead, FNS officials stated that regional offices conduct a Management Evaluation and Financial Management Review of each award recipient every 3 to 5 years. This review focuses on program compliance issues as well as financial management issues to assist FNS’s oversight of award recipients. According to FNS officials, these reviews are used to identify award recipients that are not in compliance with award requirements, including requirements for resolving single audit findings. While these reviews may be useful in evaluating award recipients’ use of federal funds, performing the reviews every 3 to 5 years limits FNS’s ability to timely identify problem areas and set priorities for addressing them. RD did not have policies and procedures for using a risk-based approach to identify and manage high-risk and recurring single audit findings. According to RD officials, all single audit findings were given the utmost attention to ensure that corrective actions were taken for all findings, regardless of their substance. 
However, using a risk-based approach may help to reasonably assure that the single audit findings that pose the greatest risk to a program’s objectives, or of fraud, waste, or abuse of federal funds, are identified and addressed in a timely manner. RD’s standard operating procedures state that the Financial Management Division program analyst downloads single audit reports from the FAC, reviews the information contained in the single audit reports, and determines which RD agency or program area is to respond to the single audit findings. The standard operating procedures also state that the program analyst tracks the single audit findings in the Automated Reports Tracking System until the single audit finding is closed. Although RD does not separately identify and track recurring single audit findings, officials noted that single audit reports are checked against previous single audit reports to see if there are recurring single audit findings. According to RD officials, they notify the state office or program area via transmittal letter that a single audit finding is recurring from a prior year (citing the previously processed single audit finding) and that the award recipient has not addressed the single audit finding through corrective actions. While this procedure is included in RD’s standard operating procedures, those procedures do not incorporate a risk-based approach so that state offices or program areas can identify and manage the single audit findings that could pose a greater risk to the program’s objectives or increase the risk of fraud, waste, or abuse of federal funds. Education had policies and procedures for using a risk-based approach to identify and manage high-risk and recurring single audit findings, and such procedures include assigning a category to single audit reports and findings based upon an assessment of risks. 
The Post Audit Group (PAG) of Education’s Office of the Chief Financial Officer (OCFO) developed department-wide policies and procedures for single audit oversight and audit resolution. In addition, PAG plays a central role in single audit oversight and in coordination of audit resolution by processing and distributing single audit reports, working with the principal offices and other stakeholders for the award recipients to address single audit findings, and tracking audit reports and single audit findings in the Audit Accountability and Resolution Tracking System (AARTS). Education’s PAG Director stated that the department began categorizing single audit reports as high, medium, or low risk, based on a variety of factors, through AARTS in 2012. As new single audit reports are uploaded into AARTS, they are assessed electronically using predetermined criteria based on risk factors, and each audit is assigned an overall risk rating. As such, all single audit findings contained in an audit carry the same level of risk. According to the Director, these risk ratings enable the staff assigned to the audit report to prioritize their workloads according to the assessed risk. Education’s program offices and PAG then use a “triage” process at the program and department levels to assess the seriousness of each single audit finding and determine the amount of attention needed for resolution. According to the policies and procedures in Education’s Handbook for the Post Audit Process, each program office holds a monthly meeting to discuss and reach agreement on the actions needed to resolve each single audit finding at the program level. 
In addition, Education holds a monthly “triage meeting” at the department level where the program offices, PAG, OIG, and the Office of the General Counsel review the program office-level triage recommendations on whether the resolution approach should be full resolution, abbreviated resolution, informal resolution, or an appropriate combination of these approaches. Education’s Handbook for the Post Audit Process also states that a full resolution approach should be used for a recurring finding. In addition, Education was able to provide us with a list of single audit findings that were recurring from prior years. HHS’s two selected subagencies, ARD and CMS, did not have policies and procedures for using a risk-based approach to identify and manage high-risk and recurring single audit findings. HHS’s Office of the Assistant Secretary sets the policies that outline the roles and responsibilities of the offices and operating divisions involved in the single audit process, but had not issued guidance requiring the department to identify and manage high-risk and recurring single audit findings using a risk-based approach. HHS officials stated that operating divisions are responsible for ensuring that single audit findings are identified and resolved. Operating divisions are responsible for issuing management decisions for single audit findings assigned to them by the OIG. The OIG also assigns single audit findings that affect more than one operating division—known as crosscutting findings—to ARD for resolution, and ARD is to work with the affected operating divisions to issue management decisions. ARD and CMS officials stated that they do not assess the risk of the single audit findings. ARD did not have policies and procedures for using a risk-based approach to identify and manage high-risk and recurring single audit findings. 
According to ARD officials, they do not prioritize single audit findings based on risk or recurrence, and as a result, they have not developed policies and procedures to identify, monitor, or track high-risk and recurring single audit findings. ARD officials stated that if they determine that a crosscutting finding is recurring from a prior year, they will indicate that it is a prior year finding in the management decision to the award recipient, but they do not otherwise assign a risk to recurring single audit findings. CMS did not have policies and procedures for using a risk-based approach to identify and manage high-risk and recurring single audit findings. CMS officials stated that they resolve each single audit finding and ensure that corrective actions have been achieved. CMS’s policies and procedures did not contain requirements to identify and manage single audit findings based on risk and recurrence. However, CMS officials stated that CMS has been participating in a Single Audit Metrics Initiative pilot project since fiscal year 2012. The project focuses on eliminating repeated material noncompliance single audit findings in CMS’s highest-risk program, Medicaid. They also stated that the number of repeated material noncompliance single audit findings for fiscal year 2014 audits decreased by 29 percent compared to the number in fiscal year 2010. CMS officials stated that this level of success was achieved by focusing on training audit analysts in audit resolution principles and processes; requiring that implementation of corrective action plans is verified; and holding regular meetings with CMS’s single audit work group to discuss best practices, new techniques, and other topics. The CMS pilot illustrates the effectiveness of using a risk-based approach to manage risks. HUD’s two subagencies, CPD and PIH, did not have policies and procedures for using a risk-based approach to identify and manage high-risk and recurring single audit findings. 
The OCFO provides general departmental policies and procedures to HUD’s program offices for single audit oversight. HUD’s policies and procedures state that the program offices (such as CPD and PIH) are responsible for ensuring that single audits are submitted, monitored, and tracked. CPD did not have policies and procedures for using a risk-based approach to identify and manage high-risk and recurring single audit findings. CPD’s Risk Analyses for Monitoring Community Planning and Development Grant Programs guidance provides a methodology for conducting risk analyses of award recipients and establishes monitoring priorities. Each CPD field office is responsible for developing an office work plan with monitoring strategies to address how it will monitor CPD award recipients and programs during the fiscal year. The risk analyses include evaluating risk factors related to grant management and financial management and looking at specific elements, such as findings contained in the single audit reports. Based on this evaluation, CPD assigns award recipients to one of three risk categories: low, medium, and high. CPD’s policies and procedures call for it to rank award recipients based on risks in order to develop a work plan and strategies for monitoring individual award recipients. According to CPD’s guidance, recurring findings are included in CPD’s risk analysis process and can affect the risk score and risk rating assigned to a grantee. The guidance also requires CPD staff to maintain a log to track all CPD findings and recurring findings. However, CPD policies and procedures do not specifically require staff to indicate if a finding is repeated from a prior year. While the review of single audit findings is included as one of the elements used to determine award recipient risk, this process is not designed to assess individual audit findings based on risk; thus, CPD does not use a risk-based approach for recurring single audit findings. 
Single audit findings of a higher risk, if not identified and managed, could increase the risk of fraud, waste, or abuse of federal resources. PIH did not have policies and procedures for using a risk-based approach to identify and manage high-risk and recurring single audit findings. According to PIH officials, PIH conducts a national risk assessment on a quarterly basis to assess the risks presented by each of PIH’s award recipients. PIH officials informed us that each award recipient is rated on four categories—physical risks, financial risks, management risks, and governance risks—using a basic statistical formula to identify outliers and other key risk indicators. According to PIH officials, the National Risk Assessment includes a survey of the quality of the audits and whether the award recipient is responsive to single audit findings, including whether the award recipient has past due responses or repeat single audit findings. The staff survey includes a question on whether the award recipient understands HUD requirements, policies, regulations, and laws in the event that there were serious single audit findings. However, the National Risk Assessment does not include identifying or tracking single audit findings and is not used to identify high-risk single audit findings. According to PIH officials, the data on specific single audit findings are not in a usable format that would allow the risk assessment to include that level of data. According to PIH officials, single audit findings are identified and tracked either through the Next Generation Management System/Portfolio and Risk Management Tool or through the intra-office tracking log located on PIH’s Office of Field Operation’s SharePoint website. However, PIH did not have policies and procedures for using a risk-based approach to identify and manage high-risk and recurring single audit findings. 
Transportation had policies and procedures for using a risk-based approach to identify and manage high-risk single audit findings. In addition, FHWA and FTA had policies and procedures for identifying recurring single audit findings, but neither of them had policies and procedures to use a risk-based approach to manage recurring single audit findings. The responsibilities of identifying, monitoring, and resolving high-risk and recurring single audit findings were divided between the OIG and the operating administrations. The OIG reviews single audit reports and categorizes single audit findings as part of its oversight role, and the operating administrations are responsible for ensuring that single audit findings are resolved. The OIG had policies and procedures detailing its single audit oversight for reviewing single audit reports from the FAC and using a risk-based approach for categorizing single audit findings for both FHWA and FTA. The Transportation OIG is responsible for reviewing single audit findings for FHWA and categorizing them based on risk, and it had policies and procedures for doing so; however, FHWA did not have policies and procedures for using a risk-based approach to manage recurring single audit findings. FHWA’s FIRE Tool Kit states that high-risk single audit findings identified by the OIG are tracked using a SharePoint tracking website. FHWA has 30 days to prepare a plan to address the single audit findings, establish target action dates to correct the findings, and provide an e-mail to the OIG stating (1) that the single audit report has been reviewed, (2) that the award recipient is in tentative agreement with the single audit finding, and (3) the action dates. If FHWA does not submit this information to the OIG within 30 days, it is considered late. According to an OIG official, in such instances, officials within the Office of the Secretary contact the departments to determine why the single audit findings are still open. 
FHWA’s FIRE Tool Kit also states that its regional offices are responsible for conducting repeat finding analyses to determine if there are systemic deficiencies or internal control gaps in the award recipients’ processes. However, the policies and procedures do not state what is done with these analyses or how they are used to monitor recurring single audit findings. FHWA uses the Review Response Tracker and SharePoint system to track the resolution of single audit findings. According to FHWA officials, the Review Response Tracker can generate a report or list of all the single audit findings that are repeated from a prior year; however, the FIRE Tool Kit does not require that this report be generated regularly, state what is done with it, or explain how it is used to monitor recurring single audit findings. The Transportation OIG is responsible for and has policies and procedures for reviewing single audit findings and categorizing them based on risk; however, FTA did not have policies and procedures for using a risk-based approach to identify and manage recurring single audit findings. FTA’s Grants A to Z Standard Operating Procedures requires FTA regional offices to track all single audit findings using the FTA Oversight Tracking System. In addition, FTA regional offices work with award recipients to ensure that the single audit findings identified by the OIG are adequately addressed. Once action has been taken, an FTA regional office is responsible for completing the Report to Close OIG Single Audit Recommendation document and sending it to the OIG for review and closeout. Once the OIG approves the closure of the finding, the finding will be closed in the Oversight Tracking System. 
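A repeat-finding report of the kind the Review Response Tracker can generate reduces, in data terms, to checking whether the same recipient had a finding on the same subject in an earlier audit period. The sketch below is a hypothetical illustration; the data shape, field names, and matching rule are assumptions, not the actual system's logic:

```python
# Hypothetical sketch of a "repeat finding" report: a finding is
# treated as recurring if the same recipient had a finding on the
# same topic in an earlier fiscal year. The data shape and the crude
# topic-matching rule are assumptions for illustration only.

from collections import defaultdict

findings = [
    # (fiscal_year, recipient, finding_reference)
    (2012, "State DOT A", "2012-001 cash management"),
    (2013, "State DOT A", "2013-002 cash management"),
    (2013, "City B", "2013-001 reporting"),
]

def recurring_findings(findings):
    """Return (recipient, topic) pairs whose findings span more than one year."""
    seen = defaultdict(set)  # (recipient, topic) -> set of fiscal years
    for year, recipient, ref in sorted(findings):
        topic = ref.split(" ", 1)[1]  # strip the year-specific reference number
        seen[(recipient, topic)].add(year)
    return [key for key, years in seen.items() if len(years) > 1]

print(recurring_findings(findings))  # [('State DOT A', 'cash management')]
```

In practice this is why the report's guidance matters: the query is trivial once staff consistently record repeat flags, but without a requirement to record and review them, the recurrence signal never reaches management.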
According to FTA officials, those responsible for resolving single audit findings can indicate in the Oversight Tracking System whether a single audit finding is a “repeat” finding from a prior year, and the system can generate a report of all the single audit findings that have been identified as repeat findings. However, FTA’s guidance does not require staff to indicate if the finding is repeated from a prior year. As a result, FTA lacks policies and procedures to reasonably assure that recurring single audit findings are consistently identified within the Oversight Tracking System or that the progress award recipients have made to address such findings is tracked. Single audits are considered to be a critical element of the government’s oversight of more than $600 billion in estimated annual federal awards and help reasonably assure that federal funds are properly used. The selected subagencies of the five federal agencies that we reviewed had policies and procedures that included elements of OMB’s guidance on single audit oversight. However, most of these subagencies did not always effectively design policies and procedures to reasonably assure that award recipients submitted single audit reports in a timely manner and that federal awarding agencies issued written management decisions on single audit findings with the required content in a timely manner. Without effectively designed policies and procedures, agencies and subagencies cannot reasonably assure that they are conducting effective oversight of the federal funds they have awarded. In addition, most of the subagencies in our study did not have policies and procedures for using a risk-based approach to identify and manage high-risk and recurring single audit findings. Such findings, if not addressed timely, may be seriously detrimental to federal programs. 
Identifying and managing single audit findings using a risk-based approach could also assist federal agencies and subagencies in understanding emerging and persistent issues related to award recipients’ use of funds and could allow them to prioritize their resources to help ensure that award recipients address these findings in a timely manner. Of the five agencies in our audit, only one agency’s (Education) two selected subagencies had effectively designed policies and procedures in the three areas addressed in our audit. The results in our report point to a need for more effective design of policies and procedures relating to single audits to help improve oversight of federal awards and reduce improper payments. We recommend that the Secretary of Agriculture direct the Under Secretary for Food, Nutrition, and Consumer Services to take the following three actions: Design policies and procedures to reasonably assure that all award recipients required to submit single audit reports do so in accordance with OMB guidance. Revise policies and procedures to reasonably assure that management decisions contain the required elements and are issued timely in accordance with OMB guidance. Design and implement policies and procedures for identifying and managing high-risk and recurring single audit findings using a risk-based approach. We recommend that the Secretary of Agriculture direct the Under Secretary for Rural Development to take the following three actions: Design policies and procedures to reasonably assure that all award recipients required to submit single audit reports do so in accordance with OMB guidance. Revise policies and procedures to reasonably assure that management decisions contain the required elements and are issued timely in accordance with OMB guidance. 
Design and implement policies and procedures for identifying and managing high-risk and recurring single audit findings using a risk-based approach. We recommend that the Secretary of Health and Human Services direct the Assistant Secretary for Financial Resources to take the following three actions: Design policies and procedures to reasonably assure that all award recipients required to submit single audit reports do so in accordance with OMB guidance. Revise policies and procedures to reasonably assure that management decisions contain the required elements and are issued timely in accordance with OMB guidance. Design and implement policies and procedures for identifying and managing high-risk and recurring single audit findings using a risk-based approach. We recommend that the Secretary of Health and Human Services direct the Administrator of the Centers for Medicare and Medicaid Services to take the following three actions: Revise its policies and procedures to take action to obtain single audit reports when award recipients did not submit reports within the required time frames. Revise its policies and procedures to reasonably assure that management decisions contain the required elements and are issued timely in accordance with OMB guidance. Design and implement policies and procedures for identifying and managing high-risk and recurring single audit findings using a risk-based approach. We recommend that the Secretary of Housing and Urban Development direct the Principal Deputy Assistant Secretary for the Office of Community Planning and Development to take the following two actions: Revise policies and procedures to reasonably assure that management decisions contain the required elements and are issued timely in accordance with OMB guidance. Design and implement policies and procedures for identifying and managing high-risk and recurring single audit findings using a risk-based approach. 
We recommend that the Secretary of Housing and Urban Development direct the Principal Deputy Assistant Secretary for the Office of Public and Indian Housing to take the following two actions: Revise policies to reasonably assure that management decisions contain the required elements and are issued timely in accordance with OMB guidance. Design and implement policies and procedures for identifying and managing high-risk and recurring single audit findings using a risk-based approach. We recommend that the Secretary of Transportation direct the Administrator of the Federal Highway Administration to take the following two actions: Revise policies and procedures to reasonably assure that management decisions contain the required elements and are issued timely in accordance with OMB guidance. Design and implement policies and procedures for identifying and managing recurring single audit findings using a risk-based approach. We recommend that the Secretary of Transportation direct the Administrator of the Federal Transit Administration to take the following three actions: Design policies and procedures to reasonably assure that all award recipients required to submit single audit reports do so in accordance with OMB guidance. Revise policies and procedures to reasonably assure that management decisions contain the required elements and are issued timely in accordance with OMB guidance. Design and implement policies and procedures for identifying and managing recurring single audit findings using a risk-based approach. We provided a draft of this report to Agriculture, Education, HHS, HUD, and Transportation for comment. One of Agriculture’s subagencies, RD, provided comments, in an e-mail submitted on behalf of the Acting Deputy Under Secretary for Rural Development, stating that it agreed with our recommendations. The e-mail did not address our recommendations to the other subagency, FNS. 
Education provided a technical comment, in an e-mail from the Executive Secretariat in the Office of the Secretary, which we incorporated as appropriate. In its written comments, reprinted in appendix II, HHS concurred with our recommendations. HUD provided comments related to its two subagencies, PIH and CPD, in an e-mail submitted on behalf of the Principal Deputy Assistant Secretary for PIH and the Principal Deputy Assistant Secretary for CPD. PIH indicated that it had taken actions that addressed our recommendations, while CPD disagreed with the two recommendations directed to it. In its written comments, reprinted in appendix III, Transportation concurred with our recommendations. Agriculture, HHS, HUD, and Transportation also provided technical comments, which we incorporated as appropriate. In an e-mail submitted on behalf of the Acting Deputy Under Secretary for Rural Development, RD stated that it agreed with our recommendations and stated that it will work with Agriculture’s OCFO and the program areas to develop policies and procedures to ensure that its award recipients are in compliance with OMB guidance. RD stated that it will meet with each program area and discuss how single audits are processed and resolved. RD plans to develop policies and procedures for identifying all award recipients that should file single audit reports; determining if all required single audit reports are filed, received, and processed each year; and assuring that transmittal letters and management decisions contain the required elements and are issued timely in accordance with current OMB guidance. RD also stated that it will develop new policies and procedures for identifying and managing high-risk and recurring single audit findings using a risk-based approach. 
In its letter reprinted in appendix II, HHS concurred with our recommendations to the Assistant Secretary for Financial Resources (ASFR) and to the Administrator of CMS related to monitoring the submission of single audit reports, issuing management decisions, and identifying and managing high-risk and recurring single audit findings. Overall, HHS stated that it is committed to ensuring that grantees submit single audits in a timely and complete manner and that its subagencies utilize the findings to ensure proper management of its grantees. HHS stated that ASFR is taking over the single audit assignment and tracking function from the OIG and that this realignment will allow HHS to better meet the requirements of the Uniform Guidance. HHS also stated that it updated its Grants Policy Manual on December 31, 2015. With regard to the recommendations directed to ASFR, HHS stated that ASFR had issued several policies and procedures related to the grantees’ timely submission of single audits. HHS stated that it would continue to evaluate its policies and procedures to ensure compliance with its regulations, particularly as ASFR takes over the single audit findings assignment and tracking function from the OIG. In addition, HHS stated that ASFR will work to develop policies and procedures that utilize a risk-based approach for high-risk and recurring single audit findings. With regard to the recommendations directed to CMS, HHS stated that it is working to incorporate the Grants Policy Manual into CMS’s policies and procedures for actions related to the timely submission of single audit reports and for reasonably assuring that management decisions contain the required elements and are issued timely in accordance with OMB guidance. CMS plans to update the 1982 Health Care Financing Administration Audit Resolution Manual and work collaboratively to develop policies and procedures that utilize a risk-based approach for high-risk and recurring single audit findings. 
If implemented as planned, these actions could address the intent of the recommendations. In an e-mail submitted on behalf of the Principal Deputy Assistant Secretary, CPD disagreed with our recommendation to revise policies and procedures to reasonably assure that management decisions contain the required elements and are issued timely in accordance with OMB guidance. CPD stated that it was not a prudent investment of resources to implement recommendations based upon a review of single audits under OMB Circular No. A-133, which was superseded by the Uniform Guidance. As stated in our report, the Uniform Guidance is effective for audits for fiscal years beginning on or after December 26, 2014. However, Uniform Guidance, Section 200.521, Management Decisions, carried forward the requirements in OMB Circular No. A-133 for management decisions relating to the content requirements and the time frames for issuing management decisions. Therefore, we continue to believe that actions to revise CPD’s policies and procedures related to management decisions are warranted and will help CPD comply with the Uniform Guidance. In its e-mail, CPD also disagreed with our recommendation to design and implement policies and procedures for identifying and managing high-risk and recurring single audit findings using a risk-based approach. CPD stated that neither OMB Circular No. A-133 nor the Uniform Guidance makes reference to high-risk or recurring single audit findings. However, as stated in our report, leading risk management practices include that an organization develop, implement, and continuously improve a process for managing risk and integrate it into the organization’s overall governance, strategy, policies, planning, management, and reporting processes. Further, our report also notes that in July 2016, OMB released OMB Circular No. 
A-123, Management’s Responsibility for Enterprise Risk Management and Internal Control, effective beginning fiscal year 2016, which requires federal agencies to consider risk management in their operations. Also, in its e-mail, CPD stated that our definition of high-risk findings was too broad. Our definition states that high-risk single audit findings are findings that may be seriously detrimental to federal programs. Among other things, high-risk single audit findings could result in program failure; abuse; mismanagement; misuse of federal funds; improper payments; significantly impaired service; significantly reduced program efficiencies and effectiveness; unreliable data for decision making; and unauthorized disclosure, manipulation, or misuse of sensitive information. This definition provides agencies with the flexibility to analyze the types of risks pertinent to their operating environments and to develop definitions of high-risk single audit findings that they deem appropriate. CPD also stated that our definition of high-risk findings does not lend itself to uniform and consistent interpretation across all CPD programs or within HUD and is unlikely to be consistently interpreted across all federal agencies, undermining the basis upon which the Uniform Guidance was developed. However, CPD and HUD can develop a definition of high-risk single audit findings that is pertinent to their operating environment and applied consistently within their organization. As stated in our report, an effective risk management strategy over high-risk and recurring single audit findings can be particularly useful to help management identify potential problems and reasonably allocate resources to address them, which can help improve program performance and outcomes. In addition, OMB Circular No. 
A-123 states that risks are analyzed in relation to achievement of the objectives established in an agency’s strategic plan and are to be reexamined regularly to identify new risks or changes to existing risks. Thus, risks can be agency specific. To that end, the definition for high-risk single audit findings need not be uniform across the federal government. We continue to believe that CPD needs to take action to implement this recommendation to design and implement policies and procedures for identifying and managing high-risk and recurring single audit findings using a risk-based approach. In an e-mail on behalf of the Principal Deputy Assistant Secretary, PIH stated that it has been in the process of revising its policies and procedures for reviewing and taking action on single audit report findings for public housing agencies. PIH stated that our recommendations were substantially addressed through new procedures. Where the report identified new areas that had not been addressed, PIH stated that it has incorporated those areas into its audit finding review process and risk assessment updates to address the recommendations in our report. If fully implemented, these actions should address the intent of the recommendations. In its letter reprinted in appendix III, Transportation concurred with our recommendations. Transportation stated that it employs stringent monitoring and oversight to ensure that grantees meet the terms of award agreements and conduct activities in accordance with federal laws and regulations. It stated that Transportation is committed to using single audits as a valuable tool to help monitor performance, reduce improper payments, and strengthen accountability and oversight of these funds. 
In addition, Transportation indicated that it will revise, design, and implement policies and procedures to (1) reasonably assure that all required award recipients submit single audit reports and management decisions contain required elements and are issued timely, in accordance with OMB guidance, and (2) identify and manage recurring single audit findings using a risk-based approach. If fully implemented, these actions should address the intent of the recommendations. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Education, Health and Human Services, Housing and Urban Development, and Transportation; and other interested parties. This report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-2623 or davisbh@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix IV. Our objectives were to determine the extent to which selected federal agencies (1) effectively designed policies and procedures for reasonably assuring that award recipients submit single audit reports in a timely manner, (2) effectively designed policies and procedures for reviewing award recipients’ plans to correct single audit report findings and issuing written management decisions on those plans, and (3) had policies and procedures for managing high-risk and recurring single audit findings through a risk-based approach. Because grant awards proportionally represent one of the largest amounts of federal awards, we used grant data from the Office of Management and Budget’s (OMB) Outlays for Grants to State and Local Governments in fiscal year 2013 to select the agencies and subagencies for our audit. 
We selected for our audit the five agencies that had the largest dollar amount of total reported outlays for grants to state and local governments in fiscal year 2013. These five agencies—the Departments of Agriculture (Agriculture), Education (Education), Housing and Urban Development (HUD), Transportation (Transportation), and Health and Human Services (HHS)—collectively accounted for 93 percent of total reported outlays for federal grants to state and local governments in fiscal year 2013. For Agriculture, Education, HUD, and Transportation, we selected subagencies that collectively outlaid over 90 percent of their respective agencies’ fiscal year 2013 total reported outlays for grants to state and local governments. For HHS, we selected the single largest subagency within HHS, which outlaid over 83 percent of HHS’s fiscal year 2013 total reported outlays for grants to state and local governments. We also reviewed HHS’s Audit Resolution Division, which has a key role in HHS’s single audit oversight process but does not provide federal awards. (See table 1.) For our first objective, we reviewed the Single Audit Act of 1984, as amended (Single Audit Act); OMB Circular No. A-133; and Standards for Internal Control in the Federal Government to identify agency responsibilities. In addition, we reviewed the selected agencies’ and subagencies’ written policies and procedures for those areas and interviewed selected agency and subagency officials with responsibility for single audit oversight. We also identified four key steps relating to the design of single audit policies and procedures that would assist federal awarding agencies in fulfilling their responsibilities under OMB Circular No. A-133 for reasonably assuring that award recipients submit single audit reports timely. 
To that end, designing policies, procedures, and mechanisms to include the following steps would help provide reasonable assurance that agencies can fulfill these responsibilities: (1) identify award recipients that should have submitted single audit reports, (2) verify that the award recipients submitted single audit reports, (3) determine whether the reports were submitted within the required time frames, and (4) take action to obtain single audit reports when award recipients did not submit the reports within the required time frames. For purposes of our audit, we reviewed the agencies’ and subagencies’ policies and procedures to assess whether they reasonably assure that award recipients completed and submitted their single audit reports within the required time frames, using 9 months after each award recipient’s fiscal year-end as the applicable time period. Nine months after the fiscal year-end is generally later than 30 days after receipt of the single audit report; we used this time period because it generally represents the maximum amount of time award recipients would have to submit single audit reports, and measuring the length of time after receipt as the applicable time period would have been unduly burdensome for our audit. We requested from each subagency a list of award recipients that expended in excess of $500,000 in federal awards during fiscal year 2013 and were therefore required to submit single audit reports for fiscal year 2013. We then queried the Federal Audit Clearinghouse (FAC) to determine whether selected award recipients submitted their single audit reports timely. While we do not present data directly from the FAC in this report, because the FAC database is integral to the Single Audit Act reporting requirements, we undertook data reliability procedures to ascertain whether award recipients were submitting their single audit reports to the FAC. 
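The 9-month timeliness rule applied in this testing can be expressed as a small date calculation. The sketch below assumes, as is standard, that a fiscal year ends on the last day of a month; the function names are ours, not part of any agency system:

```python
# Minimal sketch of the 9-month submission deadline used as the
# applicable time period in the audit. Assumes the fiscal year ends on
# the last day of a month, so the deadline is the last day of the month
# nine months later. Function names are illustrative.

from datetime import date, timedelta

def due_date(fiscal_year_end: date) -> date:
    # Advance ten months to the first day of the following month,
    # then step back one day to land on the month-end deadline.
    y, m = divmod(fiscal_year_end.month - 1 + 10, 12)
    return date(fiscal_year_end.year + y, m + 1, 1) - timedelta(days=1)

def is_timely(fiscal_year_end: date, submitted: date) -> bool:
    """True if the single audit report was submitted within the 9-month window."""
    return submitted <= due_date(fiscal_year_end)

# A recipient with a June 30, 2013, fiscal year-end had until March 31, 2014.
print(due_date(date(2013, 6, 30)))                     # 2014-03-31
print(is_timely(date(2013, 6, 30), date(2014, 4, 1)))  # False
```

Mapping month-end to month-end (rather than adding nine calendar months day-for-day) matters at the edges: June 30 plus nine literal months would land on March 30, a day short of the March 31 deadline.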
Specifically, we randomly selected 15 award recipients from each of the six subagencies that provided lists—Food and Nutrition Service, Office of Elementary and Secondary Education, Office of Special Education and Rehabilitative Services, Office of Public and Indian Housing, Federal Highway Administration, and Federal Transit Administration—for a total of 90 sampled cases. We received a listing of award recipients from the remaining three subagencies: Centers for Medicare and Medicaid Services, Rural Development, and Office of Community Planning and Development. We were unable to use the lists for our testing because lists from the Centers for Medicare and Medicaid Services and Rural Development were not generated independently of the FAC. In addition, the Office of Community Planning and Development did not provide us with a list that facilitated searching in the FAC. Two of the six responding subagencies provided us with lists that included 8 award recipients that expended less than $500,000. For the remaining 82 cases, we undertook searches in the FAC to ascertain whether those award recipients had single audit reports in the FAC database. In 72 of the 82 cases, we were able to find the applicable fiscal year 2013 single audit report. In the remaining 10 cases, we could not readily locate an applicable 2013 single audit report using our multistep search procedures. Our random sample, which examined a limited number of single audit reports for the selected subagencies, was not designed to be generalizable. As a result, we do not make a generalizable statement about the completeness of the single audit reports in the FAC database. For our second objective, we reviewed selected policies and procedures and interviewed agency and subagency officials to clarify our understanding of those policies and procedures. We also randomly selected nongeneralizable samples of single audit findings applicable to each subagency. 
For each sample item, we requested and reviewed documentation relating to written management decisions. To address our third objective, we reviewed the selected agencies’ and subagencies’ written policies and procedures for those areas and interviewed selected agency and subagency officials to clarify our understanding of those policies and procedures. We conducted this performance audit from May 2015 to February 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Phyllis Anderson (Assistant Director), Sharon Byrd, Bruce David, Francine DelVecchio, Benjamin Durfee, Doreen Eng, Maxine Hattery, Bradley Johnson, Jason Kelly, Jennifer Leone, Kevin McAloon, Jared Minsk, Lisa Motley, Mai Nguyen, Anna Maria Ortiz, Amber Sinclair, and Walter Vance made key contributions to this report.
In fiscal year 2015, federal agencies outlaid over $600 billion in federal awards to state and local governments, according to OMB. The Single Audit Act of 1984, as amended, requires that federal agencies oversee their awards to nonfederal entities. OMB Circular No. A-133 provided guidance for implementing the act during GAO's audit. GAO was asked to examine federal agency oversight of single audits. This report examines whether selected agencies effectively designed policies and procedures to reasonably assure that (1) award recipients submit timely single audit reports and (2) agencies review award recipients' plans to correct single audit findings and issue timely written management decisions on those plans. GAO also examined whether selected agencies had policies and procedures for managing high-risk and recurring audit findings. GAO selected the five agencies with the largest dollar amounts of reported outlays for grants to state and local governments in fiscal year 2013. For each agency, GAO reviewed its two subagencies accounting for over 80 percent of outlays, reviewed written policies and procedures, and interviewed the respective officials. Federal agencies have oversight responsibilities for the funds that they award to nonfederal entities and can assign these responsibilities to their subagencies (i.e., operating units or divisions). Nonfederal entities are required to undergo a single audit if their expenditures of federal awards in a fiscal year exceed a certain threshold. A single audit is an audit of the award recipient's expenditure of federal awards and of its financial statements and can identify deficiencies in the award recipient's compliance with the provisions of laws, regulations, contracts, or grant agreements and in its financial management and internal control systems. Correcting such deficiencies can help reasonably assure the effective use of federal funds and reduce federal improper payments. 
Of the five agencies in GAO's study—the Departments of Agriculture, Education, Health and Human Services (HHS), Housing and Urban Development (HUD), and Transportation—some of the agencies' subagencies that GAO reviewed did not effectively design policies and procedures to reasonably assure the timely submission of single audit reports by award recipients. The Office of Management and Budget's (OMB) guidance requires that federal awarding agencies ensure that award recipients submit single audit reports within certain time frames. This can help assure that single audit findings are corrected in a timely manner. Most of the selected subagencies in GAO's review did not effectively design policies and procedures to reasonably assure that they issued timely management decisions containing the information required by OMB guidance. This guidance requires agencies to evaluate each award recipient's audit findings and corrective action plans and issue a management decision within 6 months of receipt of the single audit report as to the actions award recipients must take to correct each single audit finding. Such decisions may add clarity about the agency's position on the single audit finding and the corrective action. Only the two selected subagencies in Education had policies and procedures for using risk-based approaches to manage high-risk and recurring single audit findings. High-risk single audit findings may be seriously detrimental to federal programs and could result in improper payments. Recurring single audit findings have persisted for more than one audit period and may need more attention or resources to correct. With over 30,000 single audit reports submitted for fiscal year 2015 and constraints in resources for conducting federal oversight, managing single audit findings using a risk-based approach can assist in identifying and prioritizing problem areas. GAO is making 21 recommendations. 
One Agriculture subagency agreed with the recommendations and the other did not comment. HHS and Transportation concurred. HUD commented that one subagency had taken actions to address the recommendations, while the other subagency disagreed with the recommendations directed to it. GAO believes that the recommendations are valid as discussed in the report.
Since 2014, we have reported multiple times on the progress CBP has made deploying technologies under the ATP. We reported in May 2016 that CBP had initiated or completed deployment of technology to Arizona for six programs under the ATP. In addition to deploying technologies under the ATP, CBP's 2014 Southwest Border Technology Plan extended technology deployments to the remainder of the southwest border, beginning with selected areas in Texas and California. As of July 2017, CBP completed deployment of select technologies to sectors in Arizona, Texas, and California. For example, in our April 2017 assessment of DHS's major acquisitions programs, we reported that CBP completed deployments of 7 Integrated Fixed Tower (IFT) systems to the Nogales Border Patrol station within the Tucson sector in Arizona, and was working to deploy the remaining 46 towers to other sectors in Arizona. As of July 2017, CBP reported deploying an additional 8 IFT systems, for a total of 15 of 53 planned towers. CBP has also made changes to the IFT program. Specifically, rather than expanding IFT capabilities to the Wellton Border Patrol station within the Yuma sector in Arizona as originally planned, CBP now plans to replace 15 existing SBInet fixed-tower systems with IFT systems. CBP also reported that it had completed Remote Video Surveillance System (RVSS) and Mobile Surveillance Capability (MSC) deployments to Arizona as planned under the ATP, and deployed 32 MSC systems to Texas and California. Additionally, CBP completed contract negotiations with the RVSS program for follow-on contract option periods to deploy RVSS to two stations in the Rio Grande Valley sector in Texas. The deployment status of the IFT, RVSS, and MSC technologies is shown below in table 1. We plan to report on the deployment status of southwest border surveillance technology, among other topics, in a forthcoming report. In March 2014, we assessed CBP's efforts to develop and implement the ATP. 
Specifically, we recommended that CBP, among other things, (1) apply scheduling best practices; (2) develop an integrated schedule; and (3) verify life-cycle cost estimates. DHS concurred with some of our recommendations and has taken actions to address some of them, which we discuss below. Program Schedules. In March 2014, we found that CBP had a schedule for deployment for each of the ATP's seven programs, and that four of the programs would not meet their originally planned completion dates. Specifically, we found that the three highest-cost programs (IFT, RVSS, and MSC) had experienced delays relative to their baseline schedules, as of March 2013. Scheduling best practices are summarized into four characteristics of reliable schedules—comprehensive, well-constructed, credible, and controlled (i.e., schedules are periodically updated and progress is monitored). We assessed CBP's schedules as of March 2013 for the three highest-cost programs and reported in March 2014 that the IFT and RVSS schedules at least partially met each characteristic (i.e., satisfied about half of the criterion), and the MSC schedule at least minimally met each characteristic (i.e., satisfied a small portion of the criterion). For example, the schedule for the IFT program partially met the characteristic of being credible in that CBP had performed a schedule risk analysis for the program, but the risk analysis did not include the risks most likely to delay the program or how much contingency reserve was needed. For the MSC program, the schedule minimally met the characteristic of being controlled in that it did not have valid baseline dates for activities or milestones by which CBP could track progress. 
We recommended that CBP ensure that scheduling best practices are applied to the IFT, RVSS, and MSC program schedules. DHS concurred with the recommendation and stated that CBP planned to ensure that scheduling best practices would be applied, as outlined in our schedule assessment guide, when updating the three programs’ schedules. In response to our March 2014 recommendation regarding applying scheduling best practices, CBP provided us with updated program schedules for the IFT, RVSS, and MSC programs. Based on our assessment of updated program schedules for the IFT, RVSS, and MSC that CBP had completed as of January 2017, CBP has made significant improvements in the quality of the programs’ schedules, but the programs’ schedules had not met all characteristics of a reliable schedule. For example, CBP has improved the quality of its products for analyzing and quantifying risk to the programs’ schedules; however, CBP could improve the documentation of these analyses and the prioritization of the programs’ risks. While CBP has taken positive steps, we continue to believe that by ensuring that all scheduling best practices are applied, CBP could help ensure the reliability of its programs’ schedules and better position itself to identify and address any potential delays in its programs’ commitment dates. Integrated Master Schedule. In March 2014, we also found that CBP had not developed an Integrated Master Schedule for the ATP in accordance with best practices. Rather, CBP had used separate schedules for each program to manage implementation of the ATP, as CBP officials stated that the ATP contained individual acquisition programs rather than integrated programs. However, collectively these programs are intended to provide CBP with a combination of surveillance capabilities to be used along the Arizona border with Mexico, and resources are shared among the programs. We recommended in March 2014 that CBP develop an Integrated Master Schedule for the ATP. 
CBP did not concur with this recommendation and maintained that an Integrated Master Schedule for the ATP in one file undermines the DHS-approved implementation strategy for the individual programs making up the ATP, and that the implementation of this recommendation would essentially create a large, aggregated program, and effectively create an aggregated “system of systems.” DHS further stated at the time that a key element of its plan has been the disaggregation of technology procurements. As we reported in March 2014, this recommendation was not intended to imply that DHS needed to re-aggregate the ATP's seven programs into a “system of systems” or change its procurement strategy in any form. The intent of the recommendation was for DHS to insert the individual schedules for each of the ATP's programs into a single electronic Integrated Master Schedule file in order to identify any resource allocation issues among the programs' schedules. We continue to believe that developing and maintaining an Integrated Master Schedule for planned technologies could allow CBP insight into current or programmed allocation of resources for all programs as opposed to attempting to resolve any resource constraints for each program individually. Life-cycle Cost Estimates. In March 2014, we also reported that the life-cycle cost estimates for the technology programs under the ATP reflected some, but not all, best practices. Cost-estimating best practices are summarized into four characteristics—well documented, comprehensive, accurate, and credible. Our analysis of CBP's estimate for the ATP and estimates completed at the time of our March 2014 review for the two highest-cost programs—the IFT and RVSS programs—showed that these estimates at least partially met three of these characteristics: well documented, comprehensive, and accurate. In terms of being credible, these estimates had not been verified with independent cost estimates in accordance with best practices. 
We concluded that verifying life-cycle cost estimates with independent estimates in accordance with cost-estimating best practices could help better ensure the reliability of the cost estimates, and we recommended that CBP verify the life-cycle cost estimates for the IFT and RVSS programs with independent cost estimates and reconcile any differences. DHS concurred with this recommendation, but stated then that it did not believe that there would be a benefit in expending funds to obtain independent cost estimates and that if the costs realized to date continued to hold, there may be no requirement or value added in conducting full program updates with independent cost estimates. We recognize the need to balance the cost and time to verify the life-cycle cost estimates with the benefits to be gained from verification with independent cost estimates. As part of our updates on CBP's efforts to implement our 2014 recommendations, CBP officials told us that in fiscal year 2016, DHS's Cost Analysis Division (CAD) would begin piloting DHS's independent cost estimate capability on the RVSS program. According to CBP officials, this pilot is an opportunity to assist DHS in developing its independent cost estimate capability. CBP selected the RVSS program for the pilot because the program was at a point in its planning and execution process where it can benefit most from having an independent cost estimate performed, as these technologies are being deployed along the southwest border beyond Arizona. According to CBP officials, DHS's Cost Analysis Division completed its independent cost estimate for the RVSS program in August 2016, and in February 2017 CBP had completed its efforts to verify the RVSS program cost estimate with CAD's independent cost estimate, which is part of the CAD pilot. However, as of July 2017, CBP has not yet provided us with the final reconciliation of the independent cost estimate and the RVSS program cost estimate, as we recommended in 2014. 
CBP officials have not detailed similar plans for the IFT. We continue to believe that independently verifying the life-cycle cost estimates for the IFT and RVSS programs and reconciling any differences, consistent with best practices, could help CBP better ensure the reliability of the estimates. We reported in March 2014 that CBP had identified mission benefits of its surveillance technologies to be deployed along the southwest border, such as improved situational awareness and agent safety. However, the agency had not developed key attributes for performance metrics for all surveillance technologies to be deployed, as we recommended in November 2011. Further, we also reported in March 2014 that CBP did not capture complete data on the contributions of these technologies, which, in combination with other relevant performance metrics or indicators, could be used to better determine the impact of CBP's surveillance technologies on CBP's border security efforts and inform resource allocation decisions. We found that CBP had a field within its Enforcement Integrated Database for data on whether technological assets, such as SBInet surveillance systems, and non-technological assets, such as canine teams, assisted or contributed to the apprehension of illegal entrants and seizure of drugs and other contraband; however, according to CBP officials, Border Patrol agents were not required to record these data. This limited CBP's ability to collect, track, and analyze available data on asset assists to help monitor the contribution of surveillance technologies, including its SBInet system, to Border Patrol apprehensions and seizures and inform resource allocation decisions. 
We recommended that CBP require data on asset assists to be recorded and tracked within its database, and once these data were required to be recorded and tracked, that it analyze available data on apprehensions and technological assists—in combination with other relevant performance metrics or indicators, as appropriate—to determine the contribution of surveillance technologies to CBP’s border security efforts. CBP concurred with our recommendations and has implemented one of them. Specifically, in June 2014, CBP issued guidance informing Border Patrol agents that the asset assist data field within its database was now a mandatory data field. Therefore, agents are required to enter any assisting surveillance technology or other equipment. Further, as part of our updates on CBP’s efforts to implement our 2014 recommendations, we found that in May 2015, CBP had identified a set of potential key attributes for performance metrics for all technologies to be deployed under the ATP. However, CBP officials stated at that time that this set of performance metrics was under review as the agency continued to refine the key attributes for metrics to assess the contributions and impacts of surveillance technology on its border security mission. In our April 2016 update on the progress made by agencies to address our findings on duplication and cost savings across the federal government, we reported that CBP had modified its time frame for developing baselines for each performance measure and that additional time would be needed to implement and apply key attributes for metrics. According to CBP officials, CBP expected these performance measure baselines to be developed by the end of calendar year 2015, at which time the agency planned to begin using the data to evaluate the individual and collective contributions of specific technology assets deployed under the ATP. 
Moreover, CBP planned to use the baseline data to establish a tool that explains the qualitative and quantitative impacts of technology and tactical infrastructure on situational awareness in specific areas of the border environment by the end of fiscal year 2016. Although CBP had initially reported it expected to complete its development of baselines for each performance measure by the end of calendar year 2015, as of March 2016 it was adjusting the completion date, pending test and evaluation results for technologies recently deployed to the southwest border. In our April 2017 update on the progress made by agencies to address our findings on duplication and cost savings across the federal government, we reported that CBP had provided us with a case study that assessed technology assist data, along with other measures such as field-based assessments of capability gaps, to determine the contributions of surveillance technologies to its mission. This is a helpful step in developing and applying performance metrics. However, the case study was limited to one border location, and the analysis was limited to select technologies. To fully implement our recommendation, CBP should complete its efforts to develop and apply key attributes for performance metrics for all deployed technologies and begin using the data to evaluate the individual and collective contributions of specific technologies. Until CBP completes this effort, it will not be well positioned to fully assess its progress in implementing the ATP or to determine when mission benefits have been fully realized. Chairwoman McSally, Ranking Member Vela, and Members of the Subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have. 
For further information about this testimony, please contact Rebecca Gambler at (202) 512-8777 or gamblerr@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony are Jeanette Espinola (Assistant Director), Yvette Gutierrez (Analyst in Charge), Charlotte Gamble, Ashley Davis, Claire Peachey, Marycella Mierez, and Sasan J. “Jon” Najmi. 2017 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-17-491SP. Washington, D.C.: April 26, 2017. 2017 Homeland Security Acquisitions: Earlier Requirements Definition and Clear Documentation of Key Decisions Could Facilitate Ongoing Progress. GAO-17-346SP. Washington, D.C.: April 6, 2017. Border Security: DHS Surveillance Technology, Unmanned Aerial Systems and Other Assets. GAO-16-671T. Washington, D.C.: May 24, 2016. 2016 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-16-375SP. Washington, D.C.: April 13, 2016. Homeland Security Acquisitions: DHS Has Strengthened Management, but Execution and Affordability Concerns Endure. GAO-16-338SP. Washington, D.C.: March 31, 2016. Southwest Border Security: Additional Actions Needed to Assess Resource Deployment and Progress. GAO-16-465T. Washington, D.C.: March 1, 2016. GAO Schedule Assessment Guide: Best Practices for Project Schedules. GAO-16-89G. Washington, D.C.: December 2015. Border Security: Progress and Challenges in DHS’s Efforts to Implement and Assess Infrastructure and Technology. GAO-15-595T. Washington, D.C.: May 13, 2015. Homeland Security Acquisitions: Addressing Gaps in Oversight and Information is Key to Improving Program Outcomes. GAO-15-541T. Washington, D.C.: April 22, 2015. Homeland Security Acquisitions: Major Program Assessments Reveal Actions Needed to Improve Accountability. 
GAO-15-171SP. Washington, D.C.: April 22, 2015. 2015 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-15-404SP. Washington, D.C.: April 14, 2015. Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness. GAO-14-411T. Washington, D.C.: March 12, 2014. Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness. GAO-14-368. Washington, D.C.: March 3, 2014. Border Security: Progress and Challenges in DHS Implementation and Assessment Efforts. GAO-13-653T. Washington, D.C.: June 27, 2013. Border Security: DHS’s Progress and Challenges in Securing U.S. Borders. GAO-13-414T. Washington, D.C.: March 14, 2013. U.S. Customs and Border Protection’s Border Security Fencing, Infrastructure and Technology Fiscal Year 2011 Expenditure Plan. GAO-12-106R. Washington, D.C.: November 17, 2011. Arizona Border Surveillance Technology: More Information on Plans and Costs Is Needed before Proceeding. GAO-12-22. Washington, D.C.: November 4, 2011. GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. GAO-09-3SP. Washington, D.C.: March 2009. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
CBP deploys land-based surveillance technologies to help monitor and secure the border and apprehend individuals who attempt to cross the border illegally. GAO has reported on the progress and challenges DHS and its components have faced implementing its border security efforts. This statement addresses (1) the status of CBP efforts to deploy land-based surveillance technologies along the southwest border and (2) CBP's efforts to measure the effectiveness of these technologies. This statement is based on GAO reports and testimonies from 2011 through 2016, selected updates conducted in 2017, and ongoing work for this subcommittee related to border surveillance technology. For ongoing work and updates, GAO analyzed technology program documents; interviewed DHS, CBP, and U.S. Border Patrol officials; and conducted site visits to Arizona and Texas to observe technologies. U.S. Customs and Border Protection (CBP), a component of the Department of Homeland Security (DHS), has made progress deploying surveillance technology along the southwest U.S. border under its 2011 Arizona Technology Plan (ATP) and 2014 Southwest Border Technology Plan. The ATP called for deployment of a mix of radars, sensors, and cameras in Arizona, and the 2014 Plan incorporates the ATP and includes deployments to the rest of the southwest border, beginning with areas in Texas and California. As of July 2017, CBP completed deployment of select technologies to areas in Arizona, Texas, and California. For example, CBP deployed all planned Remote Video Surveillance Systems (RVSS) and Mobile Surveillance Capability (MSC) systems, and 15 of 53 Integrated Fixed Tower (IFT) systems to Arizona. CBP also deployed all planned MSC systems to Texas and California and completed contract negotiations to deploy RVSS to Texas. CBP has made progress implementing some, but not all of GAO's recommendations related to managing deployments of its technology programs. 
In 2014, GAO assessed CBP's implementation of the ATP and recommended that CBP (1) apply scheduling best practices; (2) develop an integrated schedule; and (3) verify cost estimates for the technology programs. DHS concurred with some, but not all, of the recommendations and has taken actions to address some of them, such as applying best practices when updating schedules, but has not taken action to address others, such as developing an integrated master schedule and verifying cost estimates with independent estimates for the IFT program. GAO continues to believe that applying schedule and cost-estimating best practices could better position CBP to strengthen its management of these programs. CBP has also made progress toward assessing the performance of surveillance technologies. GAO reported in 2014 that CBP identified some mission benefits, such as improved situational awareness and agent safety, but had not developed key attributes for performance metrics for all technologies, as GAO recommended (and CBP concurred) in 2011. GAO has ongoing work examining DHS's technology deployments and efforts to assess technology performance, which GAO plans to report on later this year. GAO has made recommendations to DHS to improve its management of plans and programs for surveillance technologies. DHS has generally agreed and has taken actions, or described planned actions, to address some of these recommendations. GAO continues to believe that these recommendations could strengthen CBP's management efforts and will continue to monitor CBP's progress in implementing them.
Medicaid is a federal-state partnership that finances health care for certain low-income individuals, including children, families, the aged, and the disabled. More than 64 million persons were enrolled in the Medicaid program for fiscal year 2009. The Centers for Medicare & Medicaid Services (CMS) reported combined fiscal year 2009 and 2010 Medicaid program spending of $744 billion, $499 billion of which was funded by the federal government. The federal government matches most state Medicaid expenditures for covered services according to the FMAP, which is based on a statutory formula drawing on each state's annual per capita income. Because of the mechanism through which the Recovery Act increased the federal share of funding for Medicaid through an increased FMAP, any provider that received Medicaid reimbursements during 2009 received Recovery Act funds. Within broad federal requirements, each state operates and administers its Medicaid program in accordance with a CMS-approved state Medicaid plan. These plans detail the populations served, the services covered, and the methods used to calculate payments to providers. Title XIX of the Social Security Act allows considerable flexibility within the states' Medicaid plans. Within broad national guidelines established by federal statutes, regulations, and policies, each state (1) establishes its own eligibility standards; (2) determines the type, amount, duration, and scope of services; (3) sets the rate of payment for services; and (4) administers its own program—including enrollment of providers. All states must provide certain services, such as inpatient and outpatient hospital services, nursing facility services, and physician services, and may provide additional, optional services, such as prescription drugs, dental care, and certain home- and community-based services. Medicaid policies for eligibility, services, and payment are complex and vary considerably, even among states of similar size or geographic proximity. Thus, a person who is eligible for Medicaid in one state may not be eligible in another state, and the services provided by one state may differ considerably in amount, duration, or scope from services provided in a similar or neighboring state. 
Federal reimbursement for Medicaid generally begins after a Medicaid beneficiary receives care from a health care provider, such as a hospital, physician, or nursing home, for services covered under the state's Medicaid plan. The provider then files a claim with the state. After the claim is approved by the state, the state pays the provider from a combination of state funds and federal funds, the latter of which have been advanced by CMS each quarter. The state then files a quarterly expenditure report, in which it claims the federal share of the Medicaid expenditure as reimbursement for its payment to providers and reconciles its total expenditures with the federal advance. In addition to reimbursement for medical services, the state may claim federal reimbursement for functions it performs to administer its Medicaid program, such as enrolling new beneficiaries; reviewing the appropriateness of providers' claims; and collecting payments from third parties, which are payers other than Medicaid, such as Medicare, that may be liable for some or all of a particular health claim. Federal law does not prohibit providers with unpaid federal taxes from enrolling in or receiving payments from Medicaid. Federal regulations and policies require the states, as part of their responsibilities for determining whether the providers meet Medicaid requirements for enrollment, to verify basic information on potential providers, including whether the providers meet state licensure requirements and whether the providers are prohibited from participating in federal health care programs. 
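As background on the matching mechanism described above, the FMAP is set by a statutory formula in section 1905(b) of the Social Security Act. In simplified form, leaving aside temporary adjustments such as the Recovery Act increase, it can be written as:

```latex
\[
\mathrm{FMAP} \;=\; 1 \;-\; 0.45 \times
\frac{(\text{state per capita income})^{2}}{(\text{U.S. per capita income})^{2}},
\qquad 0.50 \;\le\; \mathrm{FMAP} \;\le\; 0.83
\]
```

Under this formula, a state whose per capita income equals the national average receives a 55 percent federal match; lower-income states receive a higher match up to the 83 percent ceiling, and higher-income states receive no less than the 50 percent floor.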
However, federal regulations and policies do not require the states to screen these providers for federal tax delinquency, nor do they explicitly authorize the states to reject providers that have delinquent tax debt from participation in Medicaid. Further, federal law generally does not permit IRS to disclose taxpayer information, including tax debts, unless the taxpayer consents. IRS may levy a taxpayer's property to satisfy a tax debt. For instance, IRS could seize and sell property that a taxpayer holds (such as the taxpayer's car, boat, or house), or IRS could seize property that belongs to the taxpayer but is held by someone else (such as the taxpayer's wages, retirement accounts, dividends, bank accounts, licenses, rental income, accounts receivable, or commissions). Currently, IRS may issue a one-time notice of levy to a state Medicaid agency to collect the receivable balance immediately due to a given provider. IRS may then issue additional successive one-time levies if the proceeds received from the initial levy are not sufficient to satisfy the government's claim. A provision of the Taxpayer Relief Act of 1997 authorizes IRS to continuously levy (typically using an automated process) certain federal payments made to delinquent taxpayers in order to collect tax debt, but Medicaid reimbursements have never been collected using this provision of the law. This is because IRS determined that Medicaid disbursements do not qualify as federal payments and thus may not be subject to the continuous levy. This decision was based on the nature of the Medicaid reimbursement as a state entitlement and on the considerable operational discretion vested in state agencies in the administration of the Medicaid program, including discretion to create unique eligibility standards for enrollment of providers and to establish criteria for disbursement of funds. 
Our analysis found that, as of September 30, 2011, about 7,000 Medicaid providers in the three selected states had approximately $791 million in unpaid federal taxes from 2009 or earlier. These providers accumulated an additional $59 million in unpaid federal taxes during 2010 and 2011. These providers represent about 5.6 percent of the approximately 125,000 Medicaid providers reimbursed by the selected states during 2009. These 7,000 Medicaid providers with unpaid federal taxes received a total of about $6.6 billion in Medicaid reimbursements during 2009, which included Recovery Act funds. The amount of unpaid federal taxes we identified among Medicaid providers is likely understated because the IRS taxpayer data reflect only the amount of unpaid taxes either reported by the taxpayer on a tax return or assessed by IRS through its various enforcement programs; they generally do not include amounts owed by entities that did not file tax returns or that underreported their income. As shown in figure 1, about 77 percent of the approximately $791 million in unpaid federal taxes was made up of individual income taxes, corporate income taxes, and payroll taxes. The other 23 percent included excise taxes, miscellaneous penalties, and other types of taxes. Over 40 percent of the unpaid federal taxes owed by Medicaid providers in these three states were payroll taxes. Employers are subject to civil and criminal penalties if they do not remit payroll taxes to the federal government. When an employer withholds taxes from an employee's wages, the employer is deemed to have a responsibility to hold these amounts “in trust” for the federal government until the employer makes a federal tax deposit in that amount. To the extent these withheld amounts are not forwarded to the federal government, the employer is liable for these amounts, as well as the employer's matching Federal Insurance Contributions Act contributions for Social Security and Medicare. 
Individuals within a business (e.g., corporate officers) may be held personally liable for the withheld amounts not forwarded, and they may be assessed a civil monetary penalty known as a trust fund recovery penalty (TFRP). Willful failure to remit payroll taxes can also be a criminal felony offense punishable by imprisonment of up to 5 years, while the failure to properly segregate payroll taxes can be a criminal misdemeanor offense punishable by imprisonment of up to 1 year. A substantial amount of the unpaid federal taxes shown in IRS records as owed by Medicaid providers has been outstanding for several years. As shown in figure 2, about 51 percent of the $791 million in unpaid federal taxes was for tax periods from 2004 through 2007, and approximately 21 percent was for tax periods prior to 2004. Our previous work has shown that as unpaid taxes age, the likelihood of collecting all or a portion of the amount owed decreases. This is due, in part, to the continued accrual of interest and penalties on the outstanding tax debt, which, over time, can dwarf the original tax obligation. The amount of unpaid federal taxes reported above does not include all tax debts owed by Medicaid providers, due to statutory provisions that give IRS a finite period in which it can seek to collect unpaid taxes. There is a 10-year statute of limitations beyond which IRS is prohibited from attempting to collect tax debt. Consequently, if Medicaid providers have unpaid federal taxes from beyond the 10-year statutory collection period, the older tax debt may have been removed from IRS's records. We were unable to determine whether any tax debt had been removed for these providers on this basis and, if so, the amount that had been removed. 
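How accrued interest and penalties can come to dwarf an original assessment can be illustrated with a simple accrual sketch. The 4 percent annual interest rate, 0.5 percent monthly failure-to-pay penalty, and 25 percent penalty cap below are illustrative assumptions only, not IRS's actual computation, which compounds differently and uses rates that change over time.

```python
# Illustrative sketch (not IRS's actual method): shows how interest and a
# capped failure-to-pay penalty can grow an unpaid assessment over time.
# All rates are assumptions chosen for illustration.

def grow_debt(principal, years, annual_interest=0.04,
              monthly_penalty=0.005, penalty_cap=0.25):
    """Accrue a simple monthly penalty (capped as a share of principal)
    plus annually compounded interest on the principal balance."""
    penalty = min(principal * monthly_penalty * years * 12,
                  principal * penalty_cap)
    balance = principal
    for _ in range(years):
        balance *= 1 + annual_interest  # interest compounds annually here
    return round(balance + penalty, 2)

# A hypothetical $100,000 assessment from 2004, viewed 8 years later.
debt_2004 = grow_debt(100_000, years=8)
```

Under these assumptions the hypothetical $100,000 debt grows to roughly $162,000 after 8 years, over 60 percent above the original obligation, and the gap widens each additional year the debt remains uncollected.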
Although $791 million in unpaid federal taxes owed by Medicaid providers in the selected states as of September 30, 2011, is a significant amount, it likely understates the full extent of unpaid taxes owed by these or other businesses and individuals. The IRS tax database reflects only the amount of unpaid federal taxes either reported by the individual or business on a tax return or assessed by IRS through its various enforcement programs. The IRS database does not reflect amounts owed by businesses and individuals that have not filed tax returns and for which IRS has not assessed tax amounts due. Further, our analysis did not attempt to account for businesses or individuals that purposely underreported income and were not specifically identified by IRS as owing the additional federal taxes. According to IRS, underreporting of income accounted for more than 80 percent of the estimated $450 billion gross tax gap for tax year 2006. As discussed below, some of our case-study examples include individuals and businesses that did not file required tax returns or filed inaccurate ones. Further, in our calculations of the magnitude of tax debt, we did not attempt to broadly identify instances where a Medicaid provider owed taxes under a TIN different from the one under which the provider received Medicaid reimbursements. For example, if a sole proprietor filed Medicaid claims under his or her business's Employer Identification Number (EIN) but owed personal income taxes under his or her own Social Security Number (SSN), we would not have been able to match the proprietor's Medicaid claims to his or her debt. Consequently, the extent of unpaid federal taxes for Medicaid providers may be understated, since we may not have had all relevant TINs for each Medicaid provider that owes tax debt. However, we were able to identify several case-study examples of this phenomenon, as discussed below. 
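The matching limitation described above can be shown with a short sketch. All TINs and dollar amounts below are invented; the point is only that a join strictly on TIN cannot connect a proprietor's Medicaid claims (billed under an EIN) to personal tax debt recorded under an SSN.

```python
# Hypothetical records illustrating the TIN-matching limitation.
medicaid_payments = {        # TIN under which reimbursements were claimed
    "EIN-11-111": 250_000,   # sole proprietor bills under the business EIN
    "SSN-222-22": 80_000,    # another provider bills under a personal SSN
}
irs_tax_debts = {            # TIN under which tax debt is recorded
    "SSN-333-33": 45_000,    # the same proprietor's personal debt: different TIN
    "SSN-222-22": 12_000,
}

# The match joins strictly on TIN, so the proprietor's $45,000 personal
# debt under SSN-333-33 is never linked to the EIN-11-111 reimbursements.
matched = {tin: (medicaid_payments[tin], irs_tax_debts[tin])
           for tin in medicaid_payments if tin in irs_tax_debts}
```

Only the provider that billed and owed under the same TIN appears in the match, which is why an analysis built this way understates the total debt.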
When we reviewed each state's Medicaid data, we concluded that the data from New York, Texas, and Florida were sufficiently reliable for the purposes of our study. However, we determined, through data tests, interviews, and reviews of state audit reports, that the Medicaid data from California for 2009 were unreliable. California provided us with $38.4 billion in transactional data but reported $41.8 billion in net expenditures to CMS, a difference of $3.4 billion (8.3 percent). When we asked California officials why the amounts in the data they provided did not reconcile to externally published sources, they told us that they were unable to reconcile the data. We have notified the Department of Health and Human Services Office of Inspector General so that it can take any actions it deems appropriate. We reviewed 40 Medicaid providers with unpaid federal taxes (20 with unpaid business taxes and 20 with unpaid individual taxes) and 10 additional cases where the provider did not have unpaid federal taxes but one of its principals did. In each case, the provider received significant reimbursement payments from Medicaid, including Recovery Act funds, while having unpaid federal taxes. These case studies are intended to illustrate the sizeable amounts of unpaid federal taxes owed by some Medicaid providers; they are among the most egregious examples of Medicaid providers with unpaid federal taxes we identified and cannot be generalized beyond the cases presented. In each of these 40 cases, the provider received significant reimbursement payments from Medicaid (which included Recovery Act funds) while owing at least $100,000 in unpaid federal taxes. In many cases, IRS records showed abusive or potentially criminal activity related to the federal tax system. For example, all 20 of the business providers we reviewed owed delinquent payroll taxes. 
As discussed previously, businesses and organizations with employees are required by law to collect, account for, and transfer to IRS income and employment taxes withheld from employees' wages; failure to do so may result in civil or criminal penalties. We also found instances of providers repeatedly entering into and then defaulting on installment agreements with IRS, or sending IRS bad checks. Thirty of the 40 providers did not file a tax return, or filed late, at least once in the last 10 years. These 40 providers received a total of $235 million in Medicaid reimbursements. The case-study providers represent a broad range of provider types, such as doctors, dentists, home care providers, hospitals, durable medical equipment suppliers, and social services providers. The amount of unpaid federal taxes associated with these case studies is about $26 million in total, ranging from approximately $100,000 (the minimum threshold used to draw our sample) to over $6 million individually; these figures include all known unpaid debts for tax periods through 2010. IRS has taken collection actions (e.g., levying assets, filing federal tax liens, assessing a TFRP) against all 40 of these recipients. We note that at least 13 of these recipients had scheduled Medicaid reimbursements subjected to onetime levy by IRS to pay delinquent taxes on at least one occasion. In one case, IRS collected hundreds of thousands of dollars from the taxpayer using these levies. Several of these providers have also had actions taken against their professional licenses or have been fined by state oversight agencies for regulatory violations. Table 1 highlights 10 Medicaid providers with unpaid federal taxes. Thirty additional cases can be found in appendix II. We have referred all 40 providers to IRS for further investigation, as appropriate. We examined 10 additional cases of individuals who had unpaid federal taxes while appearing to serve as a principal for a Medicaid provider that did not have known tax debt. 
For the principals that we examined, their known unpaid federal taxes ranged from $4,000 to $1.3 million. These individuals reported to IRS receiving from $30,000 to $300,000 in wages or other payments from a Medicaid provider, with 8 of the 10 cases involving total payments exceeding $100,000. The providers they worked for received from $1,000 to $50 million in Medicaid reimbursements. In three of these cases, medical professionals submitted their names as payees to the state Medicaid agency, along with a TIN other than their personal SSN. In all three cases, this secondary TIN did not have associated tax debt, but the doctors each had personal tax debt under their SSNs, ranging from about $20,000 to over $60,000. These doctors received between $15,000 and $150,000 from Medicaid through their secondary TIN. In another case, we identified two officers with unpaid federal taxes totaling approximately $370,000 at a nonprofit provider that received over $6 million in Medicaid reimbursements. Each officer reported a salary in excess of $100,000 for 2009. Finally, in one case, we identified a doctor who, according to IRS, “had a history of noncompliance … and avoidance of payment of taxes” resulting in over $1 million in delinquent personal income taxes (including fines and penalties), while the business he/she owned received under $2,000 in Medicaid reimbursements. IRS collected a portion of the outstanding debt by garnishing the doctor’s wages at his company after the doctor defaulted on an installment agreement. Increased levy of Medicaid reimbursements could help IRS collect millions of dollars of unpaid federal taxes owed by Medicaid providers. 
IRS may levy a taxpayer's property to satisfy a tax debt, but it currently may subject Medicaid reimbursements only to a onetime levy, not a continuous levy, because Medicaid reimbursements are not considered "federal payments." We estimate that if IRS were able to continuously levy Medicaid reimbursements, it could collect from $22 million to $330 million from the three selected states for 2009, depending on the circumstances of the levy and certain provider behaviors. Alternatively, manual continuous levies (levies that are physically mailed by IRS at its discretion) targeted against providers that owed a significant amount of tax debt and received large Medicaid reimbursements may represent a lower-cost opportunity to collect unpaid federal taxes. The states that we spoke to expressed concerns over the use of continuous levies and also described problems related to the enforcement of onetime levies. IRS may issue a onetime notice of levy to a state Medicaid agency to collect the receivable balance immediately due to a given provider, to the extent the provider owes federal taxes. However, IRS can collect only funds that are due to the provider at the moment the levy is received by the state Medicaid agency. To the extent the initial levy does not collect the full amount of unpaid federal taxes due, IRS must issue subsequent onetime levy notices to collect a provider's Medicaid reimbursements due from the state Medicaid agency. In comparison, a continuous levy remains in effect until IRS releases it and, if permitted to apply to Medicaid payments, could be applied automatically to any future requests for Medicaid reimbursement without additional levy notices. For example, one mechanism that IRS uses to implement continuous levies is an automated system referred to as the Federal Payment Levy Program (FPLP). 
Through FPLP, IRS collected $614 million in fiscal year 2011 and has collected over $3.26 billion since the program was implemented in 2000 (including collection of Medicare payments made after fiscal year 2008). Under the FPLP, each week IRS sends the Department of the Treasury's Financial Management Service (FMS) an extract of its tax debt files. These files are uploaded into the Treasury Offset Program. FMS sends payment data to this offset program to be matched against unpaid federal taxes. If there is a match, and IRS has updated the weekly data sent to the offset program to reflect that it has completed all statutory notifications, any federal payment owed to the debtor is reduced (levied) to help satisfy the unpaid federal taxes. Current federal law does not allow IRS to subject Medicaid reimbursements to continuous levy. At a 2007 hearing held by the Senate Homeland Security & Governmental Affairs Committee, IRS and Department of the Treasury officials testified that the FPLP could not be used to offset Medicaid reimbursements because such payments do not meet the criteria established to be considered "federal payments." In addition, they noted that, unlike Medicare payments, which are disbursed by the federal government, Medicaid reimbursements to providers are issued by the states, introducing additional legal and operational complexities not present under Medicare. A joint task force of IRS, CMS, and Department of the Treasury officials studied the matter and concurred with the IRS assertion that, because Medicaid reimbursements are not "federal payments," they cannot be subjected to continuous levy. The task force considered, but did not conduct, a comprehensive cost-benefit analysis of the potential impact of a change in legislation defining Medicaid reimbursements as "federal payments." Because a comprehensive study was not conducted, the full costs associated with implementing a continuous levy program for Medicaid payments are unknown. 
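At a high level, the weekly match described above resembles the following simplified sketch. The data layout, the flat 15 percent rate, and the omission of notification-status and exclusion logic are all simplifications of the actual FPLP, included only to make the offset mechanics concrete; all TINs and amounts are hypothetical.

```python
# Simplified sketch of an FPLP-style offset: match incoming federal payments
# against a tax-debt extract by TIN and levy a fixed share of each matched
# payment, never taking more than the remaining balance due.

LEVY_RATE = 0.15  # continuous levies of federal payments are generally 15 percent

def offset_payments(payments, debt_extract):
    """payments: list of (tin, amount) pairs for one cycle.
    debt_extract: {tin: balance_due}; mutated as levies are applied.
    Returns {tin: amount_collected} for the cycle."""
    collected = {}
    for tin, amount in payments:
        balance = debt_extract.get(tin, 0)
        if balance <= 0:
            continue  # no match, or debt already satisfied: payment proceeds in full
        levy = min(amount * LEVY_RATE, balance)
        debt_extract[tin] = balance - levy
        collected[tin] = collected.get(tin, 0) + levy
    return collected

# One hypothetical cycle: only the first payee appears in the debt extract.
weekly = offset_payments([("12-3456789", 10_000), ("98-7654321", 5_000)],
                         {"12-3456789": 40_000})
```

Because the debt extract persists across cycles, each later payment to the same TIN is levied automatically until the balance reaches zero, which is what distinguishes a continuous levy from the onetime notices described earlier.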
Several bills have since been introduced that would add Medicaid to the definition of "federal payment," but none have become law. For the 7,000 delinquent Medicaid providers we identified in three states, had such an automated continuous levy system been in place, we estimate that between $22 million and $55 million could have been collected to offset unpaid federal taxes in 2009. These estimates exclude providers whom IRS identified as currently precluded from continuous levy for statutory or policy reasons. Cases excluded from the FPLP for statutory reasons include tax debtors for whom IRS had not completed its notification process, as well as those who filed for bankruptcy protection or other litigation, agreed to pay their tax debt through monthly installment payments, or requested to pay less than the full amount owed through an offer in compromise. Cases excluded from the FPLP for policy reasons include tax debtors whom IRS has determined to be in financial hardship, those filing an amended return, certain cases under criminal investigation, and cases in which IRS has determined that the specific circumstances warrant exclusion from the FPLP. The low-end estimate presumes each Medicaid reimbursement would be levied at a 15 percent rate; the high-end estimate presumes a 100 percent levy rate. However, these estimates do not account for potential changes in provider participation after receipt of a notice of levy. For instance, officials at one state we spoke to noted that it had seen individual providers discontinue services after a levy of a large portion of an expected reimbursement. Under ideal circumstances (i.e., a 100 percent levy with no statutory or policy exclusions and no decrease in provider participation), the absolute maximum that IRS could have offset for these 7,000 providers in 2009 would be about $330 million. 
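The arithmetic behind the low- and high-end figures can be sketched as follows, using two hypothetical providers: each levy takes the levy rate times the reimbursement, but never more than the provider's outstanding tax debt.

```python
# Sketch of the estimate's arithmetic with hypothetical providers.
# A levy collects rate * reimbursement, capped at the provider's tax debt.

def estimate_collections(providers, rate):
    """providers: list of (reimbursement, tax_debt) pairs."""
    return sum(min(rate * paid, debt) for paid, debt in providers)

providers = [
    (1_000_000, 50_000),   # large reimbursement: levy capped by the smaller debt
    (200_000, 500_000),    # large debt: levy limited by the reimbursement itself
]

low = estimate_collections(providers, 0.15)   # 15 percent levy rate
high = estimate_collections(providers, 1.00)  # 100 percent levy rate
```

With these two hypothetical providers, the 15 percent rate yields $80,000 and the 100 percent rate yields $250,000. The cap binds in both directions, which is why the realistic range sits well below the sum of all debts and why the $330 million ceiling assumes a 100 percent levy with no exclusions.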
These estimates do not account for the potential costs associated with implementing a large-scale automated continuous levy program for Medicaid reimbursements. Because such an estimate is not currently available, further study would be required to determine the feasibility of a large-scale automated collection program, even though the potential for extensive collections may exist. Alternatively, manual continuous levies targeted at providers with large reimbursements and significant tax debt could require a smaller investment by state and federal entities than a continuous levy of all providers; for example, we identified 32 such providers whose cumulative 2009 Medicaid payments were about $310 million and whose cumulative unpaid federal taxes were about $241 million. It is not clear what effect a large-scale systematic program for continuously levying Medicaid reimbursements would have on Medicaid provider participation. When we asked selected states how current onetime federal levy or continuous state levy activities affect provider participation, four of the five states told us that they did not believe that their current levy activities had a broad effect on Medicaid provider participation. However, these states also noted that it would be difficult to judge, given how infrequently such levies occur. One state suggested that providers could begin billing in another state where payments are not offset, or that they might change their TIN to avoid levies. As noted previously, one state did report seeing individual providers discontinue services after a levy of a large portion of an expected reimbursement. Two of the states also mentioned that they levy Medicaid reimbursements to collect state debts without seeing a broad effect on provider participation. Several of the states we spoke with described a trend toward using Managed Care Organizations (MCO) to administer Medicaid benefits. Since states pay the MCO instead of the provider that performs the services, the only entity against which the state could enforce an IRS levy would be the MCO. 
This would limit the population of Medicaid providers eligible for a levy to MCOs and to providers who are paid directly by the state. The states expressed concern over the idea of levying Medicaid reimbursements to pay an MCO's debt when the reimbursement is truly meant for services provided by treating providers that have no association with the MCO's tax debt. The states also expressed concerns related to the existing process for the enforcement of IRS onetime levies. For example, several states experienced customer service–related challenges when working with IRS, including difficulty using the IRS customer service hotline, difficulty reaching the responsible IRS revenue officer, or problems with IRS sending levies to the wrong address. Another state commented that IRS does a poor job of releasing levies in a timely manner, especially uncollectible levies. One state noted that it had concerns with applying a levy when the provider name or TIN in the state's Medicaid provider database does not exactly match what IRS provides. For example, the state explained that associating a Medicaid reimbursement with the appropriate tax debtor can be a challenge because the state's system may include more than one TIN for a given provider. Should IRS expand levy collection efforts for Medicaid, increased centralized coordination with the states could ease the process. Available data indicate that the vast majority of Medicaid providers appear to fully pay their federal taxes. However, our work has shown that in 2009 about 7,000 Medicaid providers in three states had delinquent federal taxes while receiving billions of dollars in Medicaid reimbursements, including Recovery Act funds. Even though Medicaid providers are relied on to deliver significant medical services to those most in need, payment of billions of federal dollars to those who do not pay their fair share of federal taxes raises questions about the integrity and fairness of the tax system. 
Our cases provide illustrative examples where IRS was able, in some instances, to collect delinquent taxes by using onetime levies on Medicaid reimbursements, but the process is highly inefficient. While current federal law does not permit the continuous levy of Medicaid payments, our estimates suggest that expanded use of levies against Medicaid providers, specifically an aggressive automated program, has the potential to help IRS collect millions of dollars of unpaid federal taxes, though the effect on provider participation is largely unknown. Enhanced onetime or manual continuous levy programs targeted at high-reimbursement, high-debt Medicaid providers could also potentially yield increased tax collections. Given that we found over $6 billion of payments made to tax-delinquent Medicaid providers in just three states, a more rigorous review of the potential costs and financial benefits of implementing enhanced continuous and other levies of Medicaid payments is warranted. We recommend that the Commissioner of Internal Revenue do the following: Explore further opportunities to enhance collection of unpaid federal taxes from Medicaid providers. This should include conducting a cost-benefit analysis of the implementation of a continuous levy program and expanded use of levies against providers with large Medicaid payments and significant unpaid federal taxes. Where appropriate, IRS should seek legislation to modify existing law to allow for more efficient collection of outstanding tax debts from Medicaid providers (i.e., consider taking steps to modify 26 U.S.C. § 6331(h)(2)). In addition, IRS should coordinate with CMS and FMS as necessary in exploring these opportunities. We provided a draft of our report to IRS, CMS, and FMS for review and comment. In its written comments (see app. 
III), IRS concurred with our recommendation to explore opportunities to enhance collection of unpaid federal taxes from Medicaid providers and noted that previous efforts have revealed significant operational challenges. Similarly, in its written comments (see app. IV), CMS noted that the structure of the Medicaid program (wherein the federal government does not have a direct relationship with providers or pay them directly) provides a programmatic basis for excluding Medicaid from the levy program, and may result in significant challenges to the implementation of an FPLP-style levy expansion. CMS further noted that any potential legislation related to the collection of outstanding tax debts from Medicaid providers may impact the basic structure of the Medicaid program. FMS provided technical comments by e-mail, which were incorporated into this report. Both CMS and FMS noted that they are prepared to coordinate with IRS in exploring opportunities to enhance levy collections from Medicaid providers. We recognize the challenges expressed by IRS and CMS, and are encouraged by the willingness of all parties to work in coordination toward an enhanced Medicaid provider levy program that is beneficial to all affected agencies. As agreed with your offices, unless you publicly release this report’s contents earlier, we plan no further distribution of it until 6 days from its date. At that time, we will send copies of this report to interested congressional committees, the Secretary of the Treasury, the Commissioner of the Financial Management Service (FMS), the Commissioner of Internal Revenue, the Acting Administrator of the Centers for Medicare & Medicaid Services (CMS), and other interested parties. The report is also available at no charge on the GAO website at http://www.gao.gov. If you have any questions concerning this report, please contact Richard J. Hillman at (202) 512-6722 or hillmanr@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Our objectives were to (1) determine the magnitude of unpaid federal taxes owed by Medicaid providers receiving reimbursements during 2009 in selected states, (2) provide examples of Medicaid providers who have significant unpaid federal taxes, and (3) evaluate opportunities and challenges related to collecting unpaid federal taxes through a levy process designed to offset Medicaid reimbursements. To determine the magnitude of unpaid federal taxes owed by Medicaid providers in selected states receiving reimbursements during 2009, we obtained and analyzed annual Medicaid reimbursement information from the states of New York, Texas, and Florida. We attempted to obtain data from the state of California, but the data we received were determined to be unreliable for the purposes of this report. We selected these states because they received the most American Recovery and Reinvestment Act of 2009 (Recovery Act)–related Medicaid money. For the purposes of this report, the term "provider" refers to any individual, business, or other entity that received at least one Medicaid reimbursement (e.g., doctors, hospitals, home care providers) from at least one of the three selected states. From our analysis we excluded unpaid federal taxes IRS classified as compliance assessments or memo accounts for financial reporting, unpaid federal taxes from 2010 and 2011 tax periods, and recipients with total unpaid federal taxes of $100 or less. These criteria were used to exclude unpaid federal taxes that might be under dispute or generally duplicative or invalid, and unpaid federal taxes that were recently incurred. Specifically, compliance assessments or memo accounts were excluded because these taxes have neither been agreed to by the taxpayers nor affirmed by the court, or they could be invalid or duplicative of other taxes already reported. 
We excluded known unpaid federal taxes from 2010 and 2011 tax periods to both eliminate tax debt that may involve matters that are routinely resolved between the taxpayers and IRS with the taxes paid or abated within a short time, and tax debts accrued after the Medicaid reimbursement period under review. We excluded tax debts of $100 or less because they are insignificant for the purpose of determining the extent of known taxes owed by Medicaid providers. Using these criteria, we identified about 7,000 Medicaid providers with known unpaid federal taxes. Our final estimate of tax debt may include some debt that is covered under an active IRS installment plan or beyond normal statutory limits for debt collection. Our analysis determined the magnitude of known unpaid federal taxes owed by 2009 Medicaid providers in only New York, Texas, and Florida and cannot be generalized to other states or periods. To provide examples of Medicaid providers who have significant unpaid federal taxes, we selected 20 Medicaid providers with unpaid federal taxes in the IRS Business Master File (BMF) and 20 Medicaid providers with unpaid federal taxes listed in the IRS Individual Master File (IMF) for a detailed review. These nonrepresentative selections of providers were chosen by using a random sample of the 113 entities in the BMF and 26 individuals in the IMF with at least $100,000 in Medicaid reimbursements during 2009, $100,000 in outstanding unpaid federal taxes, and 5 years of accumulated unpaid federal taxes (noncontinuous) in or before 2010. In addition, we also used open-source information to identify the Social Security Number (SSN) for owners and other principals for 600 randomly selected known Medicaid providers in the selected states (a random selection of 200 per state for New York, Texas, and Florida). 
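The case-study selection thresholds described above can be expressed as a simple filter over provider records. The records below are hypothetical, and the filter is a sketch of the stated criteria: at least $100,000 in 2009 Medicaid reimbursements, at least $100,000 in outstanding unpaid federal taxes, and at least 5 (possibly noncontinuous) years with unpaid federal taxes in or before 2010.

```python
# Hypothetical provider records; the filter mirrors the selection thresholds.
providers = [
    {"tin": "A", "reimbursed": 450_000, "tax_debt": 160_000,
     "debt_years": [2003, 2005, 2006, 2008, 2009]},   # meets all three criteria
    {"tin": "B", "reimbursed": 90_000, "tax_debt": 300_000,
     "debt_years": [2004, 2005, 2006, 2007, 2008]},   # reimbursements too small
    {"tin": "C", "reimbursed": 800_000, "tax_debt": 95_000,
     "debt_years": [2007, 2008, 2009, 2010]},         # debt too small, 4 years
]

eligible = [p["tin"] for p in providers
            if p["reimbursed"] >= 100_000
            and p["tax_debt"] >= 100_000
            and len(set(p["debt_years"])) >= 5]
```

Only provider A passes all three tests; the random case-study sample was then drawn from the providers that survived this kind of screen.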
We electronically matched these individuals with IRS's tax debt data to identify their outstanding tax debts and to confirm their professional relationship with a nondebtor Medicaid provider. For these providers, we reviewed IRS and public records to develop 10 additional case studies. These 50 case studies serve to illustrate the sizeable amounts of taxes owed by some Medicaid providers, are among the most egregious examples of Medicaid providers with unpaid federal taxes, and cannot be generalized beyond the cases presented. To evaluate opportunities and challenges related to collecting unpaid federal taxes through a levy process designed to offset Medicaid reimbursements, we interviewed officials from relevant federal agencies and from selected states (chosen based on the size of their Medicaid programs or their participation in federal debt-collection programs, or both). We also reviewed applicable laws, regulations, and reports related to the issues of subjecting Medicaid reimbursements to tax levies, including the Federal Payment Levy Program. We conducted this audit from July 2010 through July 2012. We performed this audit in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our audit findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Initiation of our review was delayed significantly because California did not comply with our request for Medicaid provider payment data for over 8 months. What California ultimately provided was not sufficiently reliable for the purposes of our report. For the IRS unpaid assessments data, we reviewed the work we performed during our annual audit of IRS's financial statements and used a copy of the financial record file reviewed under that audit. 
While our financial statement audits have identified some data reliability problems associated with tracing IRS's tax records to source records, including errors and delays in recording taxpayer information and payments, these reliability issues are not relevant to our review. On the basis of the extensive testing for accuracy, existence, completeness, and timeliness of relevant variables, we determined that the IRS data were sufficiently reliable to address this report's objectives. For the selected states' Medicaid reimbursement databases from New York, Florida, and Texas, we interviewed officials in the selected states responsible for their respective databases. In addition, we performed electronic testing of specific data elements in the databases that we used to perform our work. On the basis of our discussions with agency officials, review of agency documents, and our own testing, we concluded that the data elements used for this report were sufficiently reliable for our purposes. We did not include data received from California because we were unable to conclude that the data elements we intended to use were sufficiently reliable for our purposes. We reached this conclusion because we were unable to reconcile the total balance of Medicaid reimbursements to the amount of reimbursements published in the state's quarterly expense report filed with CMS. When we asked California officials why the amounts in the data they provided did not reconcile to externally published sources, officials told us that they were unable to reconcile the data. We compared the amount of total payments listed in files sent to us by California officials to the annual net expenditures reported for 2009 on the CMS-64 Quarterly Expense Report and found a $3.4 billion difference. The following table provides 30 additional examples of 2009 Medicaid providers who received American Recovery and Reinvestment Act of 2009 (Recovery Act) funds, with sizeable outstanding federal tax debt. 
Recovery Act: Tax Debtors Have Received FHA Mortgage Insurance and First-Time Homebuyer Credits. GAO-12-592. Washington, D.C.: May 29, 2012.
Recovery Act: Thousands of Recovery Act Contract and Grant Recipients Owe Hundreds of Millions in Federal Taxes. GAO-11-485. Washington, D.C.: April 28, 2011.
Federal Tax Collection: Potential for Using Passport Issuance to Increase Collection of Unpaid Taxes. GAO-11-272. Washington, D.C.: March 10, 2011.
Medicare: Thousands of Medicare Providers Abuse the Federal Tax System. GAO-08-618. Washington, D.C.: June 13, 2008.
Tax Compliance: Federal Grant and Direct Assistance Recipients Who Abuse the Federal Tax System. GAO-08-31. Washington, D.C.: November 16, 2007.
Medicaid: Thousands of Medicaid Providers Abuse the Federal Tax System. GAO-08-17. Washington, D.C.: November 14, 2007.
Tax Compliance: Thousands of Organizations Exempt from Federal Income Tax Owe Nearly $1 Billion in Payroll and Other Taxes. GAO-07-1090T. Washington, D.C.: July 24, 2007.
Tax Compliance: Thousands of Organizations Exempt from Federal Income Tax Owe Nearly $1 Billion in Payroll and Other Taxes. GAO-07-563. Washington, D.C.: June 29, 2007.
Tax Compliance: Thousands of Federal Contractors Abuse the Federal Tax System. GAO-07-742T. Washington, D.C.: April 19, 2007.
Medicare: Thousands of Medicare Part B Providers Abuse the Federal Tax System. GAO-07-587T. Washington, D.C.: March 20, 2007.
Internal Revenue Service: Procedural Changes Could Enhance Tax Collections. GAO-07-26. Washington, D.C.: November 15, 2006.
Tax Debt: Some Combined Federal Campaign Charities Owe Payroll and Other Federal Taxes. GAO-06-887. Washington, D.C.: July 28, 2006.
Tax Debt: Some Combined Federal Campaign Charities Owe Payroll and Other Federal Taxes. GAO-06-755T. Washington, D.C.: May 25, 2006.
Financial Management: Thousands of GSA Contractors Abuse the Federal Tax System. GAO-06-492T. Washington, D.C.: March 14, 2006.
Financial Management: Thousands of Civilian Agency Contractors Abuse the Federal Tax System with Little Consequence. GAO-05-683T. Washington, D.C.: June 16, 2005. Financial Management: Thousands of Civilian Agency Contractors Abuse the Federal Tax System with Little Consequence. GAO-05-637. Washington, D.C.: June 16, 2005. Financial Management: Some DOD Contractors Abuse the Federal Tax System with Little Consequence. GAO-04-414T. Washington, D.C.: February 12, 2004. Financial Management: Some DOD Contractors Abuse the Federal Tax System with Little Consequence. GAO-04-95. Washington, D.C.: February 12, 2004. Debt Collection: Barring Delinquent Taxpayers From Receiving Federal Contracts and Loan Assistance. GAO/T-GGD/AIMD-00-167. Washington, D.C.: May 9, 2000. Unpaid Payroll Taxes: Billions in Delinquent Taxes and Penalty Assessments Are Owed. GAO/AIMD/GGD-99-211. Washington, D.C.: August 2, 1999. Tax Administration: Federal Contractor Tax Delinquencies and Status of the 1992 Tax Return Filing Season. GAO/T-GGD-92-23. Washington, D.C.: March 17, 1992.
The Recovery Act increased the federal share of Medicaid funding. Federal law does not prohibit providers with tax debt from enrolling in Medicaid, but GAO’s prior work found that thousands of Medicaid providers do have unpaid federal taxes. Since any provider who received Medicaid reimbursements during 2009 received Recovery Act funds, GAO was asked to (1) determine the magnitude of unpaid federal taxes owed by Medicaid providers reimbursed during 2009 in selected states; (2) provide examples of Medicaid providers who have sizeable unpaid federal taxes; and (3) evaluate opportunities and challenges related to collecting unpaid federal taxes through a levy process designed to offset Medicaid reimbursements. GAO compared Medicaid reimbursement information from three states to known IRS tax debts as of September 30, 2009. These states were among those that received the largest portion of Recovery Act Medicaid funding. To provide examples of Medicaid providers who have sizeable unpaid federal taxes, GAO conducted a detailed review of 40 Medicaid providers from the three states that had over $100,000 of federal tax debt. GAO’s sample of three states and 40 cases cannot be generalized to all states and all Medicaid providers. GAO also reviewed relevant laws and reports and interviewed federal and state officials. About 7,000 Medicaid providers in three selected states (Florida, New York, and Texas) had approximately $791 million in unpaid federal taxes from calendar year 2009 or earlier. This represents about 5.6 percent of the Medicaid providers reimbursed by the selected states during 2009. These 7,000 Medicaid providers with unpaid federal taxes received a total of about $6.6 billion in Medicaid reimbursements during 2009 (including American Recovery and Reinvestment Act of 2009 [Recovery Act] funds). 
The amount of unpaid federal taxes GAO identified is likely understated because Internal Revenue Service (IRS) taxpayer data reflect only the amount of unpaid taxes either reported on a tax return or assessed by IRS through enforcement; these data do not include entities that did not file tax returns or underreported their income. The 40 Medicaid providers GAO reviewed received a total of $235 million in Medicaid reimbursements (including Recovery Act funds) in 2009 and had unpaid federal taxes of about $26 million through 2010. The amount of unpaid federal taxes ranged from approximately $100,000 to over $6 million. In addition, IRS records indicate that providers in two of GAO’s cases are currently, or have previously been, under criminal investigation. For example, in one case a provider was caught participating in a medical billing fraud scheme. IRS may levy, or seize, a taxpayer’s property to satisfy a tax debt and, in some instances, is authorized to use an automated process to continuously levy federal payments made to delinquent taxpayers. Medicaid reimbursements have never been continuously levied under this provision of the law because IRS determined that these reimbursements do not qualify as federal payments. However, if such a process could be used, GAO estimates that IRS could have collected between $22 million and $330 million in the selected states in 2009. States GAO spoke to expressed concerns about implementing continuous levies, given the challenges they encounter with processing onetime IRS levies. For example, states have had difficulty reaching IRS revenue officers and problems with IRS sending levies to the wrong address. GAO recommends that IRS explore opportunities to enhance collection of unpaid taxes from Medicaid providers, including the use of continuous levies. IRS agreed with GAO’s recommendation.
the University of California had inadequate work controls at one of its laboratory facilities, resulting in eight workers being exposed to airborne plutonium and five of those workers receiving detectable intakes of plutonium. This was identified as one of the 10 worst radiological intake events in the United States in over 40 years. DOE assessed, but cannot collect, a penalty of $605,000 for these violations. The University of Chicago had violated the radiation protection and quality assurance rules, leading to worker contamination and violations of controls intended to prevent an uncontrolled nuclear reaction from occurring. DOE assessed, but cannot collect, a penalty of $110,000 for these violations. DOE has cited two other reasons for continuing the exemption, but as we indicated in our 1999 report, we did not think either reason was valid: DOE said that contract provisions are a better mechanism than civil penalties for holding nonprofit contractors accountable for safe nuclear practices. We certainly agree that contract mechanisms are an important tool for holding contractors accountable, whether they earn a profit or not. However, since 1990 we have described DOE’s contracting practices as being at high risk for fraud, waste, abuse, and mismanagement. Similarly, in November 2000, the Department’s Inspector General identified contract administration as one of the most significant management challenges facing the Department. We have noted that, recently, DOE has been more aggressive in reducing contractor fees for poor performance in a number of areas. However, having a separate nuclear safety enforcement program provides DOE with an additional tool to use when needed to ensure that safe nuclear practices are followed. Eliminating the exemption enjoyed by the nonprofit contractors would strengthen this tool. 
DOE said that its current approach of exempting nonprofit educational institutions is consistent with the Nuclear Regulatory Commission’s (NRC) treatment of nonprofit organizations because DOE issues notices of violation to nonprofit contractors without collecting penalties but can apply financial incentives or disincentives through the contract. However, NRC can and does impose monetary penalties for violations of safety requirements, without regard to the profit-making status of the organization. NRC sets lower penalty amounts for nonprofit organizations than for-profit organizations. The Secretary could do the same, but does not currently take this approach. Furthermore, both NRC and other regulatory agencies have assessed and collected penalties or additional administrative costs from some of the same organizations that DOE exempts from payment. For example, the University of California has made payments to states for violating environmental laws in California and New Mexico because of activities at Lawrence Livermore and Los Alamos National Laboratories. The enforcement program appears to be a useful and important tool for ensuring safe nuclear practices. Our 1999 review of the enforcement program found that, although it needed to be strengthened, the enforcement program complemented other contract mechanisms DOE had to help ensure safe nuclear practices. Advantages of the program include its relatively objective and independent review process, a follow-up mechanism to ensure that contractors take corrective action, and the practice of making information readily available to the contractor community and the public.

Modifications to H.R. 723 Could Help Clarify and Strengthen the Penalty Provisions

H.R. 723 eliminates both the exemption from paying the penalties provided by statute and the exemption allowed at the Secretary’s discretion. 
While addressing the main problems we discussed in our 1999 report, we have several observations about clarifications needed to the proposed bill. The “discretionary fee” referred to in the bill is unclear. H.R. 723, while eliminating the exemption, limits the amount of civil penalties that can be imposed on nonprofit contractors. This limit is the amount of "discretionary fees" paid to the contractor under the contract under which the violation occurs. The meaning of the term “discretionary fee” is unclear and might be interpreted to mean all or only a portion of the fee paid. In general, the total fee—that is, the amount that exceeds the contractor’s reimbursable costs—under DOE’s management and operating contracts consists of a base fee amount and an incentive fee amount. The base fee is set in the contract. The amount of the available incentive fee paid to the contractor is determined by the contracting officer on the basis of the contractor’s performance. Since the base fee is a set amount, and the incentive fee is determined at the contracting officer's discretion, the term “discretionary fee” may be interpreted to refer only to the incentive fee and to exclude the base fee amount. However, an alternate interpretation also is possible. Certain DOE contracts contain a provision known as the “Conditional Payment of Fee, Profit, Or Incentives” clause. Under this contract provision, on the basis of the contractor’s performance, a contractor’s entire fee, including the base fee, may be reduced at the discretion of the contracting officer. Thus, in contracts that contain this clause, the term “discretionary fee” might be read to include a base fee. 
If the Congress intends to have the entire fee earned be subject to penalties, we suggest that the bill language be revised to replace the term “discretionary fee” with “total amount of fees.” If, on the other hand, the Congress wants to limit the amount of fee that would be subject to penalties to the performance or incentive amount, and exclude the base fee amount, we suggest that the bill be revised to replace the term “discretionary fee” with “performance or incentive fee.” Limiting the amount of any payment for penalties made by tax-exempt contractors to the amount of the incentive fee could have unintended effects. Several potential consequences could arise from focusing only on the contractor’s incentive fee. Specifically: Contractors would be affected in an inconsistent way. Two of the nonprofit contractors—Universities Research Association at the Fermi National Accelerator Laboratory and Princeton University—do not receive an incentive fee (they do receive a base fee). Therefore, depending on the interpretation of the term “discretionary fee” as discussed above, limiting payment to the amount of the incentive fee could exempt these two contractors from paying any penalty for violating nuclear safety requirements. Enforcement of nuclear safety violations would differ from enforcement of security violations. The National Defense Authorization Act for Fiscal Year 2000 established a system of civil monetary penalties for violations of DOE regulations regarding the safeguarding and security of restricted data. The legislation contained no exemption for nonprofit contractors but limited the amount of any payment for penalties made by certain nonprofit contractors to the total fees paid to the contractor in that fiscal year. In contrast, these same contractors could have only a portion of their fee (the “discretionary fee”) at risk for violations of nuclear safety requirements. 
It is not clear why limitations on the enforcement of nuclear safety requirements should be different from existing limitations on the enforcement of security requirements. Disincentives could be created if the Congress decides to limit the penalty payment to the amount of the incentive fee. We are concerned that contractors might try to shift more of their fee to a base or fixed fee and away from an incentive fee, in order to minimize their exposure to any financial liability. Such an action would have the effect of undermining the purpose of the penalty and DOE’s overall emphasis on performance-based contracting. In fact, recent negotiations between DOE and the University of California to extend the laboratory contracts illustrate this issue. According to the DOE contracting officer, of the total fee available to the University of California, more of the fee was shifted from incentive fee to base fee during recent negotiations because of the increased liability expected from the civil penalties associated with security violations. Even if a nonprofit contractor’s entire fee were subject to the civil penalty, the Secretary has considerable latitude to adjust the amount of any penalty to meet the circumstances of a specific situation, which should ensure that no nonprofit contractor’s assets are put at risk by having to pay the penalty. The Secretary can consider factors such as the contractor’s ability to pay and the effect of the penalty on the contractor’s ability to do business. Preferential treatment would be expanded to all tax-exempt contractors. Under the existing law, in addition to the seven contractors exempted by name in the statute, the Secretary was given the authority to exempt nonprofit educational institutions. H.R. 723 takes a somewhat different approach by exempting all tax-exempt nonprofit contractors whether or not they are educational institutions. 
This provision would actually reduce the liability faced by some contractors. For example, Brookhaven Science Associates, the contractor at Brookhaven National Laboratory, is currently subject to paying civil penalties for nuclear safety violations regardless of any fee paid because, although it is a nonprofit organization, it is not an educational institution. Under the provisions of H.R. 723, however, Brookhaven Science Associates would be able to limit its payments for civil penalties. This change would result in a more consistent application of civil penalties among nonprofit contractors. Some contractors might not be subject to the penalty provisions until many years in the future. As currently written, H.R. 723 would not apply to any violation occurring under a contract entered into before the date of the enactment of the act. Thus, contractors would have to enter into a new contract with DOE before this provision takes effect. For some contractors that could be a considerable period of time. The University of California, for example, recently negotiated a 4-year extension of its contract with DOE. It is possible, therefore, that if H.R. 723 is enacted in 2001, the University of California might not have to pay a civil penalty for any violation of nuclear safety occurring through 2005. In contrast, when the Congress set up the civil penalties in 1988, it did not require that new contracts be entered into before contractors were subject to the penalty provisions. Instead, the penalty provisions applied to the existing contracts. In reviewing the fairness of this issue as DOE prepared its implementing regulations, in 1991 DOE stated in the Federal Register that a contractor’s obligation to comply with nuclear safety requirements and its liability for penalties for violations of the requirements are independent of any contractual arrangements and cannot be modified or eliminated by the operation of a contract. 
Thus, DOE considered it appropriate to apply the penalties to the contracts existing at the time.
This testimony discusses GAO's views on H.R. 723, a bill that would modify the Atomic Energy Act of 1954 by changing how the Department of Energy (DOE) treats nonprofit contractors who violate DOE's nuclear safety requirements. Currently, nonprofit contractors are exempted from paying civil penalties that DOE assesses under the act. H.R. 723 would remove that exemption. GAO supports eliminating the exemption because the primary reason for instituting it no longer exists. The exemption was enacted in 1988 at the same time the civil monetary penalty was established. The purpose of the exemption was to ensure that the nonprofit contractors operating DOE laboratories, who were being reimbursed only for their costs, would not have their assets at risk for violating nuclear safety requirements. However, virtually all of DOE's nonprofit contractors have an opportunity to earn a fee in addition to payments for allowable costs. This fee could be used to pay the civil monetary penalties. GAO found that DOE's nuclear safety enforcement program appears to be a useful and important tool for ensuring safe nuclear practices.
Afghanistan is a mountainous, arid, landlocked country with limited natural resources, bordered by Pakistan to the east and south; Tajikistan, Turkmenistan, Uzbekistan, and China to the north; and Iran to the west (see fig. 1). At 647,500 square kilometers, Afghanistan is slightly smaller than the state of Texas; its population, estimated at between 24 and 30 million, is ethnically diverse, largely rural, and mostly uneducated. The country is divided into 34 provinces. Afghanistan is one of the world’s poorest countries. As table 1 shows, development indicators published by the World Bank and the UN rank Afghanistan at the bottom of virtually every category, including nutrition; infant, child, and maternal mortality; life expectancy; and literacy. Over the last two decades, political conflicts ravaged Afghanistan (see fig. 2). Factional control of the country following the withdrawal of Soviet troops in 1989, coupled with the population’s fatigue with fighting, allowed a fundamentalist Islamic group, the Taliban, to seize control of the country. Although the Taliban regime provided some political stability during the late 1990s, its destructive policies, exemplified by its repressive treatment of women, and its continuing war with the opposition Northern Alliance further impeded international aid and development. In December 2001, less than 2 months after U.S. and coalition forces forcibly removed the Taliban regime, a 9-day international summit in Bonn, Germany, established a framework for a new Afghan government. This framework, known as the Bonn Agreement, focused on writing a new constitution by the end of October 2003 and holding democratic elections by June 2004. The agreement was endorsed by the UN Security Council on December 6, 2001, through UN Resolution 1383. In December 2002, the United States passed the Afghanistan Freedom Support Act of 2002, authorizing increased assistance to Afghanistan. The U.S. 
goal is to firmly establish Afghanistan as a democratic nation inhospitable to international terrorism and drug trafficking and cultivation, at peace with its neighbors, and able to provide its own internal and external security. U.S. efforts in support of this goal are intended to help create national security institutions, provide humanitarian and reconstruction assistance, and reinforce the primacy of the central government over Afghanistan’s provinces. The act strongly urged the President to designate a coordinator within the Department of State to, among other things, be responsible for (1) designing an overall strategy to advance U.S. interests in Afghanistan; (2) ensuring program and policy coordination among U.S. agencies; (3) coordinating assistance with other countries and international organizations; and (4) ensuring proper management, implementation, and oversight by agencies responsible for assistance programs. The U.S. Agency for International Development (USAID) provides U.S. assistance to underdeveloped countries through UN agencies, nongovernmental organizations, and private contractors. The main organizational units responsible for managing USAID’s reconstruction programs and operations in Afghanistan in fiscal year 2004 were the agency’s mission in Kabul, Afghanistan; the Bureau for Asia and the Near East; and the Bureau for Democracy, Conflict, and Humanitarian Assistance through the Office of U.S. Foreign Disaster Assistance, Office of Food for Peace, Office of Transition Initiatives (OTI), and Office for Democracy and Governance. Other U.S. government agencies provided additional assistance, including DOD through its provincial reconstruction teams (PRT) located at sites throughout Afghanistan. In fiscal year 2004, the 12 U.S.-led PRTs ranged in size from 60 to 100 civilian and military personnel, including civil affairs units, force protection soldiers, and representatives of the Departments of Agriculture and State and USAID. 
The teams are intended to deliver assistance that advances military goals and enhances security, to increase the reach of the Afghan central government in the provinces, and to allow assistance agencies to implement projects. In spring 2003, DOD, recognizing the lack of progress in the U.S. effort in Afghanistan, drafted a political-military strategy for Afghanistan. The strategy did not include reconstruction. The strategy was vetted by the National Security Council and approved by the President in June 2003. At the same time, Department of State and USAID officials drafted a plan to increase funding and expedite reconstruction efforts, particularly in infrastructure, democratization and human rights, and security. This plan served as the basis for the Accelerating Success in Afghanistan Initiative announced by the U.S. government in September 2003. The initiative was designed to be implemented in advance of the Afghanistan presidential elections scheduled for June 2004. The U.S. government planned to provide $1.76 billion for the initiative, targeting approximately $1 billion of this amount for elections, major and secondary road construction, health and education programs, economic and budget support to the Afghan government, senior advisors and technical experts, and private sector initiatives. The remaining $700 million was to fund efforts to build the Afghan National Army, train and equip the police force, expand the counternarcotics program, and establish rule of law. In fiscal year 2004, the focus of U.S. spending in Afghanistan shifted from humanitarian and quick-impact assistance to reconstruction. Of the nine U.S. government departments and agencies involved in assistance to Afghanistan, USAID provided the largest amount of nonsecurity-related assistance. The largest investment went to USAID’s infrastructure sector, which received approximately half of the agency’s total obligations for Afghanistan in fiscal year 2004. 
About two-thirds, or $922 million, of USAID’s obligations supported local projects in Afghanistan’s 34 provinces, with Kabul and Kandahar provinces receiving approximately 70 percent of these funds, mainly for roads. The United States provided the largest share of international assistance to Afghanistan, contributing about 38 percent of the $3.6 billion that the international community pledged for 2004. The focus of U.S. spending in Afghanistan in fiscal year 2004 shifted from humanitarian and quick-impact assistance, such as building wells, to larger-scale reconstruction. The U.S. government obligated about $1.4 billion and spent approximately $720 million on nonsecurity-related assistance to Afghanistan; the largest percentage of this amount was spent on reconstruction, especially infrastructure projects. In contrast to fiscal years 2002-2003, when more than three-fourths of U.S. spending was for humanitarian and quick-impact assistance, approximately 75 percent—about $538 million—of the 2004 expenditures supported reconstruction and development projects. The remaining amount was spent on humanitarian and quick-impact projects. (See fig. 3.) Of the U.S. government departments and agencies providing assistance in Afghanistan, USAID spent the largest amount, about $587 million, for reconstruction, humanitarian, and quick-impact projects. The Department of State spent the next largest amount, about $70 million, primarily for assistance to refugees. DOD spent approximately $45 million for nonsecurity-related assistance, chiefly small projects through the Commanders’ Emergency Response Program. Six other U.S. government agencies also provided some assistance to Afghanistan in fiscal year 2004. (See fig. 4 for agency percentages; for more details, see app. II.) About half—$497 million—of USAID’s fiscal year 2004 obligations for reconstruction in Afghanistan supported the rebuilding of infrastructure. 
This amount includes $448 million obligated for large infrastructure projects, such as roads, of which USAID spent approximately $236 million. To build schools and clinics, USAID obligated $49 million, of which it spent less than $6 million. USAID and DOD obligated $44 million and $47 million, respectively, through the PRTs, mainly for small-scale infrastructure projects. In an effort to expand the reach of the Afghan government—a major U.S. and Afghan government priority—USAID directed about two-thirds, or $922 million, of its obligations to local projects in Afghanistan’s 34 provinces, with Kabul and Kandahar provinces receiving approximately 70 percent of these funds (see fig. 5). The remaining funds went to national programs, such as government reform initiatives. USAID directed the majority of the funds obligated for Kabul and Kandahar—approximately $527 million of a total $647 million—toward road construction. In addition, DOD distributed approximately $47 million throughout the areas covered by the PRTs, the majority of them near Afghanistan’s border with Pakistan. As in previous years, the United States provided the largest share of international assistance to Afghanistan, contributing about 38 percent of the $3.6 billion pledged by the international community for 2004. (See fig. 6; for 2004 pledges by donor, see app. II.) The U.S. share for 2001-2003 was about 34 percent of the approximately $9.7 billion pledged by the international community. According to the Center on International Cooperation, as of February 2005, donors had obligated about 29 percent, or $3.9 billion, of the $13.4 billion pledged since 2001. U.S. humanitarian and quick-impact assistance benefited vulnerable populations and returning refugees; however, the success of efforts to accelerate large-scale reconstruction varied. 
USAID provided almost $60 million in emergency assistance, including food aid, and the Department of State provided about $60 million to assist refugees in fiscal year 2004. In addition, the U.S. government obligated approximately $120 million for small-scale, quick-impact projects such as the construction of wells and bridges. Further, the United States accelerated its major reconstruction programs to increase visible progress before the 2004 Afghan presidential elections. However, progress and results in each of the reconstruction sectors—agriculture, democracy and governance, economic governance, education, health, infrastructure, and gender—varied, as did the problems each sector faced. In fiscal year 2004, the U.S. government provided food and other emergency assistance to Afghanistan’s vulnerable populations and assisted the return of refugees. Afghanistan suffered its sixth year of drought and produced a below-average harvest, and the percentage of people in need of food aid rose from 20 percent in 2003 to an estimated 37 percent in 2004. In addition, approximately 900,000 refugees returned to Afghanistan, with more expected to return in coming years owing to the closing of refugee camps operated by the UN High Commissioner for Refugees (UNHCR) in Pakistan. USAID’s Office of Food for Peace, through the UN World Food Program (WFP), provided Afghanistan with 79,330 metric tons of wheat and other emergency food assistance (valued at $49 million) in fiscal year 2004, which equaled approximately 25 percent of the international food assistance that WFP requested during that time period. In addition, USAID, through its Office of Foreign Disaster Assistance, provided almost $10 million in other emergency assistance in fiscal year 2004, compared with $137.8 million in fiscal years 2002 and 2003. 
According to USAID, the office supported transitional shelter for refugees; the return of internally displaced persons; winter programs, such as snow clearance and road rehabilitation; and emergency funds to respond to the ongoing drought. The Department of State’s Bureau of Population, Refugees, and Migration (PRM) provided almost $63 million to help refugees, compared with $234 million in previous years. PRM provided more than half of the 2004 funding through the UNHCR to support traditional assistance, such as shelter and education for refugees. In addition, the agency facilitated out-of-country registration and voting so that Afghan refugees living in Pakistan and Iran could vote in the October 2004 Afghanistan presidential election. PRM also provided funds through the International Committee of the Red Cross and UN Children’s Fund, as well as about $17 million in direct grants to nongovernmental organizations. These grants provided shelter, water and sanitation, health care, education, and economic assistance and training to refugees and internally displaced people. PRM provided funding for the construction of 5,900 shelters; however, as of September 30, 2004, 8,000 shelters were still needed. In fiscal year 2004, USAID and DOD continued efforts to respond rapidly to small-scale reconstruction needs in Afghanistan. USAID launched the Quick-Impact Program (QIP), supplementing the activities of the existing Office for Transition Initiatives, and DOD launched the Commanders’ Emergency Response Program (CERP) to operate alongside its Overseas Humanitarian, Disaster, and Civic Aid (OHDACA) program (see table 2). The aims of these programs are to extend the reach of the Afghan central government through benefits to rural communities and to facilitate the transition to longer-term reconstruction programs. 
Although CERP and OHDACA funds address humanitarian needs, the projects are determined by the tactical need to obtain the support of the populace and are primarily tools for achieving U.S. security objectives. Since 2002, the U.S. government has programmed almost $136 million for about 3,600 small-scale, quick-impact projects through USAID and DOD. CERP and QIP funds worked in tandem through the PRTs in fiscal year 2004, with CERP funding smaller projects costing less than $20,000 on average and QIP funding larger, more expensive projects. DOD regulations allow PRT commanders to approve the use of up to $25,000 in CERP funds for the rapid implementation of small-scale projects, such as providing latrines for a school or a generator for a hospital. USAID representatives at PRTs used QIP funds for larger, more complex projects such as local roads, bridges, and government buildings. To ensure accountability and long-term sustainability, USAID regulations require that the mission, before granting approvals for QIP projects, conduct technical assessments and ensure Afghan government involvement in projects. DOD does not require similar assessments for CERP-funded projects. Efforts to accelerate existing USAID programs in each reconstruction sector—agriculture, democracy and governance, economic governance, education, health, and infrastructure—achieved varying degrees of progress toward project objectives and accelerated targets. Efforts to promote gender equity in each sector also demonstrated varying levels of progress. Although the 2004 Accelerating Success initiative targets for the agriculture sector were generally met or exceeded, the contractor failed to integrate project activities, thus limiting the project’s results. 
To address the needs of the agriculture sector, USAID implemented the Rebuilding Agricultural Markets Program (RAMP) in 13 of Afghanistan’s 34 provinces, concentrating the program’s activities on physical infrastructure, rural finance, and agricultural technology and market development. USAID signed the program’s $153 million primary 3-year contract on July 3, 2003. The contractor, Chemonics International, Inc., was to use a “market chain” approach to improve the operations of, and linkages between, the market chain components (i.e., farmers, processors, transporters, input suppliers, creditors, regulators, wholesalers, and retailers). As of September 2004, Chemonics had implemented numerous activities through subcontracts and grants with 40 organizations, including local and international nongovernmental organizations, private firms, and international organizations. The progress of these activities as reported by Chemonics is as follows: Physical infrastructure. By September 2004, over 320 kilometers of irrigation canals, approximately 230 irrigation structures, and about 160 kilometers of farm-to-market roads had been rehabilitated. In addition, nearly 120 market structures such as retail market stalls and grain and vegetable storage sheds had been constructed. Rural finance. Over 1,100 loan officers had been trained, and more than 8,000 loans had been disbursed. Agricultural technology and market development. About 565,000 farmers were served by extension services; over 4,000 women had been trained in poultry management; over 20,000 chickens had been distributed to women; and more than 3,675,000 livestock had been treated, vaccinated, or both. Data provided by Chemonics and U.S. government interagency performance reports indicated the program generally met or exceeded all of the Accelerating Success targets established for the sector; however, the program did not address a key program objective. 
We found that Chemonics had not integrated individual activities to achieve project objectives or focused its efforts on the improvement of market chains. USAID and Chemonics officials told us in October 2004 that although many activities had been implemented, most projects were stand-alone agricultural infrastructure efforts (e.g., road and canal rehabilitation) and did not focus on improving the marketing of commodities or the integration of market chain components. Consequently, during its first 15 months, the project’s progress in strengthening Afghanistan’s market chain was limited. An internal evaluation of the Chemonics effort was conducted by USAID mission staff in Kabul in mid-fiscal year 2004. The evaluation resulted in the development of a new strategy and performance monitoring plan in an effort to refocus RAMP and better integrate program activities. USAID’s democracy and governance program produced notable successes, particularly its assistance with the creation and ratification of Afghanistan’s constitution and with the presidential elections. However, some civic education programs were uncoordinated and had limited distribution. USAID’s fiscal 2004 democracy and governance program comprised three components: Strengthening of elections and political process. The Consortium for Elections and Political Process Strengthening component addressed civic education and political party building. Under USAID’s Accelerating Success initiative, the consortium’s funding ceiling increased from $3.76 million to $13.36 million. Grantees reported training more than 2,000 district and village leaders in civic education, registering 46 political parties by the end of fiscal year 2004, and establishing eight election training and information centers. Judicial reform. In May 2004, under the Accelerating Success initiative, USAID increased its contract with Management Systems International (MSI) from $14.7 million to $16.8 million. 
USAID also revised the scope of work to focus on judicial rehabilitation and added court administration as an objective. MSI provided technical assistance and logistical support to the constitutional commission and its secretariat and to the constitutional loya jirga that took place in December 2003. According to USAID and MSI reports, MSI also built or rehabilitated 7 of 10 targeted courthouses by September 30, 2004; helped review, draft, and track the status of legislation; surveyed and compiled laws and legal texts; mapped courthouse administration functions; and conducted training for about 300 legal professionals. Loya jirga and elections logistics. USAID awarded the Asia Foundation a cooperative agreement to provide operational and logistic support for the constitutional loya jirga and elections. The award’s funding ceiling increased from $10 million to more than $45 million. The scope expanded to include assistance to the UN Assistance Mission to Afghanistan to conduct the loya jirga, register voters, and hold the presidential election. The foundation filled unforeseen gaps in the UN’s efforts, as UN staff faced security restrictions that limited their ability to register voters and set up polling stations. According to the UN, more than 10 million people registered to vote, out of a total population of between 24 and 30 million; about 40 percent of those registered were women. Despite these successes, the program faced setbacks, particularly in public education and courthouse construction. Parts of the civic education program were poorly timed. According to an evaluation commissioned by a USAID grantee, a listening device to enhance the public’s understanding of the election process was distributed late, in some cases just 1 week before the elections and without training, making it difficult for users to listen to all of the content; the evaluation found that users would have preferred to receive the device 2 months before the elections. 
In addition, there were delays in the project’s initiative to draft and pass legislation, due to shifting responsibility for legislative drafting from the Judicial Reform Commission, a temporary entity, to the permanent Ministry of Justice. Finally, despite a goal of building or rehabilitating 10 courthouses by the end of fiscal year 2004, according to USAID and contractor reporting, only 7 were completed due to late funding. Despite many achievements, problems pertaining to the selection of advisors and sustainability affected the economic governance program. In December 2002, USAID signed a 3-year, $39 million Sustainable Economic Policy and Institutional Reform Support program contract with Bearing Point, Inc., to provide technical assistance and training to the primary Afghan ministries concerned with economic governance issues. In April 2004, as part of the Accelerating Success initiative, USAID increased funding for the contract to $95.8 million. Most of the work under the contract was implemented by advisors and operations staff assigned primarily to the Ministry of Finance and the Central Bank. As of August 2004, approximately 224 advisors were working within the Ministry of Finance. Some of the advisors worked directly with the management of the ministry; others served in operational positions and were responsible for carrying out the day-to-day functions of the ministry. According to the USAID mission in Kabul, USAID’s Inspector General, and an evaluation commissioned by the Afghan government, Bearing Point made progress toward completing the approximately 120 “contractor responsibilities” listed in the contract. Accomplishments included the following: Fiscal reform. 
Bearing Point developed a system to estimate government revenues; introduced a taxpayer identification number system; trained Afghans to develop and monitor budgets; established a national payments system; developed a customs broker licensing program, reformed customs operations and trained customs officials; rehabilitated and equipped customs houses and border posts; and developed a database of customs revenues. Banking reform. The contractor helped the Afghan Central Bank establish national and international operations via standard banking telecommunications networks, implement bank licensing policies and procedures, restructure and equip branch banks, and draft banking laws. Trade policy. The assistance provided helped the Afghan government streamline its business license application process, reducing the time to obtain a license from months to less than 1 day and reducing the number of required signatures from 58 to 6. The contractor also assisted the government in reviewing and drafting commercial decrees and laws. Legal and regulatory reform. The contractor assisted Afghan ministries in drafting key laws and establishing a telecommunications regulatory body. Privatization. As of October 2004, little work had been conducted in the area of privatization, because the Afghan government was not ready to privatize state-owned enterprises. Despite these accomplishments and USAID’s efforts to adjust the program to meet the government of Afghanistan’s needs, the Ministry of Finance remained dissatisfied with the cost and quality of the assistance provided by some of the expatriate advisors hired under the contract and sought to terminate it. In mid-2004, the Afghan government requested a review of the program. The first evaluation, completed in September 2004, found that, among other things, Bearing Point lacked an effective means for determining ministry needs. 
USAID disagreed with the evaluation’s findings and maintained that Bearing Point worked closely with the ministries receiving assistance and worked proactively to meet their needs. According to the Central Bank governor, the bank—the other major recipient of assistance under the contract—was generally satisfied with the assistance provided. In November 2004, the Ministry of Finance agreed to allow the contract to continue until its completion date, December 2005. However, as of October 2004, the Minister of Finance and the Governor of the Central Bank were still concerned that their agencies would not be able to sustain operations after the program’s completion. To address this concern, USAID and Bearing Point initiated plans to transfer local Afghans working as their contractors to the Afghan civil service. These individuals would be paid with funds provided by the international community through the Afghanistan Reconstruction Trust Fund. In addition, the Central Bank began testing the abilities of its staff on a periodic basis to determine their ability to work in the absence of international advisors and to identify areas where additional training was needed prior to the Bearing Point contract’s completion. USAID’s Afghanistan Primary Education Program (APEP) provided educational and teacher-training programs to help improve basic education; however, under the Accelerating Success initiative, very few schools were constructed and other components were not integrated into educational facilities as originally envisioned. USAID’s education program originally focused its efforts in four areas: textbook production and distribution, radio-based teacher training, accelerated learning, and school construction. USAID provided the bulk of its education assistance in Afghanistan through APEP, run by Creative Associates International, Inc. 
APEP was designed to ensure that newly constructed schools were functional centers of learning by providing textbooks, skilled teachers, and opportunities for accelerating the learning of over-aged students. The Accelerating Success initiative increased APEP’s funding ceiling from $16.5 million to $87.6 million, but it decoupled the provision of materials and training from school construction. The initiative introduced three additional components: in-country textbook production; educational support services to help reform policies, systems, and programmatic changes; and enhanced monitoring and evaluation. Progress under the APEP contract included the following: Textbook production: According to the USAID Regional Inspector General, USAID exceeded its textbook production goal for 2004, producing about 16.5 million books, but distribution was delayed. Radio teacher training: According to the targets in the original APEP contract, the radio teacher training broadcasts aimed to improve the teaching skills of about 30,000 teachers, or about 96 percent of all primary school teachers, by the end of 2004. Under the Accelerating Success initiative, the goal was reduced to reaching up to 40 percent of Afghan teachers nationwide. After a mid-2004 survey primarily of participants in U.S. education programs, USAID concluded that 70 to 90 percent of all primary school teachers were listening to the radio teacher training broadcasts and the number of listeners was increasing monthly. Accelerated learning: APEP’s goal was to raise the educational levels of 80,000 over-aged students in 13 provinces and move them into age-appropriate levels by the end of 2004. Under the Accelerating Success initiative, the goal was expanded to 170,000 students in 17 provinces. Under both plans, APEP intended 70 percent of beneficiaries to be female students. By the end of fiscal year 2004, USAID had met its student enrollment objective, but less than 60 percent of students enrolled were girls. 
School construction: According to contract and grant documentation, targets for school rehabilitation or construction shifted under the Accelerating Success initiative from 50 schools to 286 schools by the end of 2004. By September 2004, implementing partners reported that 77 schools had been refurbished and 8 were substantially complete. Educational support services: Under the Accelerating Success initiative, APEP began to provide the Ministries of Education and Higher Education with advisors to help draft education law, improve planning capacity, and assess English language instruction needs. Monitoring and evaluation: Under the original contract, APEP produced weekly or biweekly updates, quarterly progress reports, and an evaluation of the radio-based teacher training component. The expanded scope includes a national study of students trained under the accelerated program. USAID also instituted several other smaller education-focused projects in 2004. These included a $10 million teacher-training institute and literacy initiative and an $11 million dormitory to house between 1,100 and 1,500 university women in Kabul. Despite some accomplishments, the program’s ambitious scope made it difficult to meet targets on many fronts. To address health sector issues, USAID’s Rural Expansion of Afghanistan’s Community-based Healthcare (REACH) program, in conjunction with the Ministry of Public Health, established a nascent health care system and provided health services and training for health providers. USAID’s REACH program, implemented by Management Sciences for Health (MSH) under a 3-year contract, was designed to improve the health of women of reproductive age and children younger than 5 years. USAID’s Accelerating Success initiative increased MSH’s original ceiling of approximately $100 million to $129 million. The initiative expanded the scope of work to include, among other things, tertiary care in addition to the project’s original focus on rural health care. 
USAID and MSH noted progress in five health care areas. The progress in these areas as reported by MSH is as follows: Health care facilities and community outreach. The program reached its stated target of awarding $53 million in service delivery grants to more than 250 clinical facilities in 13 provinces. Service in these facilities covers a population of approximately 4.8 million. Training for rural health care providers. The REACH program did not meet its fiscal year 2004 target of training 46 midwives and 2,060 community health workers. By the end of 2004, according to Management Sciences for Health reports, 75 midwives were in training in Kabul, although none had finished the course, and almost 1,500 community health workers had been trained. Public health education programs. REACH developed a policy on health education with the Ministry of Public Health and helped the ministry develop standard health promotion messages. Management Sciences for Health reporting also indicated that REACH produced seven radio dramas and trained 21 ministry and radio staff in radio programming. Ministry of Public Health capacity. REACH developed a national health management information system and played an advisory role in health sector reform, financing, and planning, as well as hospital management. The REACH program helped to create a new human resources department in the health ministry and to review and update the national human resources policy for health staff. REACH also contributed to the development of the National Drug Policy, the National Medicine Agency, and donation guidelines for drug and equipment donors. According to a midterm evaluation of the program, REACH’s efforts have been sustainable. Clinic construction. According to contract and grant documentation, targets for clinic rehabilitation or construction shifted under the Accelerating Success initiative from 50 clinics to 253 clinics by the end of 2004. 
By September 2004, implementing partners reported that 15 clinics had been substantially constructed and none had been refurbished. USAID’s ambitious health care program stretched the capacity of contractors, making it difficult to implement many projects simultaneously. For example, according to the December 2004 USAID-commissioned midterm evaluation, despite targets established in 2003, few clinics had communication materials designed to change the health-related practices of Afghans and most clinics remained focused on curative, rather than preventive, care. Further, the evaluation found that despite a greater need for community midwives than for hospital midwives, REACH developed the capacity to train equal numbers of hospital and community midwives. In addition, although the development of a national infection prevention program was added to the Accelerating Success initiative, the program’s schedule was delayed by 2 to 3 months owing to a delay in finding a program manager. USAID’s infrastructure program focused on some of Afghanistan’s large infrastructure needs, including construction of the primary highway and emergency electricity provision to four cities; however, progress has been limited. USAID designed the Rehabilitation of Economic Facilities and Services (REFS) program to promote economic recovery and political stability in Afghanistan by repairing selected infrastructure. To ensure sustainability, USAID also designed REFS to strengthen pertinent institutions’ management capacity and Afghan construction companies’ ability to build according to international standards. The Louis Berger Group, Inc., implemented most of the infrastructure work through 2004 under the 3-year REFS contract. Under the Accelerating Success initiative, USAID increased REFS’ funding from $143 million to $665 million and added 12 new awards with a collective funding ceiling of almost $400 million. 
USAID’s fiscal year 2004 infrastructure programs included the following: Primary roads. USAID completed the first phase of the construction of the Kabul-Kandahar Ring Road, which decreased travel time between the two cities from several days to 6 hours. The second phase—adding layers of asphalt, bridges, culverts, shoulders, and signage—was to be complete by October 2004, but repair work continued into 2005. USAID also mobilized contractors and started survey work to begin the next section of the road, from Kandahar to Herat. Secondary and urban roads. By fiscal year 2004, work had begun on one urban road in Kabul, diverting traffic away from the U.S. Embassy, and one secondary road to provide access between Kabul and a southern city, Gardez. In addition, according to contractor reports, planning or construction of nine additional secondary roads began; however, by the end of fiscal year 2004, four were postponed due to lack of funding. Power. To increase the power supply around Kandahar in the south, USAID began rehabilitation of two turbines for the Kajaki Dam. By the end of fiscal year 2004, USAID was negotiating for construction of a third turbine as well as seeking a solution to a power shortage in Kabul. In the meantime, USAID supplied emergency power to Kabul, Kandahar, Lashkar Gah, and Qalat by providing fuel for generators at a cost of approximately $3 million per month. Irrigation. USAID began work on several irrigation projects: the emergency rehabilitation and reconstruction of the Saur-e-haus Dam, spillway, and diversion channel; rehabilitation of the Zana Khan Dam; the Sardeh Irrigation System; and three intake systems. By the end of fiscal year 2004, construction of the Sardeh Irrigation System and two of the intake systems had been completed. Water/wastewater. By September 2004, USAID had contracted for water availability assessments for two planned communities and for water system assessments and design upgrades for three provincial capitals. 
Schools and clinics construction. In the initial infrastructure contract with the Berger Group, USAID included the construction or rehabilitation of 40 schools and clinics as an illustrative target to be achieved by the end of 2003, with an additional 60 buildings to be completed by the end of 2004. The actual job orders signed in July 2003 show that the Berger Group agreed to complete 55 schools and 78 clinics. By the end of September 2003, only 1 building was completed. USAID reduced the Berger Group’s responsibility to 105 buildings and in May 2004 provided grants to five additional organizations, with the goal of rehabilitating or constructing a total of 774 buildings by October 31, 2004. In mid-2004, owing to, among other things, the education and health ministries’ insistence on producing new as opposed to refurbished buildings and a lack of progress by all implementing partners, USAID, according to grant and contract documentation, reduced its expectations to about 530 buildings and extended the completion deadline to December 2004. By the end of fiscal year 2004, the implementing partners reported having refurbished 77 buildings and substantially completed new construction of 23 buildings. Because the Accelerating Success initiative emphasized visible construction, and because of time and funding constraints, USAID largely abandoned the REFS contract’s objective of building Afghan ministry capacity in 2004. The Berger Group had recruited and hired experts to supply intellectual capacity at the ministries of Public Works, Irrigation, Health, and Education; however, this project was discontinued in June 2004. Although U.S. agencies focused on 10 of the 13 women-centered objectives legislated by Congress, the overall impact of these efforts has not been measured. (See app. III, table 17 for progress on objectives.) Unlike programs for most reconstruction sectors, no overarching contract was let to implement women-centered programs. 
Instead, U.S.-funded programs incorporated components that advanced the social, economic, and political rights and opportunities of women, dedicating about $196 million to such initiatives. For example, USAID provided more than 90,000 girls with education equivalent to one or more grade levels through the APEP accelerated learning program and trained community health care workers and midwives through REACH. USAID also provided democracy and governance technical assistance, which helped over 3 million women register for, and vote in, the 2004 presidential elections. USAID also implemented other projects, such as reconstructing a women’s dormitory to enable more than 1,100 young women to attend university in Kabul, establishing a women’s teacher training institute, and, according to USAID, completing 3 of 17 planned women’s resource centers in provincial capitals. Other U.S. government agencies also incorporated women’s issues into their work. For example, the Department of State granted $75,000 to train four Afghan women judges in civil and family law. Likewise, DOD included “Principles of non-discrimination: Women in Society” and other pertinent classes in its curriculum for training the Afghan National Police. In addition, the U.S.-Afghan Women’s Council was created to accelerate progress by promoting public-private partnerships between U.S. and Afghan institutions and mobilizing private sector resources to benefit women. The council raised about $135,000 of private sector funds from entities such as an America Online women executives group and DaimlerChrysler. These funds supplemented various U.S. government projects, including training of women judges, the Afghan Family Health Book, and community banks and microfinance loans. Many of the other projects sponsored by the council, such as the Women’s Teacher Training Institute, were funded and managed through USAID. 
Whereas USAID’s reconstruction sector programs tend to target a broad range of women and have a national scope, many of the council-supported projects affect a small number of women. Although U.S. legislation and assistance programs have included efforts to address the needs of Afghan women, as of the time of this report, no evaluation had been conducted to determine the overall impact of U.S. gender-related efforts. Problems associated with the management and coordination of U.S. assistance to Afghanistan occurred throughout fiscal year 2004. As in fiscal years 2002-2003, the persistence of project management problems affected agencies’ oversight of reconstruction contracts. U.S. financial data on assistance to Afghanistan remained fragmented and incomplete, and USAID continued to operate without a comprehensive operational strategy to guide its efforts. In addition, USAID did not always enforce required contract provisions, USAID directives, or a federal acquisition regulation necessary to hold contractors accountable for their performance. Moreover, comprehensive performance indicators in most sectors were lacking. Consequently, decision makers in Washington did not receive meaningful information about the results of USAID-implemented projects. Problems with project monitoring also continued in 2004, and although USAID took steps to improve project monitoring, limited staffing and security restrictions reduced its ability to provide proper oversight for much of the fiscal year. Finally, although coordination of U.S. efforts occurred daily throughout 2004, the evolving roles of U.S. organizations and the coordination of international assistance were problematic. During fiscal year 2004, a number of management problems negatively affected the U.S. agencies implementing reconstruction projects and prevented agency officials from providing project oversight. The tracking of U.S. financial data for Afghanistan assistance remained fragmented and incomplete. 
In addition, USAID continued to operate without a comprehensive strategy to guide its overall assistance effort. Contract management and performance measurement problems also impeded oversight. Finally, staff turnover and travel restrictions negatively affected USAID’s ability to provide regular on-site monitoring of project activities. In fiscal year 2004, tracking of U.S. assistance financial data for Afghanistan improved but remained fragmented and incomplete. In June 2004, we reported that the Coordinator for U.S. assistance to Afghanistan, as well as others involved in the management of the assistance effort, lacked complete and accurate financial data for fiscal years 2002 and 2003. Because of the lack of accessible and timely financial data, program managers were hampered in their ability to, among other things, allocate resources and determine whether strategic goals were being met. Although more information on assistance obligations was available in fiscal year 2004 than in previous years, U.S. agencies remained unable to readily supply complete and accurate financial data for programs in Afghanistan. There was no single, consolidated source of fiscal year 2004 obligation and expenditure data for U.S. assistance to Afghanistan. Consequently, as in 2002 and 2003, the embassy and the coordinator’s office continued to lack complete and accurate financial data to inform their decisions. According to the Embassy Interagency Planning Group, numerous organizations with little coordination or oversight tracked the U.S. budgetary process for assistance to Afghanistan, including obligation and expenditure data. To address this problem, the embassy created an interagency resource office in November 2004 to provide better visibility over all U.S. assistance financial matters in Afghanistan. As in previous years, USAID operated in fiscal year 2004 with an interim strategy, rather than a more complete standard strategy, for its activities in Afghanistan. 
USAID directives allow the use of interim strategic plans in countries experiencing high uncertainty because of drastic political, military, and/or economic events. In June 2004, we reported that although the USAID mission in Afghanistan developed an interim strategy and action plan in August 2002, these documents did not clearly articulate measurable goals or provide details on time frames, resources, responsibilities, objective measures, or the means to evaluate results for each of the sectors targeted by the strategy, as required by USAID directives. The mission obtained yearly waivers allowing it to postpone developing a comprehensive strategy until February 2005. According to USAID officials, the mission did not complete a comprehensive strategy in fiscal year 2004 because it wanted to wait until the Afghan presidential elections had been completed and a new government formed. USAID officials informed us in July 2005 that a more comprehensive strategy had been completed and approved by USAID management in Washington, D.C. Although a new strategy was completed prior to the end of fiscal 2005, more than 3 years elapsed between the time USAID began providing postconflict assistance to Afghanistan and the completion of a comprehensive USAID assistance strategy for Afghanistan. The lack of a comprehensive strategy impedes USAID’s ability to ensure progress toward development goals, make informed resource allocation decisions, and meet agency and congressional accountability reporting requirements on the effectiveness of agency programs. Contract management problems affected most reconstruction sectors, making it difficult to hold contractors accountable. 
Oversight of the USAID assistance contracts for Afghanistan was essential owing to the inherent risks associated with the use of cost-plus-fixed-fee contracts; the awarding of contracts through bidding procedures that were not fully open and competitive; the large initial dollar value and scope of the contracts and large increases in the dollar values and scopes over time; and the requirement to demonstrate progress quickly. Despite the need for strong oversight of USAID assistance contracts, we found that USAID did not provide adequate contract oversight, including holding contractors to stipulated requirements and conducting required annual reviews of contractor performance. Agriculture. USAID did not hold its primary contractor to the contract’s requirement to conduct five crop subsector assessments that were to serve as the basis for the contractor’s annual work plans and all future activities. According to the contractor, it did not complete the assessments because USAID was pressing it to produce visible progress through the construction of, among other things, irrigation canals and farm-to-market roads by the Accelerating Success deadline. Although USAID documented the contractor’s lack of performance, as of October 2004, it had not required the contractor to complete the assessments. Economic governance. USAID’s regional Office of Inspector General reported in August 2004 that, because the contractor failed to produce contractually required quarterly work plans and schedules, the Office of the Inspector General could not determine whether the economic governance program was on schedule to achieve planned outputs. According to the Inspector General’s report, USAID officials did not require the contractor to produce the plans, in part because mission staff in Kabul lacked time to review them. To correct this problem, the contractor began producing quarterly work plans and schedules in July 2004. Infrastructure. 
The use of grants instead of contracts to accelerate the construction of some schools and clinics in fiscal 2004 made it difficult for USAID to hold grantees accountable, because no-penalty clauses were included in the grant agreements. Further, neither USAID nor its initial contractor developed a quality assurance plan for the school and clinic reconstruction effort. Such a plan could have guided USAID's oversight efforts and assisted in the identification of problems. Similarly, although the main infrastructure contractor was required to develop and submit a comprehensive quality control and assurance program for the Kabul–Kandahar Road construction project, this was not done. According to a September 2004 USAID Inspector General report, USAID did not inspect contract quality control laboratories until 21 months after road construction began. The Regional Office of the Inspector General also found deficiencies in the contractor's quality control program, such as untrained personnel and lack of adherence to testing standards. All sectors. Because of staffing constraints and competing priorities, USAID did not perform annual contractor performance evaluations in any sector in 2004, as required by federal regulation and USAID policy directives. The evaluations are intended to document contract quality, cost control, and timeliness and to inform future award decisions. According to USAID, five additional contracting staff were hired in early fiscal 2005 and efforts to conduct evaluations subsequently began. In addition, according to contract provisions, technical and contracting officers are to meet quarterly and annually with contractors to discuss performance and other administrative and technical issues. Although USAID maintains that staff met frequently with contractors throughout 2004 and conducted in-house reviews of some of the major programs, most meetings were ad hoc and records of the discussions were not always formally documented and reported. 
The absence of such records makes it difficult to determine the nature and extent of problems with individual contractors or across multiple contractors' efforts. Such records would also facilitate conducting annual contractor performance evaluations. The USAID mission in Kabul did not develop a performance management plan for 2004. In addition, performance information in several sectors was lacking, making it difficult to determine the results of USAID assistance. Finally, because of—among other problems—weaknesses in contractor reporting and the lack of a performance management system, the information reported by USAID to decision makers in Washington, D.C., did not accurately portray the status of each sector or the overall assistance effort. USAID directives state that performance management represents the agency's commitment to manage programs with greater accountability and that operating units must prepare a complete performance management plan to manage the process of assessing and reporting progress toward strategic objectives. However, since the mission in Kabul was operating under a waiver that permitted it to use an interim strategy rather than a more comprehensive strategy, it was also allowed to operate without a comprehensive performance management plan. Although a performance management plan was not required, USAID directives state that when an interim strategy is used, program performance should still be measured; country volatility may require intensive monitoring and measurement of program implementation. USAID officials stated that although a formal plan was not prepared, goals, indicators, baselines, and targets were included in major contracts. However, without a performance management plan that meets the requirements stipulated in USAID directives, USAID cannot develop a complete and accurate assessment of the status of its assistance efforts. 
The United States has pledged to maintain a long-term presence in Afghanistan, in part by increasing the number and scope of USAID contracts. Consequently, the need for a comprehensive plan and for the greater integration of performance measurement into the work of contractors will continue to be important in future years. Now that a new overall, longer-term strategy for USAID’s efforts in Afghanistan is approved, USAID has stated that the mission in Kabul will develop a performance management plan that complies with USAID directives. Further, USAID directives state that performance data collection should be integrated with implementing contractors’ activities and incorporated into the contractors’ work plans. USAID did not stipulate the requirement for contractors to develop sector-specific performance plans in three of the six major reconstruction contracts. In two of the three contracts where a requirement was stipulated, little information on what should be included in the plans was prescribed. We found problems with performance measures in the following sectors: Agriculture. The contractor was required to report to USAID on the status of 14 performance measures. (See app. III, table 6 for a list of the measures.) However, the contractor did not collect or report information for most of the measures, making it difficult for USAID to accurately determine the extent to which the program was achieving expected results. Efforts were underway in June 2005 to improve agriculture-related performance measures. Democracy and governance. In the grant awarded for civic education and political party building, USAID did not require the implementing partners to establish specific targets or develop performance management plans, making it difficult to assess whether the program was on schedule or achieving intended results. Economic governance. 
For most of fiscal year 2004, the contractor did not develop performance measures, which would have helped USAID monitor the sector’s results. An Afghan government review of USAID’s economic governance program stated that the contractor had not developed a formal process for assessing advisors as required in the statement of work. Consequently, it was unclear how USAID or the contractor assessed advisors’ performance, determined whether the advisors’ knowledge had been transferred to Afghan counterparts (a key aspect of the program), or monitored the program’s progress. USAID officials stated that the program’s progress was tracked through weekly and monthly progress reports. We found that although these reports provided information on the status of activities, they did not contain specific performance indicators to determine the impact of the project. To correct this weakness, the contractor initiated efforts to produce periodic performance measures in the last quarter of fiscal year 2004. Health. According to an evaluation of the health sector, the contractor’s data management system was unable to collate data from service delivery subgrantees into a comprehensive picture for the overall service delivery effort, making it difficult for managers and USAID to judge progress or results. Owing to these weaknesses and other problems, the performance measures that the Kabul mission provided to decision makers in Washington, D.C., did not completely portray the status of each sector or the overall Accelerating Success initiative. For example, we found that the reported agriculture sector measures, such as kilometers of canal repaired, did not provide the information necessary to determine whether the program was meeting the primary objectives stated in the contract— increasing agricultural productivity and farmers’ incomes. 
Likewise, the only two measures reported for the democracy and governance sector were the number of courthouses constructed and the number of judicial personnel trained. Those measures did not capture the performance of the diverse activities implemented in this sector. Further, the data reported in some sectors did not always match contractor reports. For example, although contractor reports indicated that 77 schools were refurbished and 8 were substantially complete, reports provided to Washington indicated that only 39 schools had been constructed or rehabilitated. Program managers need accurate operational information, including performance measures, to determine whether strategic objectives are being met. According to USAID officials in Washington, D.C., only 3 days were allowed for the development of the Accelerating Success performance measures. The measures were selected based on what USAID thought it could accomplish by the June 2004 target date rather than what was needed to determine progress and results in each sector. USAID conditioned meeting the targets on, among other things, the existence of a secure environment and the receipt of funding by July 2003. Neither of these conditions existed. USAID does not believe that the performance measures currently used are an effective way of measuring progress toward program objectives and plans to introduce more meaningful performance measures in fiscal 2006. The agency maintains that it is crucial to take into account lessons learned related to the difficulty of the reconstruction environment when developing future measures. (App. III presents information on the performance measures that were included in the main contracts for each sector and the Accelerating Success performance measures provided to decision makers in Washington, D.C.) Throughout fiscal year 2004, staffing problems and security restrictions limited on-site project monitoring. 
Although the USAID mission in Kabul had more staff and better working conditions by late fiscal year 2004 than in previous years, staff levels and turnover continued to pose a challenge. Staff at the USAID mission in Kabul continued to manage a much larger amount of assistance than their counterparts at other missions. Specifically, as of June 2004, staff at the mission in Kabul managed approximately $27.5 million in assistance per staff member while counterparts at other missions managed $1.2 million per staff member. This ratio improved by September 2004 after USAID increased its staff from 41 to 101. In September, the ratio was reduced to about $11.2 million per staff person while the average across USAID missions remained about $1.2 million per staff person. Further, staff turnover in key positions continued in 2004. For example, the mission had three different directors in fiscal year 2004. Similarly, the agriculture sector had five different technical officers in 2004, owing to staff performance problems and delays in finding a permanent officer. According to USAID, the mission in Kabul also did not have sufficient staff in Afghanistan with the technical knowledge to monitor reconstruction projects. To increase technical knowledge, 10 members of the U.S. Army Corps of Engineers were assigned to the Kabul mission by September 2004. To increase the recruitment pool for staff assigned to the USAID mission, in November 2004, USAID's Administrator requested that all staff consider serving in one of four critical posts: Afghanistan, Iraq, Pakistan, and Sudan. In addition, in order to attract and retain more U.S. direct hire staff for extended periods, USAID has increased pay incentives, such as hazard pay, cost-of-living allowances, and overtime remuneration. Security restrictions limited the travel of U.S. direct-hire personnel to program sites outside Kabul, making routine program monitoring difficult. 
In its April 2004 risk assessment of the USAID mission in Kabul, USAID’s Inspector General cited the inability to travel freely to project sites because of security concerns as a material weakness and an overriding constraint to managing assistance activities in Afghanistan. For much of fiscal year 2004, USAID staff had limited access to project sites and depended on reporting from its contractors and grantees. To improve project monitoring, USAID contracted with a nongovernmental organization to conduct site evaluations; however, the contract was not signed until May 2004. Although coordination of U.S. assistance efforts occurred daily throughout fiscal year 2004, new initiatives to improve coordination of U.S. assistance in Afghanistan had mixed results. Also, despite efforts by the Afghan government to better coordinate international donor assistance, problems associated with the effectiveness of coordination mechanisms persisted in 2004. U.S. assistance to Afghanistan in fiscal year 2004 was coordinated primarily through daily meetings of the Afghanistan Interagency Operations Group. The group included representatives from the Department of State’s Office for Afghanistan, USAID, DOD, and other agencies delivering assistance. According to Department of State officials, this formal, interagency committee provided a uniform process for making, and informing the President of, policy-level decisions and for sharing information among agencies. In Afghanistan, U.S. assistance was coordinated through the U.S. Embassy country team. (See fig. 14.) The United States undertook several initiatives in fiscal year 2004 to improve coordination of U.S. assistance in Afghanistan. Specifically, the office of the Commander of the Combined Forces Command-Afghanistan (CFC-A) was moved to the embassy from Bagram Airbase (27 miles north of Kabul) to improve coordination between civilian and military efforts in Afghanistan. 
Further, according to embassy officials we interviewed and documents we reviewed, the Ambassador did not believe that the existing embassy management structure was sufficient to plan, coordinate, and monitor U.S. operations and did not have confidence in the accuracy of reconstruction assistance reporting. To improve reconstruction management, planning, and reporting, the Ambassador created the Embassy Interagency Planning Group, staffed by military officers, to improve reporting on reconstruction projects, facilitate the development and execution of the Mission Performance Plan, and act as a liaison among the embassy, CFC-A, the Afghanistan Interagency Operations Group, and others. The Departments of State and Defense also created the Afghanistan Reconstruction Group (ARG) in fiscal year 2004, recruiting private sector and other experts to serve as strategists to the Ambassador and as sector advisors to key Afghan government ministries. However, the group's mandate, mission, roles, and responsibilities were not delineated or incorporated into the embassy's Mission Performance Plan. In addition, according to ARG, USAID, and Department of State officials, the ARG focused its efforts on criticizing USAID programs rather than providing constructive advice. As a result, animosity developed between the ARG advisors and some USAID and embassy staff. According to USAID and Department of State officials we spoke to, some ARG advisors did not coordinate Afghan ministry meetings with embassy staff or inform them about the meetings' results. State and USAID officials stated that because separate meetings were being held, Afghan government ministries sometimes received conflicting messages about U.S. reconstruction activities. Further, some USAID contractors became confused by ARG advisors' efforts to direct the reconstruction effort. For example, ARG advisors responsible for economic governance issues tried to direct the activities of USAID's contractor for that sector. 
To clarify lines of authority, USAID informed its contractors that they were to take direction from USAID alone. Most U.S. officials we spoke to in November 2004 stated that coordination with the ARG had improved; however, the roles and responsibilities of the ARG remained unclear. To enhance reconstruction efforts, the U.S. government increased the presence in the PRTs of civilian personnel from the Department of State, USAID, and other agencies. By September 2004, about 13 Department of State, 8 USDA, and 13 USAID representatives were stationed alongside U.S. military personnel in PRTs across Afghanistan. However, in the absence of a common doctrine or set of best practices for incorporating civilian personnel, coordination varied depending on each PRT commander's priorities and personal relationships with civilian agency representatives. In addition, we found that stationing civilian personnel in the PRTs did not improve oversight of Kabul-based projects. In general, USAID personnel at the PRTs focused on identifying, implementing, and coordinating PRT-based, quick-impact projects. Few USAID technical officers stationed in Kabul used the USAID PRT staff to help monitor reconstruction projects. We reported that in fiscal year 2003, neither USAID officers stationed in Kabul nor those at PRTs were able to identify the location of many Kabul-directed projects in the field. This problem persisted in 2004 despite the addition of a Kabul-based USAID-PRT coordinator to facilitate logistics and communication. Despite some efforts by the Afghan government to coordinate assistance from international donors, problems associated with the effectiveness of coordination mechanisms persisted throughout 2004. The Afghan government established the National Development Framework and Budget and consultative groups to coordinate international assistance. The development framework and budget established broad national goals and policy direction for a reconstructed Afghanistan. 
The consultative groups were designed to assist in the planning and implementation of the national budget and to coordinate the international community's independent efforts and political objectives. (See fig. 15.) In June 2004, we reported that the coordination of international assistance and the consultative groups had not been effective. We found, among other things, that some donors independently pursued development efforts in Afghanistan; the international community asserted that the Afghan government lacked the capacity and resources to effectively assume the role of coordinator; the terms of reference for the consultative groups were unclear and too broad; the groups were too large and lacked strong leadership; member commitment was uneven; and the overall potential of the mechanism had not been maximized. International coordination improved somewhat in 2004. The national Consultative Group Standing Committee met frequently; the Afghan government presented a consolidated national budget to focus international donations at the Afghanistan Development Forum; and more donors demonstrated increasing commitment to use the national budget to focus their assistance. However, the then Minister of Finance stated that some international donors continue to provide assistance based on what they want to provide rather than on the Afghan government's needs. However, problems with the consultative groups and USAID's coordination with the Afghan government persisted in fiscal 2004. According to the then Minister of Finance, the consultative group mechanism had not matured into a real decision-making forum. More than 1 year after their creation, most groups met infrequently and 5 of the 16 groups had not yet developed terms of reference to guide their efforts. Others that did not produce results, such as the natural resources consultative group, were effectively disbanded. 
Although USAID participated in a number of the consultative groups, some coordination issues remained. For example, according to USAID officials there were extensive contacts between USAID, contractors, and ministry officials, and ministries had to approve building designs and site locations. However, officials from the Ministries of Education and Health believed they had been excluded from participating in the management of the construction of schools and clinics. Further, the lack of coordination among the Ministry of Health, USAID, and the REACH contractor to match clinic construction site selection with the location of health service delivery grant activities created a significant barrier to expanding the basic provision of health services. The Minister of Agriculture stated that he was not regularly informed about the U.S. agriculture program's progress and was unable to respond to public inquiries about the program, increasing skepticism as to whether any assistance was being delivered. Similarly, according to an evaluation commissioned by the Afghan government, the Minister of Finance and his department heads had little input into the initial identification and selection of some of the USAID contracted advisors and were dissatisfied with their qualifications and work. In fiscal year 2004, Afghanistan's security situation remained volatile and, in some parts of the country, seriously deteriorated. Attacks on assistance projects occurred throughout the year, resulting in project delays and the deaths of assistance workers. In addition, dramatic increases in opium cultivation continued to threaten stability in Afghanistan; efforts to reverse the trend, including the development and implementation of a U.S. counternarcotics strategy, began in late 2004. Further, delayed funding continued to hamper the U.S. assistance effort in Afghanistan. 
Most of the funding needed to meet June 2004 Accelerating Success initiative targets was not available until February, just 5 months prior to the target date. In fiscal year 2004, the security situation in Afghanistan was volatile and deteriorated in some regions. Attacks against aid workers, Afghan security forces, and international forces increased. According to U.S. security data and UN reports in August and November 2004, deteriorating security in the south and southeast caused large areas to be “effectively out of bounds to the assistance community” (see fig. 16). In the north—an area commonly viewed as the safest in the country—attacks resulted in the deaths of foreigners and Afghans. Direct attacks on UN compounds and convoys occurred in Kandahar, Konduz, and Hirat provinces as well as other provinces. According to USAID, 81 people involved in assistance activities were killed in 2004. During fiscal year 2004, 70 attacks directly affected USAID programs, causing delays in reconstruction projects. For example, equipment was damaged, work was delayed, and construction workers were kidnapped, wounded, or killed by antigovernment forces attacking USAID's highway construction project. In addition, secondary road projects, agricultural training programs, the distribution of vaccines and medicines, and the construction of schools and clinics, among other reconstruction projects, were delayed or terminated because of attacks. For example, school construction in Uruzgan, Helmand, Paktiya, and Ghazni provinces was at a standstill owing to security threats. Stability across the country in 2004 was threatened by local authorities and military commanders who acted with impunity and were viewed as responsible for a wide range of repressive activities, including acts of intimidation, extortion, arbitrary arrest, illegal detentions, and extrajudicial killings and torture, according to the Department of State, the UN, and human rights groups. 
Factional fighting among warlords in seven provinces in the north and the west of Afghanistan continued in 2004, resulting in the deaths of at least 100 combatants and civilians. Although large areas of the country and some warlords remained beyond the control of the Afghan government in 2004, the Afghan government made some progress in asserting its authority. For example, the Afghan President appointed new governors in about half of the country’s 34 provinces. However, according to the Department of State’s 2004 human rights report for Afghanistan, the government or its agents carried out extrajudicial killings. For example, on August 14, 2004, 17 bodies were discovered at the Shindand market place, with evidence that 6 of the 17 individuals were tortured and beheaded. The United States and the international community continued to take steps to improve security in Afghanistan. Specifically, DOD, coalition, and NATO forces increased the number of provincial reconstruction teams from 4 to 19 in 2003-2004 to enhance security for reconstruction activities. In addition, DOD accelerated its effort to train and deploy Afghan National Army combat troops. As of March 2005, 18,300 troops had been trained and 10,500 troops had been deployed to Kabul central command and 7,800 to four regional commands. However, efforts to equip troops and build supporting military organizations were behind schedule. Further, the United States and Germany had trained more than 35,000 police by January 2005, but the lack of infrastructure and equipment at the provincial and district levels, along with other problems, negatively affected police effectiveness. Finally, as of February 2005, about 40,000 of Afghanistan’s estimated 100,000 official militia forces had been demobilized; however, an estimated 65,000 to 80,000 unofficial militia fighters were still at large. 
The Department of State views the demobilization and reintegration of these forces as critical to improving the country's security and succeeding in the international recovery effort. In 2004, dramatic increases in opium cultivation continued to threaten stability, reconstruction, and state-building in Afghanistan. According to the UN, Afghan drug production increased by approximately 25 percent between 2002 and 2004, owing to high returns, a growing market, rural poverty, political fragmentation, weak law enforcement, and deteriorating security. (See app. IV for 2002-2004 production and revenue statistics.) The UN estimated 2004 opium production at 4,200 metric tons, which represents almost 90 percent of the world's illicit opium supply. Disease and drought kept the yield low; without these mitigating environmental factors, the U.S. government estimated that total production would have been more than 9,700 metric tons. According to the Department of State, the UN, the Afghan government, and others, opium cultivation, drug trafficking, and associated financial gains are having an increasingly harmful influence on Afghan society. Specifically, some national-level officials and many district and provincial government leaders have some criminal connection to the opium trade. With opium-related revenues equivalent to 50 to 60 percent of its GDP over the past 3 years, Afghanistan is on the verge of becoming a narco-state. The increase in opium production and trafficking is threatening reconstruction and state-building in Afghanistan, as well as the nation's longer-term peace. It is undermining legitimate economic activities and the establishment of the rule of law and is responsible for supporting factional agendas and antigovernment elements, including warlords, local commanders, and terrorist organizations. 
The drug trade is also impeding the disarmament, demobilization, and reintegration of former combatants, because those involved in the drug economy are developing and funding private militias needed to run the drug business. Further, the unchecked development of an illicit narcotics-based economy, and the funds it provides to the entrenched interests of antigovernment elements in the provinces, exacerbates problems associated with the central government's effort to extend its writ outside Kabul. The Afghan government and the international community have taken a number of actions to address the narcotics problem in Afghanistan since the signing of the Bonn Agreement in December 2001. These actions have included imposing bans on opium cultivation, drafting counternarcotics strategies, establishing Afghan counternarcotics police organizations, and launching limited eradication efforts (see app. IV for more details). U.S. counternarcotics efforts in 2004 were led by the Department of State's Bureau for International Narcotics and Law Enforcement (INL) Affairs. Additional assistance was provided by DOD. INL obligated $36.5 million, primarily to eradicate poppies and provide alternative livelihoods, and spent about $8.9 million to assist the Afghan Government's central eradication force. DOD obligated $25.7 million for counternarcotics efforts by October 2004 and spent about $8.9 million to train and equip the Afghan Government Counternarcotics Police, build a public affairs capacity within the Ministry of Interior, and create a counternarcotics intelligence organization. However, these and other counternarcotics efforts failed to have any significant impact on the cultivation and processing of opium in 2004 owing to limited security and stability across Afghanistan. For example, as of October 2004, efforts led by the Afghan, UK, and U.S. governments to manually eradicate poppy fields failed. 
In 2004, eradication efforts began after most of the country's opium had been harvested, primarily targeted producers in only 1 of 34 provinces, and resulted in the eradication of less than 1 percent of the hectares cultivated. Meanwhile, although a number of clandestine processing labs were destroyed and limited quantities of opiates seized, no major narcotics traffickers were arrested, and piecemeal training and limited funding have impeded the development of Afghanistan's Counternarcotics Police. According to the Department of State, counternarcotics is now one of the top U.S. priorities. Between June and October 2004, a $776.5 million, five-pillar strategy, implementation plan, and budget for 2005 were developed. The budget would fund five areas: $299 million for eradication programs, $180 million for law enforcement, $172.5 million for interdiction, $120 million to provide legal livelihood alternatives for poppy farmers, and $5 million for a public information campaign. The 2005 strategy faces a number of challenges that may limit its success. The strategy calls for a robust eradication program that includes the use of aerial methods. However, the Afghan government vetoed the use of aerial eradication, making it impossible to affect large areas quickly. During our visit in October 2004, the Governor of the Afghan National Bank stated that eradicating 30 to 50 percent of the country's opium would have a destabilizing impact on the economy. He added that the U.S. government had not consulted the National Bank regarding the economic impact of eradication. U.S. officials stated that funding for the overall U.S.-led effort was needed in January 2005, 4 months before the beginning of the harvest season, but some of the funding was not available until May 2005. The interdiction capabilities of the Afghan government are rudimentary at best, because the government lacks the laws or legal infrastructure needed to investigate and prosecute drug-related crimes. 
Delayed funding continued to negatively impact the U.S. assistance effort in Afghanistan in fiscal year 2004. In our previous report, we noted that delays in fiscal year 2003 funding prevented USAID, in particular, from undertaking major reconstruction activities. As in prior years, most reconstruction money in fiscal year 2004 was provided through emergency supplemental appropriations, with smaller amounts in the agencies’ regular appropriations. USAID received reconstruction money through two appropriations (see fig. 17). In November 2003, Congress appropriated $672 million in emergency supplemental legislation; the Office of Management and Budget (OMB) apportioned $270 million of this funding to USAID in late January 2004 and $372 million in early February. In addition, Congress appropriated $283 million in January 2004 for USAID’s fiscal year 2004 budget for Afghanistan reconstruction. However, the first portion of these funds did not become available for programming by the USAID mission in Kabul until March 2004 owing to delays introduced by the apportionment processes within State, OMB, and USAID. All of these funds were for programs that, under the Accelerating Success initiative, had initial targets of June 2004, giving the agency approximately 3 to 6 months to demonstrate progress. According to USAID officials, to compensate for the funding delays USAID was forced to postpone the start or expansion of programs and move funds between programs to keep faster paced programs operating. USAID continues to face funding delays in fiscal year 2005. In December 2004, Congress passed regular appropriations for the agency, matching USAID’s Afghanistan budget request of $397 million; USAID officials stated at the time that they would be unable to fully implement programs with the amount of their regular appropriations and would rely on supplemental funding to carry out the agency’s planned activities. 
However, the almost $1.1 billion fiscal 2005 emergency supplemental appropriation was not passed until May 2005. USAID, Department of State, and Afghan officials told us that it is difficult to plan and implement large development programs that depend heavily on the passage of uncertain supplemental appropriations. Afghanistan has made progress since the fall of the Taliban in October 2001. As part of an international effort, U.S. assistance, led by USAID, helped Afghanistan elect its first president, return millions of children to school, and repatriate millions of refugees. Despite these gains, Afghanistan’s needs remain great. It ranks as the world’s fifth poorest country; half of all Afghans live below the poverty line and more than 20 percent cannot meet their daily food requirements. Further, factional elements remain in control of some areas of the country, perpetrating crimes against citizens, and insurgents continue to infiltrate the country. These conditions leave the nation at risk of once again becoming a threat to itself and others. The U.S. has pledged to maintain a long-term presence in Afghanistan, including increasing the number and scope of USAID contracts. In 2004, the focus of U.S. support to Afghanistan shifted from primarily emergency assistance to reconstruction programs, with large scopes of work and costs, in an effort to accelerate progress. Despite its considerable investment in Afghanistan’s reconstruction, USAID struggled with contract management and project oversight. Although a long-term, country-level strategy was approved as of July 2005, USAID operated throughout 2004 without a comprehensive strategy. In addition, USAID has not developed a performance management plan to monitor project performance, nor has it focused contractors’ efforts on developing project-specific performance plans. Without such plans, the U.S. government cannot accurately assess the results of its assistance efforts. 
Consequently, decision makers in Washington and Kabul cannot effectively target resources to accomplish the goal of creating a stable Afghan society. To improve on existing efforts to measure and assess the progress of U.S. reconstruction projects toward achieving U.S. policy goals, and to provide a basis for planning future reconstruction projects, we recommend that the Administrator of USAID take the following three actions: (1) establish a performance management plan that complies with USAID directives, (2) clearly stipulate in all future reconstruction contracts that contractors are to develop performance management plans specific to the work they are conducting, and (3) more completely communicate the performance information obtained from the performance management plans to executive branch decision makers in Kabul and Washington. We provided a draft of this report to the Departments of State and Defense and to USAID to obtain their comments. The Departments of State and Defense declined to comment on the report. USAID commented that in general it found the report to be a comprehensive and detailed assessment of the U.S. civilian reconstruction efforts in Afghanistan during fiscal year 2004. USAID concurred with the report’s recommendations and indicated that it has made progress in improving its strategic planning and performance measurement processes. Specifically, USAID completed its first long-term country-level strategy for Afghanistan to cover the period from 2005 through 2010. The agency also indicated that it has begun developing a performance management plan. USAID also provided information on more recent activities and technical comments, which we incorporated where appropriate. Copies of this report are being sent to the Secretary of Defense, the Secretary of State, the Administrator of USAID, relevant congressional committees, and other interested parties. We will also make copies available to others upon request.
In addition, the report will be made available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-3149 or at gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and major contributors are listed in appendix V. The Afghanistan Freedom Support Act of 2002 directs GAO to monitor U.S. humanitarian and reconstruction assistance to Afghanistan. To meet the requirements of the directive and provide Congress with a comprehensive accounting of U.S. assistance to Afghanistan for the fiscal year 2004 period, we analyzed (1) U.S. obligations and expenditures, (2) the progress and results of U.S. humanitarian and reconstruction efforts, (3) the management of U.S. assistance and mechanisms to coordinate U.S. and international assistance, and (4) the major factors that obstructed the advancement of the assistance effort and the achievement of U.S. policy goals. We collected data on fiscal year 2004 obligations and expenditures from the U.S. departments and agencies responsible for implementing U.S. government–funded projects in Afghanistan. These include the U.S. Departments of Agriculture, Defense, Health and Human Services, Labor, State, and Treasury; the Broadcasting Board of Governors; the U.S. Trade and Development Agency; and the U.S. Agency for International Development (USAID). Because no single repository contains financial information for all U.S. assistance in Afghanistan, we contacted each agency directly. 
For the Department of State, we contacted each bureau and office separately—the Bureau of Population, Refugees, and Migration; the Bureau for International Narcotics and Law Enforcement Affairs; the Office of Humanitarian Demining Programs; and the Office to Monitor and Combat Trafficking in Persons—because the Department of State does not have a consolidated financial reporting mechanism for programs in Afghanistan that tracks both obligations and expenditures. To distinguish funding for humanitarian and quick-impact projects from longer-term reconstruction funding, we requested the agencies to designate their funding appropriately. For USAID, we generally relied on the stated mission of the responsible funding bureau to determine the funds’ purpose unless the agency informed us otherwise. For example, we assumed that funding for the Office of Foreign Disaster Assistance and the Office of Transition Initiatives was generally used, in accordance with their respective missions, to address emergency situations and implement quick-impact projects; funds for the offices’ various long-term projects were clearly marked in the financial reporting that USAID supplied to us. To delineate the distribution of funding and projects by province, we report information that USAID provided from a programmatic, rather than a financial, database. The financial database did not include data by location, and the programmatic database included only province-level obligation data. Because data on nationwide programs were not included in the programmatic database, we were unable to compare overall totals between the financial and programmatic databases to verify consistency. Also, because the programmatic database tracks only obligations, we were unable to determine USAID’s expenditures by province. To assess the reliability of the obligations and expenditures data from U.S. 
agencies providing assistance to Afghanistan, we (1) interviewed officials at the Department of Defense (DOD), the Department of State, and the U.S. Agency for International Development (USAID) regarding their methods of gathering, managing, and using data; (2) reviewed USAID’s financial audit statement; and (3) compared the data we gathered with USAID’s Congressional Budget Justifications and State’s 150 account documentation, as well as with the governmentwide Afghanistan assistance data compiled by State’s Bureau of Resource Management. According to a Department of State official, the data compiled by the agency’s Bureau of Resource Management are not complete, owing in part to differences in how the agencies track data, a disconnect between agencies’ Washington and Kabul offices, and variation in the frequency of reporting. However, the Department of State relies on these data for decision-making purposes and to report to Congress. Based on our assessment, we concluded that the data on obligations and expenditures we collected from each agency are sufficiently reliable for the purpose of showing, in gross numbers, the levels of U.S. nonsecurity-related assistance to Afghanistan in fiscal year 2004. To assess the reliability of data for pledges by international donors, we (1) interviewed the Department of State official responsible for compiling these data based on information provided by the government of Afghanistan and (2) compared the data’s reliability with that of other information sources. We determined that the data are sufficiently reliable for the purpose of broadly comparing the United States’ contributions with those of other major donors and the combined total for all other donors. However, we noted several limitations in the data, notably, that they are self-reported by donor nations to the Afghan government. Furthermore, the data for larger donors are considered more reliable than the data for smaller donors, according to the Department of State. 
Owing to these limitations and our lack of access to donor nations’ financial records, we were unable to determine the reliability of the dollar amounts reported to have been pledged by each donor. Nevertheless, we present the reported pledges in appendix II for the purpose of broadly comparing the U.S. contributions with those of other major donors. To examine the results of assistance projects through September 30, 2004, we focused our efforts on the major USAID reconstruction contracts signed prior to the start of fiscal year 2004. The contracts account for approximately 85 percent of the United States’ reconstruction expenditures for the fiscal year. We collected and analyzed information from the Departments of State and Defense, and USAID in Washington, D.C., outlining policy goals, basic strategies, program objectives, and monitoring efforts. We reviewed the periodic progress reports provided by both USAID and its implementing partners for all the major reconstruction projects. To assess the reliability of these reports, we contacted each of USAID’s cognizant technical officers in Kabul about the reliability of the information provided in the implementing partners’ reports. While they noted that security restrictions and the large territory in Afghanistan make monitoring difficult, all of the cognizant technical officers we contacted consider the data to be generally reliable for the purposes of providing an overall status of the projects. In October 2004, we traveled to Afghanistan to examine the implementation of USAID’s and Defense’s assistance-related operations. While in Afghanistan, we spent 12 days in the capital city, Kabul, interviewing officials from the Afghan Ministries of Finance, Health, and Agriculture; the Central Bank; the U.S. Departments of State and Defense; and USAID. 
We also met with most of USAID’s primary implementing partners (including the International Organization for Migration, the Louis Berger Group, Inc., Creative Associates International Inc., Chemonics, Bearing Point, the International Republican Institute, the International Foundation for Election Systems, Management Sciences for Health, Management Systems International, Population Services International, Technologists Incorporated, and the Asia Foundation). In addition, we met with the officials from the British Embassy in Kabul responsible for counternarcotics initiatives. In Kabul, we inspected the rehabilitation of the Rabia Balkhi Women’s hospital. We also spent 8 days in the Ghazni, Hirat, Kunduz, and Nangahar provinces, where we reviewed U.S.-funded projects, implemented primarily by USAID’s Office of Transition Initiatives, USAID’s PRT-based staff, or Defense’s PRTs. While in these provinces, we met with provincial governors, district leaders, teachers, healthcare workers, and other community members involved in, or affected by, U.S. reconstruction projects. Constraints placed on our movement within Afghanistan by the U.S. Embassy due to security concerns limited the number of project sites we could visit. To analyze the assistance coordination mechanisms developed by the U.S. government and the international community, we met with Department of State staff responsible for assistance coordination. We also met with staff from USAID; the Departments of Agriculture, Commerce, Defense, and Treasury; and the U.S. Trade and Development Agency who were involved in the provision of assistance, to obtain their views on the coordination of assistance. In addition, we reviewed the U.S. National Security Strategy; the State-USAID consolidated strategic plan for fiscal years 2004-2009; the President’s Security Strategy for Afghanistan; the U.S. Embassy–Kabul Mission Program Plan; and USAID’s strategy and action plan for Afghanistan. 
Our analysis of international coordination mechanisms included a review of United Nations (UN) and Afghan government documents, including the Afghan National Development Framework and Budget, pertaining to the international coordination mechanisms utilized in Afghanistan in fiscal year 2004. In addition, we met with officials from the Afghan Ministries of Agriculture, Finance, and Health, and from the Central Bank to obtain their views on the evolution and status of the consultative group mechanism. To analyze the obstacles that affected the implementation of U.S. reconstruction assistance, we reviewed reports produced by the Departments of State and Defense, USAID, the UN, the International Crisis Group, and the Afghanistan Research and Evaluation Unit. To assess the reliability of the UN data on opium production, we reviewed the methodology used by the UN to estimate levels of opium poppy cultivation and opium production. We determined that the UN data are sufficiently reliable for the purpose of this report. Finally, we discussed the obstacles and their impact with officials from the Afghan ministries of Agriculture, Finance, and Health; the Afghan Central Bank; the Afghan Counternarcotics Directorate; USAID; and the Department of State. We conducted our review from August 2004 to May 2005 in accordance with generally accepted government auditing standards. The Accelerating Success initiative performance measures reported to the Afghanistan Interagency Operations Group in fiscal year 2004 were initially developed by USAID during a 3-day period in June 2003. The measures were modified during that fiscal year with input from other agencies and represent a subset of the measures reported for each of the major reconstruction contracts. The development of performance measures for each major contract varied. In some sectors, such as agriculture, performance measures were included in the contract. 
In other sectors, such as health, the measures were set out in a performance management plan developed by the contractor after the contract was awarded or, as in the economic sector, were developed late in the project and published in periodic progress reports. The tables below describe the Accelerating Success performance measures reported by the Afghanistan Interagency Operations Group and the more detailed measures developed by individual contractors for the major reconstruction contracts. The RAMP contract contains 14 performance measures (see table 6), including program outputs such as the implementation of 615 irrigation projects and project outcomes such as increasing the average productivity of approximately 500,000 farm families by more than 100 percent. However, the contractor did not have systems in place to capture information for all measures. Of the three primary awards for democracy and governance activities, USAID required only one implementing partner, MSI, to develop a performance monitoring plan containing performance measures (see table 8). The other two partners, The Consortium for Elections and Political Process Strengthening and the Asia Foundation, were required to produce quarterly reports but were not required to develop specific targets or intermediate results. Consequently, the quarterly reports described activities undertaken during that time period, rather than progress achieved against specified targets. The economic governance contract did not specifically require the contractor to develop performance measures. Instead, it required the reporting of “milestones” in quarterly work plans. No quarterly plans were produced until July 2004; consequently, no measures were reported until that time. (See table 10.) USAID required the health contractor, Management Sciences for Health (MSH), to develop implementation plans and performance monitoring plans. MSH reports on selected performance indicators in these plans semiannually. 
(See table 14.) The reported measures also provide detailed narrative about progress on primary and secondary road projects and ongoing power-generation projects. Irrigation projects are tracked as part of the agriculture sector, and school and clinic construction and renovation are tracked as parts of the education and health sectors, respectively. The measures do not track water and sanitation projects. See table 16 below. The Afghanistan Freedom Support Act of 2002 and the 2004 emergency supplemental legislation mandated assistance to Afghan women. USAID implemented and tracked most of these objectives either as part of its other sector programs or through individual women-targeted projects (see table 17). However, neither gender-specific performance measures for sector programs nor the results of individual women-targeted projects were reported to the Afghanistan Interagency Operations Group. John Hutton, David Bruno, Miriam A. Carroll, and Christina Werth made key contributions to this report. In addition, Martin de Alteriis, Mark Dowling, Etana Finkler, Reid Lowe and Adam Vodraska provided technical assistance.
In October 2001, coalition forces forcibly removed the Taliban regime from Afghanistan, responding to the regime’s protection of al Qaeda terrorists who attacked the United States. Congress subsequently passed the Afghanistan Freedom Support Act of 2002 authorizing funds to help Afghanistan rebuild a stable, democratic society. The act directed GAO to monitor the implementation of U.S. humanitarian, development, and counternarcotics assistance. This report analyzes, for fiscal year 2004, (1) U.S. obligations and expenditures, (2) progress and results of assistance efforts, (3) assistance management and coordination, and (4) major obstacles that affected the achievement of U.S. goals. The United States spent $720 million on nonsecurity-related assistance to Afghanistan in fiscal year 2004. Approximately 75 percent paid for reconstruction activities, with the remainder supporting humanitarian and quick-impact projects. By contrast, in 2002-2003, humanitarian and quick-impact assistance accounted for more than three-fourths of U.S. spending. The United States continued to be the largest donor, contributing about 38 percent of the $3.6 billion pledged by the international community. U.S. humanitarian assistance benefited vulnerable populations in fiscal year 2004. Further, the United States increased reconstruction assistance to Afghanistan and made notable progress in several sectors through its Accelerating Success initiative. Although progress varied among sectors, the United States did not meet all of its targets due to security and other obstacles. For example, USAID intended to rehabilitate or build 286 schools by the end of 2004. However, owing to poor contractor performance and security problems, by September 2004 it had completed only 8. As in 2002-2003, complete financial information was not readily available, and USAID lacked a comprehensive strategy to direct its efforts. 
Further, USAID did not consistently require contractors to fulfill contract provisions needed to ensure accountability and oversight. USAID also did not systematically collect information needed to assess the progress of its major projects. Moreover, measures provided by the embassy to decision-makers in Washington did not comprehensively portray progress in each sector or the overall U.S. program. Deteriorating security, increased opium production, and delayed funding continued to obstruct U.S. reconstruction efforts in fiscal year 2004 and threatened the achievement of U.S. goals. Deteriorating security rendered large areas inaccessible to the assistance community, and the continued rise in opium production undermined legitimate economic activity. In addition, most assistance funds were not available until nearly 6 months into the fiscal year, preventing USAID from accelerating reconstruction efforts.
Linking efficiency to physician payment policy has been a subject of interest among policymakers and health policy analysts. For example, the Institute of Medicine has recently recommended that Medicare payment policies should be reformed to include a system for paying health care providers differentially based on how well they meet performance standards for quality or efficiency or both. In April 2005, CMS initiated a demonstration mandated by the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (BIPA) to test this approach. Under the Physician Group Practice demonstration, 10 large physician group practices, each comprising at least 200 physicians, are eligible for bonus payments if they meet quality targets and succeed in keeping the total expenditures of their Medicare population below annual targets. Several studies have found that Medicare and other purchasers could realize substantial savings if a portion of patients switched from less efficient to more efficient physicians. The estimates vary according to assumptions about the proportion of beneficiaries changing physicians. In 2003, the Consumer-Purchaser Disclosure Project, a partnership of consumer, labor, and purchaser organizations, asked actuaries and health researchers to estimate the potential savings to Medicare if a small proportion of beneficiaries started using more efficient physicians. The Project reported that Medicare could save between 2 and 4 percent of total costs if 1 out of 10 beneficiaries moved to more efficient physicians. This conclusion is based on information received from one actuarial firm and two academic researchers. One researcher concluded, based on his simulations, that if 5 to 10 percent of Medicare enrollees switched to the most efficient physicians, savings would be 1 to 3 percent of program costs—which would amount to about $5 billion to $14 billion in 2007. 
The Congress has also recently expressed interest in approaches to constrain the growth of physician spending. The Deficit Reduction Act of 2005 required the Medicare Payment Advisory Commission (MedPAC) to study options for controlling the volume of physicians’ services under Medicare. One approach for applying volume controls that the Congress directed MedPAC to consider is a payment system that takes into account physician outliers. In our report on which this statement is based, we sought information about other purchasers’ profiling efforts designed to encourage physicians to practice efficiently. We selected 10 health care purchasers that profiled physicians in their networks—that is, compared physicians’ performance to an efficiency standard to identify those who practiced inefficiently. To measure efficiency, the purchasers we spoke with generally compared actual spending for physicians’ patients to the expected spending for those same patients, given their clinical and demographic characteristics. Most purchasers said they also evaluated physicians on quality. The purchasers linked their efficiency profiling results and other measures to a range of physician-focused strategies to encourage the efficient provision of care. Some of the purchasers said their profiling efforts produced savings. Having considered the efforts of other health care purchasers in profiling physicians for efficiency, we conducted our own profiling analysis of physician practices in Medicare and found individual physicians who were likely to practice medicine inefficiently in each of 12 metropolitan areas studied. We selected areas that were diverse geographically and in terms of Medicare spending per beneficiary. We focused our analysis on generalists—physicians who described their specialty as general practice, internal medicine, or family practice. 
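The purchasers’ basic efficiency measure—actual spending for a physician’s patients compared with risk-adjusted expected spending for those same patients—can be sketched in a few lines. The risk weights, function name, and ratio threshold below are illustrative assumptions for exposition, not any purchaser’s actual model:

```python
# Illustrative sketch (assumed weights, not any purchaser's actual model)
# of the observed-to-expected efficiency measure described above.
RISK_WEIGHT = {"low": 0.5, "average": 1.0, "high": 2.3}  # assumed risk weights

def efficiency_ratio(patients, mean_spending):
    """patients: list of (health_status, actual_spending) tuples.
    mean_spending: average spending per beneficiary in the population.
    Returns observed/expected spending; values well above 1.0 would
    suggest a physician's patients cost more than their health status
    and demographics predict."""
    observed = sum(actual for _, actual in patients)
    expected = sum(RISK_WEIGHT[status] * mean_spending
                   for status, _ in patients)
    return observed / expected
```

A ratio near 1.0 would indicate spending in line with expectations for the physician’s case mix; how far above 1.0 a ratio must be to label a physician inefficient is one of the criteria each purchaser had to choose.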
Although we did not include specialists in our analysis, our method does not preclude profiling specialists, as long as enough data are available to make meaningful comparisons across physicians. Under our methodology, we computed the percentage of overly expensive patients in each physician’s Medicare practice. To identify overly expensive patients, we grouped the Medicare beneficiaries in the 12 areas according to their health status, using diagnostic and demographic information. We classified beneficiaries as overly expensive if their total Medicare expenditures—for services provided by all health providers, not just physicians—ranked in the top fifth of their health status cohort for 2003 claims. Within each health status cohort, we observed large differences in total Medicare spending across beneficiaries. For example, in one cohort of beneficiaries whose health status was about average, overly expensive beneficiaries—the top fifth ranked by expenditures—had average total expenditures of $24,574, as compared with the cohort’s bottom fifth, averaging $1,155. (See fig. 1.) This variation may reflect differences in the number and type of services provided and ordered by these patients’ physicians as well as factors not under the physicians’ direct control, such as a patient’s response to and compliance with treatment protocols. Holding health status constant, overly expensive beneficiaries accounted for nearly one-half of total Medicare expenditures even though they represented only 20 percent of beneficiaries in our sample. Once these patients were identified and linked to the physicians who treated them, we were able to determine which physicians treated a disproportionate share of these patients compared with their generalist peers in the same location. We classified these physicians as outliers—that is, physicians whose proportions of overly expensive patients would occur by chance less than 1 time in 100. 
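The cohort-and-outlier method described above can be sketched as follows. The helper names, the use of a normal approximation to the binomial test, and the 20 percent baseline are illustrative assumptions for exposition, not the exact statistical procedure used in our analysis:

```python
from statistics import NormalDist

def flag_overly_expensive(cohort_spending):
    """Within one health-status cohort, flag the top fifth of
    beneficiaries by total spending as 'overly expensive'.
    cohort_spending: {beneficiary_id: total_spending}."""
    ranked = sorted(cohort_spending, key=cohort_spending.get, reverse=True)
    cutoff = max(1, len(ranked) // 5)  # top 20 percent of the cohort
    return set(ranked[:cutoff])

def is_outlier(n_patients, n_expensive, baseline=0.20, alpha=0.01):
    """A physician is an outlier if the practice's share of overly
    expensive patients would occur by chance less than 1 time in 100,
    given the 20 percent baseline rate. Uses a normal approximation to
    the binomial test with a continuity correction (an illustrative
    simplification of an exact binomial test)."""
    if n_patients == 0:
        return False
    mean = n_patients * baseline
    sd = (n_patients * baseline * (1 - baseline)) ** 0.5
    p = 1 - NormalDist(mean, sd).cdf(n_expensive - 0.5)  # one-sided p-value
    return p < alpha
```

For example, a generalist with 219 Medicare patients would be expected to have about 44 overly expensive patients by chance; a count of 60 or more would fall below the 1-in-100 threshold under this approximation.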
Notably, all physicians had some overly expensive patients in their Medicare practice, but outlier physicians had a much higher percentage of such patients. We concluded that these outlier physicians were likely to be practicing medicine inefficiently. Based on 2003 Medicare claims data, our analysis found outlier generalist physicians in all 12 metropolitan areas we studied. The Miami area had the highest percentage—almost 21 percent—of outlier generalists, followed by the Baton Rouge area at about 11 percent. (See table 1.) Across the other areas, the percentage of outliers ranged from 2 percent to about 6 percent. In 2003, outlier generalists’ Medicare practices were similar to those of other generalists, but the beneficiaries they treated tended to experience higher utilization of certain services. Outlier generalists and other generalists saw similar average numbers of Medicare patients (219 compared with 235) and their patients averaged the same number of office visits (3.7 compared with 3.5). However, after taking into account beneficiary health status and geographic location, we found that beneficiaries who saw an outlier generalist, compared with those who saw other generalists, were 15 percent more likely to have been hospitalized, 57 percent more likely to have been hospitalized multiple times, and 51 percent more likely to have used home health services. By contrast, they were 10 percent less likely to have been admitted to a skilled nursing facility. Medicare’s data-rich environment is conducive to identifying physicians who are likely to practice medicine inefficiently. Fundamental to this effort is the ability to make statistical comparisons that enable health care purchasers to identify physicians practicing outside of established standards. 
CMS has the tools to make statistically valid comparisons, including comprehensive medical claims information, sufficient numbers of physicians in most areas to construct adequate sample sizes, and methods to adjust for differences in patient health status. Among the resources available to CMS are the following: Comprehensive source of medical claims information. CMS maintains a centralized repository, or database, of all Medicare claims that provides a comprehensive source of information on patients’ Medicare-covered medical encounters. Using claims from the central database, each of which includes the beneficiary’s unique identification number, CMS can identify and link patients to the various types of services they received and to the physicians who treated them. Data samples large enough to ensure meaningful comparisons across physicians. The feasibility of using efficiency measures to compare physicians’ performance depends, in part, on two factors: the availability of enough data on each physician to compute an efficiency measure and numbers of physicians large enough to provide meaningful comparisons. In 2005, Medicare’s 33.6 million FFS enrollees were served by about 618,800 physicians. These figures suggest that CMS has enough clinical and expenditure data to compute efficiency measures for most physicians billing Medicare. Methods to account for differences in patient health status. Because sicker patients are expected to use more health care resources than healthier patients, the health status of patients must be taken into account to make meaningful comparisons among physicians. Medicare has significant experience with risk adjustment, a methodological tool that assigns individuals a health status score based on their diagnoses and demographic characteristics. For example, CMS has used increasingly sophisticated risk adjustment methodologies over the past decade to set payment rates for beneficiaries enrolled in managed care plans. 
On the related topic of measuring resource use, CMS noted in comments on a draft of our report that emerging “episode grouper” technology was a promising approach to measuring resource use associated with a given episode of care. We agree, but we also consider our measurement of resource use on a per capita basis, capturing total health care expenditures for a given period of time, equally promising. To conduct profiling analyses, CMS would likely make methodological decisions similar to those made by the health care purchasers we interviewed. For example, the health care purchasers we spoke with made choices about whether to profile individual physicians or group practices; which risk adjustment tool was best suited for a purchaser’s physician and enrollee population; whether to measure costs associated with episodes of care or the costs, within a specific time period, associated with the patients in a physician’s practice; and what criteria to use to define inefficient practice patterns. As for ways CMS could use profiling results, actions taken by other health care purchasers we interviewed may be instructive in suggesting future directions for Medicare. For example, all purchasers in our study used physician education as part of their strategy to change behavior. Educational outreach to physicians has been a long-standing and widespread activity in Medicare as a means to change physician behavior based on profiling efforts to identify improper billing practices and potential fraud. Outreach includes letters sent to physicians alerting them to billing practices that are inappropriate. In some cases, physicians are given comparative information on how the physician varies from other physicians in the same specialty or locality with respect to use of a certain service. A physician education effort based on efficiency profiling would therefore not be a foreign concept in Medicare. 
For example, CMS could provide physicians a report that compares their practice’s efficiency with that of their peers. This would enable physicians to see whether their practice style is outside the norm. In its March 2005 report to the Congress, MedPAC recommended that CMS measure resource use by physicians and share the results with them on a confidential basis. MedPAC suggested that such an approach would enable CMS to gain experience in examining resource use measures and identifying ways to refine them while affording physicians the opportunity to change inefficient practices. In commenting on a draft of our report, CMS noted that the agency would incur significant recurring costs in developing reports on physician resource use and disseminating them nationwide. We agree that any such undertaking would need to be adequately funded. Another application of profiling results used by the purchasers we spoke with entailed sharing comparative information with enrollees. CMS has considerable experience comparing certain providers on quality measures and posting the results to a Web site. Currently, Medicare Web sites with comparative information exist for hospitals, nursing homes, home health care agencies, dialysis facilities, and managed care plans. In its March 2005 report to the Congress, MedPAC noted that CMS could share results of physician performance measurement with beneficiaries once the agency gained sufficient experience with its physician measurement tools. Several structural features of the Medicare program would appear to pose challenges to the use of other strategies designed to encourage efficiency. 
These features include a beneficiary’s freedom to choose any licensed physician permitted to be paid by Medicare; the lack of authority to exclude physicians from participating in Medicare unless they engage in unlawful, abusive, or unprofessional practices; and a physician payment system that does not take into account the efficiency of the care provided. Under these provisions, CMS would not likely be able—in the absence of additional legislative authority—to assign physicians to tiers associated with varying beneficiary copayments, tie fee updates of individual physicians to meeting performance standards, or exclude physicians who do not meet practice efficiency and quality criteria. In commenting on our draft report, CMS was silent with regard to the need for legislative authority. The agency noted that it is studying and implementing initiatives that link assessment of physician performance to financial and other incentives, such as public reporting. Regardless of how profiling results are used, it is critical that any actions taken involve, and be accepted by, the physician community and other stakeholders. Several purchasers described how they had worked to get physician buy-in. They explained their methods to physicians and shared data with them to increase physicians’ familiarity with and confidence in the purchasers’ profiling. CMS has several avenues for obtaining the input of the physician community. Among them is the federal rule-making process, which generally provides a comment period for all parties affected by prospective policy changes. In addition, CMS forms federal advisory committees—including ones composed of physicians and other health care practitioners—that regularly provide it with advice and recommendations concerning regulatory and other policy decisions. 
Having considered the tools CMS has available and the structural challenges the agency would likely face in seeking to implement certain incentives used by other purchasers, we recommended in our April 2007 report that the Administrator of CMS develop a profiling system—seeking legislative authority, as necessary—that includes the following elements: (1) total Medicare expenditures as the basis for measuring efficiency; (2) adjustments for differences in patients’ health status; (3) empirically based standards that set the parameters of efficiency; (4) a physician education program that explains to physicians how the profiling system works and how their efficiency measures compare with those of their peers; (5) financial or other incentives for individual physicians to improve the efficiency of the care they provide; and (6) methods for measuring the impact of physician profiling on program spending and physician behavior. Policymakers have expressed interest in linking physician performance to Medicare payment so that incentives under FFS for physicians to practice inefficiently can be reversed. In our view, Medicare should adopt an approach that relies not only on physician education but also on financial or other incentives—such as discouraging patients from obtaining care from physicians who are determined to be inefficient. A primary virtue of profiling is that, coupled with incentives to encourage efficiency, it can create a system that operates at the individual physician level. In this way, profiling can address a principal criticism of the SGR system, which operates only at the aggregate physician level. Although any savings from physician profiling alone would clearly not be sufficient to correct Medicare’s long-term fiscal imbalance, it could be an important part of a package of reforms aimed at future program sustainability. Mr. Chairman, this concludes my prepared remarks. I will be pleased to answer any questions you or the Subcommittee Members may have. 
For future contacts regarding this testimony, please contact A. Bruce Steinwald at (202) 512-7101 or at steinwalda@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions include Phyllis Thorburn, Assistant Director; Todd Anderson; Hannah Fein; Richard Lipinski; and Eric Wedum. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO was asked to discuss—based on Medicare: Focus on Physician Practice Patterns Can Lead to Greater Program Efficiency, GAO-07-307 (Apr. 30, 2007)—the importance in Medicare of providing feedback to physicians on how their use of health care resources compares with that of their peers. GAO’s report discusses an approach to analyzing physicians’ practice patterns in Medicare and ways the Centers for Medicare & Medicaid Services (CMS) could use the results. In a related matter, Medicare’s sustainable growth rate system of spending targets used to moderate physician spending growth and annually update physician fees has been problematic, acting as a blunt instrument and lacking in incentives for physicians individually to be attentive to the efficient use of resources in their practices. GAO’s statement focuses on (1) the results of its analysis estimating the prevalence of inefficient physicians in Medicare and (2) the potential for CMS to profile physicians in traditional fee-for-service Medicare for efficiency and use the results in ways that are similar to other purchasers’ efforts to encourage efficiency. Having considered efforts of 10 private and public health care purchasers that routinely evaluate physicians for efficiency and other factors, GAO conducted its own analysis of physician practices in Medicare. GAO focused the analysis on generalists—physicians who described their specialty as general practice, internal medicine, or family practice—and selected metropolitan areas that were diverse geographically and in terms of Medicare spending per beneficiary. Although GAO did not include specialists in its analysis, its method does not preclude profiling specialists, as long as enough data are available to make meaningful comparisons across physicians. Based on 2003 Medicare claims data, GAO’s analysis found outlier generalist physicians—physicians who treat a disproportionate share of overly expensive patients—in all 12 metropolitan areas studied. 
Outlier generalists and other generalists saw similar numbers of Medicare patients, and their respective patients averaged the same number of office visits. However, after taking health status and location into account, GAO found that Medicare patients who saw an outlier generalist—compared with those who saw other generalists—were more likely to have been hospitalized, more likely to have been hospitalized multiple times, and more likely to have used home health services. By contrast, they were less likely to have been admitted to a skilled nursing facility. GAO concluded that outlier generalists were likely to practice medicine inefficiently. CMS has tools available to evaluate physicians’ practices for efficiency, including a comprehensive repository of Medicare claims data to compute reliable efficiency measures and substantial experience adjusting for differences in patients’ health status. The agency also has wide experience in conducting educational outreach to physicians with respect to improper billing practices and potential fraud—providing individual physicians, in some cases, comparative information on how the physician varies from other physicians in the same specialty or in other ways. A physician education effort based on efficiency profiling would therefore not be a foreign concept in Medicare. For example, CMS could provide physicians a report that compares their practice’s efficiency with that of their peers, enabling physicians to see whether their practice style is outside the norm. As for implementing other strategies to encourage efficiency, such as the use of certain financial incentives, CMS would likely need additional legislative authority. CMS agreed with the need to measure physician resource use in Medicare but raised concerns about the costs involved in reporting the results and was silent on other strategies discussed beyond physician education. 
GAO concurs that resource use measurement and reporting activities would require adequate funding; however, GAO is concerned that efforts to achieve efficiency that rely solely on physician education without financial or other incentives for physicians to curb inefficiencies will be suboptimal.
Over the past several decades, the introduction of two types of drugs—traditional and atypical antipsychotic drugs—for treating schizophrenia, and in some cases, bipolar disorder, has enabled physicians to better manage their patients’ mental illnesses, resulting in a better quality of life for many veterans. Because schizophrenia severely impairs thinking, language, perception, mood, and behavior, schizophrenics often withdraw from society and retreat into a world of delusions, hallucinations, and fantasies. With drug treatment, approximately 60 to 70 percent of schizophrenics experience either complete remission or only mild symptoms of the disease; the remaining 30 to 40 percent continue to experience psychotic symptoms. Most patients with schizophrenia are maintained on antipsychotic drugs throughout their lives since symptoms return in over 70 percent of stable patients who stop taking their drugs. Bipolar disorder, also known as manic-depressive illness, is characterized by extreme and unpredictable mood swings, ranging from high excitement or euphoria—where the patient is energetic and confident—to despair or deep depression, where the patient may feel sad, helpless, apathetic, angry, or suicidal. As with schizophrenia, bipolar disorder can impair a patient’s ability to function. To control bipolar episodes, physicians often prescribe mood-stabilizing drugs, but in cases where these drugs are not effective, physicians may prescribe antipsychotic drugs on a short-term basis. The introduction of traditional and atypical antipsychotic drugs has also helped facilitate a shift in treatment settings for adults with severe mental illness, both in the VA system and in the general medical community, from expensive inpatient care in hospitals to less costly outpatient care in community-based treatment facilities. Traditional antipsychotic drugs were first introduced in the 1950s. 
While these drugs are effective in treating psychosis, they can often cause severe side effects, such as involuntary body and facial movements, tremors, and contractions. For example, after 5 years of taking traditional drugs, patients have a 32 percent chance of developing a sometimes irreversible movement disorder, and after 25 years, they have a 68 percent chance. The Food and Drug Administration first approved atypical antipsychotic drugs in 1989, and five are currently available for use. (See table 1.) They are considered as effective as traditional drugs in treating psychosis, but they are much less likely to cause the severe involuntary movements associated with the traditional drugs. While atypical drugs also have side effects— some of which can be serious—most occur with less severity than the side effects associated with traditional drugs. The side effects vary among the atypical drugs and include sedation, sexual dysfunction, cardiac problems, and sudden drops in blood pressure. Additional side effects are weight gain and elevated cholesterol that could lead to heart disease and diabetes. Various studies and psychiatrists we interviewed have concluded that because the side effects are reduced, patients are more likely to stay on their drug therapy and have fewer relapses of psychosis when taking atypical antipsychotic drugs. Over the last few years, the number of prescriptions for atypical antipsychotic drugs has increased dramatically in VA. In fiscal year 1999, 62 percent of all antipsychotic drug prescriptions were for atypical drugs; by fiscal year 2001, more than 80 percent were for atypical drugs. Antipsychotic drugs—both traditional and atypical—are VA’s third most expensive class of drugs. In fiscal year 2001, VA filled more than 1.5 million 30-day antipsychotic prescriptions for more than 176,000 patients at a cost of $158 million, accounting for 7 percent of its total pharmacy budget. 
Overall, atypical antipsychotic drugs are more costly than traditional antipsychotic drugs. For VA, the average daily cost of atypical drugs is about 17 times higher than the average daily cost of traditional drugs. However, the average daily cost among atypical drugs varies. In fiscal year 2001, clozapine cost about $8 a day per patient, while quetiapine cost less than $3 a day. (See fig. 1.) In 2001, 30-day prescriptions were written for all five atypical antipsychotic drugs, with olanzapine and risperidone prescribed most often to veterans. (See fig. 2.) In 2001, most patients began treatment on risperidone, olanzapine, or quetiapine. (See fig. 3.) Choosing which atypical antipsychotic drug to prescribe for patients can be difficult. Experts have concluded that the scientific evidence is not sufficient to favor any one of the atypical drugs. Each has been proven effective in clinical trials, but effectiveness appears to depend on the particular patient. A panel of academic researchers reviewed available scientific evidence in 1999 and concluded that the three most widely used atypicals—risperidone, quetiapine, and olanzapine—are comparable in efficacy, safety, and patient tolerability. The Cochrane Collaboration, an organization that systematically reviews randomized clinical medical trials, reviewed the evidence comparing two of the atypical drugs, risperidone and olanzapine. It concluded that little evidence exists to suggest choosing one drug over the other. Three internal VA panels in the last 3 years agreed that none of the three most widely used atypical drugs could be judged better than the others. Studies conducted by drug manufacturers have been inconclusive in comparing atypical drugs. The studies often were too short in duration to draw conclusions about the drugs’ long-term effects. 
In addition, many studies excluded substance abusers and those who were violent and uncooperative—a significant problem in determining effectiveness because many schizophrenics meet these criteria. The National Institute of Mental Health has recently funded a $42 million study that will compare the five atypical antipsychotic drugs available today to each other and to a traditional antipsychotic drug. This study, the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE), will examine 1,800 schizophrenic patients, including patients with substance abuse and other medical problems. Four VA facilities are among the 53 medical facilities participating in CATIE. One of the objectives of the CATIE study is to identify specific patient profiles for each drug in order to guide physicians in selecting the best atypical drug for their patients. The study’s results are expected to be available by 2006. VA physicians are generally free to prescribe any drug on the formulary. VISNs and facilities can place restrictions on some drugs that require close monitoring to ensure appropriate use, but these restrictions cannot be based solely on cost. Usually psychiatrists are the practitioners that prescribe atypical drugs for psychotic patients, although some facilities allow other types of physicians to write refill prescriptions or to prescribe these drugs for nonpsychotic patients with dementia or diseases such as Parkinson’s and Alzheimer’s. VA policy requires that all drugs on the formulary be available at each VA pharmacy. VA further requires its 22 VISNs to establish approval processes for prescribers to obtain drugs not listed on their formularies. In addition, to provide flexibility in meeting local patient needs, VA allows VISNs to add drugs to network formularies to supplement the national formulary. 
Pharmacy and Therapeutics committees in each VISN, consisting of physicians, pharmacists, and other health care professionals, are usually responsible for selecting these additional drugs. According to VA’s Pharmacy Benefits Management Strategic Health Care Group’s Medical Advisory Panel officials, VA chose not to limit the number of atypical drugs available on the formulary because such limits potentially restrict physicians’ ability to prescribe the most appropriate drug for their patients. Four of the five atypical antipsychotic drugs—olanzapine, risperidone, quetiapine, and clozapine—are listed on VA’s national formulary. The fifth, ziprasidone, which the Food and Drug Administration approved in 2001, is not listed on the national formulary, but is available through local nonformulary approval processes. VA generally does not place drugs on the formulary until they have been on the market at least 1 year. To educate physicians about the increasing importance and cost of atypical antipsychotics and to provide uniform information in the face of increasing pharmaceutical industry marketing to VA psychiatrists, VA issued the guideline for prescribing atypical antipsychotics to supplement VA’s overall treatment guidelines for managing patients with psychosis. In addition to discussing appropriate drug therapy, the overall treatment guidelines include sections on the evaluation, diagnosis, and social rehabilitation of patients with psychoses. The overall treatment guidelines currently recommend that psychiatrists treating patients with psychosis either prescribe moderate doses of traditional antipsychotic drugs or prescribe atypical antipsychotic drugs. VA is currently revising the guidelines for managing patients with psychosis, including recommending atypical drugs before traditional drugs for these patients. 
While the National Alliance for the Mentally Ill (NAMI) and the National Mental Health Association stated that physicians could consider cost when prescribing antipsychotic drugs, each has voiced concerns that some local VA officials might use the guideline more stringently to cut costs—either by restricting physician access to more expensive atypical drugs for new patients or by switching stable patients to the less expensive atypical drugs. Officials from the American Psychiatric Association and the National Association of VA Physicians and Dentists have expressed similar concerns. VA’s guideline for prescribing atypical antipsychotic drugs is consistent with published clinical practice guidelines commonly used by public and private health care systems. Like most other practice guidelines, VA’s guideline recommends that physicians use their best medical judgment, based on clinical circumstances and patients’ needs, when choosing among the atypical drugs. VA’s prescribing guideline also recommends that physicians use cost as a factor in deciding which atypical antipsychotic to prescribe when no clinical reason exists to choose one drug over another—a practice most of the public and private sector psychiatric experts we interviewed agreed is reasonable, appropriate, and consistent with providing quality, cost-effective medical care. VA’s prescribing guideline, which supplements its broader psychosis treatment guidelines, is similar to the four clinical guidelines most widely accepted by public and private health systems—the Texas Medication Algorithm Project (TMAP); The Expert Consensus Guideline Series: Treatment of Schizophrenia; The Schizophrenia Patient Outcomes Research Team (PORT); and the American Psychiatric Association Practice Guideline for the Treatment of Patients with Schizophrenia. (See table 2.) Like VA’s guideline, each suggests that therapy be based on physicians’ assessment of patient needs and is not intended to interfere with clinical judgment. 
VA’s prescribing guideline aims to assist physicians in selecting from its national formulary the most cost-effective atypical antipsychotic drugs for their patients without interfering with their clinical judgment. VA’s prescribing guideline for atypical antipsychotic drugs is reprinted in appendix II. For information on how the guideline was developed, see appendix III. Specifically, VA’s guideline states that (1) the guideline is to be used only for new patients or for patients not responding favorably to traditional medications; (2) therapy is ultimately based on physicians’ assessment of patient needs, and the guidelines are not intended to interfere with clinical judgment; and (3) because no consensus exists in scientific literature to support that one atypical antipsychotic drug is superior to another, physicians should begin treatment with one of the less expensive atypical antipsychotic drugs on VA’s national formulary if there are no patient-specific reasons to prescribe one drug over another. For cases where no clinical reason exists to prescribe one atypical drug over another, VA’s guideline includes an algorithm showing the suggested treatment order for prescribing the four atypical antipsychotic drugs on VA’s formulary. The guideline’s algorithm recommends that physicians first prescribe risperidone or quetiapine, in either order, to patients with a first episode of psychosis or patients with chronic psychosis who have relapsed. The algorithm lists olanzapine as the next drug that physicians should try, and clozapine as the last drug. The guideline’s treatment order reflects VA’s prices of the drugs—risperidone and quetiapine are significantly less expensive than olanzapine. Clozapine is not only the most expensive drug, but it is seldom used because of its risk of causing a life-threatening blood disorder. 
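The guideline's cost-based ordering described above amounts to a simple ordered fallback. The sketch below is an illustration, not VA's actual decision tool: the function and data shapes are invented for this example, and it fixes risperidone ahead of quetiapine even though the guideline allows either first. The first branch reflects the guideline's controlling rule that patient-specific clinical judgment overrides the cost order.

```python
# Suggested treatment order from the guideline's algorithm, in rising
# order of VA cost; clozapine comes last because it is both the most
# expensive and carries the risk of a life-threatening blood disorder.
TREATMENT_ORDER = ["risperidone", "quetiapine", "olanzapine", "clozapine"]

def next_atypical(tried, clinical_preference=None):
    """Pick the next atypical antipsychotic to try for a patient.

    clinical_preference, when given, always wins: the guideline applies
    the cost-based order only when no patient-specific clinical reason
    favors one drug over another.
    """
    if clinical_preference:
        return clinical_preference
    for drug in TREATMENT_ORDER:
        if drug not in tried:
            return drug
    # All four formulary drugs exhausted; ziprasidone may be considered
    # through the local nonformulary approval process.
    return "ziprasidone"
```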
Because ziprasidone has only recently received Food and Drug Administration approval, it is not included on VA’s national formulary, and it is not included in the algorithm. However, the guideline states that it may be considered for patients with intolerance or a poor response to the other atypical drugs. In the preface to its algorithm, VA’s prescribing guideline discusses the importance of cost-effective, high-quality care. According to officials responsible for developing the TMAP and PORT guidelines, their guidelines did not include cost because they were meant to be broad and apply to a wide variety of organizations. Nevertheless, some health care systems that use these guidelines also consider cost. For example, the Texas Department of Mental Health and Mental Retardation has a supplemental policy that recommends using the less expensive atypical antipsychotics before other atypicals when appropriate. It asks its physicians to choose the least expensive of the three drugs recommended by TMAP for new patients when their clinical judgment does not indicate the use of one atypical drug over another. The Massachusetts Medicaid behavioral health program has a similar approach. It follows the PORT guidelines, and in 1999 issued a memorandum with additional guidance and a cost-effectiveness study to its psychiatrists pointing out that risperidone was less expensive and just as effective as olanzapine for new patients. The memorandum and study were issued to highlight the importance of using cost as a factor in deciding which drug to prescribe. Because available scientific evidence and expert opinion suggest that all atypical drugs are appropriate treatment for psychosis, incorporating cost into VA’s prescribing guideline is reasonable, appropriate, and consistent with providing cost-effective health care. 
The Institute of Medicine has concluded that when no marginal therapeutic benefit is expected from more expensive drugs, guideline developers may reasonably recommend less expensive drugs. Almost all of the psychiatric experts we interviewed—including those in charge of TMAP and PORT—said that asking physicians to consider drug cost as a factor when prescribing atypical antipsychotic drugs is reasonable, appropriate, and consistent with providing cost-effective quality medical care to patients. Psychiatrists from the National Institute of Mental Health, which funds antipsychotic drug research, also agreed that it was appropriate for psychiatrists to consider less expensive atypical drugs. The co-chairman of VA’s Committee on Care of Severely Chronically Mentally Ill Veterans stated that the VA guideline represents quality medical care because no scientific evidence exists to recommend one drug over another and because physicians make the final prescribing decisions based on their medical judgment. State mental health officials from California, Georgia, and Florida—states that do not use cost to rank medications—recognize the importance of considering costs when choosing among them. For example, the medical director of Georgia’s Division of Mental Health, Mental Retardation and Substance Abuse stated that in the face of recent state budget cuts of 2.5 to 5 percent, the state may consider adopting guidelines similar to VA’s that include cost as a factor. While neither Florida nor California officials suggest that physicians should use atypical drugs in any particular order, state health officials agreed that cost could be a factor in prescribing these drugs. Most VISNs use VA’s prescribing guideline. The policies and procedures for implementing the guideline vary as some facilities have added a requirement for prescribing atypical antipsychotic drugs. 
This additional requirement calls for pharmacists or senior psychiatrists to review prescriptions for one of the atypical antipsychotic drugs and to confer with the prescribing psychiatrist on the appropriateness of the prescription. The vast majority of psychiatrists who responded to our survey reported they are free to prescribe the atypical antipsychotic drugs consistent with their best clinical judgment. However, we identified some facility policies and procedures that conflict with the intent of VA’s prescribing guideline, which asks physicians to consider cost only if there is no clear clinical choice for one drug over another. We contacted the formulary leaders at each VISN and had further discussions with psychiatrists and pharmacists in selected VISNs to determine if the prescribing guideline was being used. Eighteen of the formulary leaders reported that their VISNs use VA’s prescribing guideline. Two other formulary leaders reported that their VISNs were using different guidelines—one VISN modified the guideline to include ziprasidone in the algorithm and the other VISN developed a guideline that does not suggest a treatment order or use cost as a determining factor under any circumstances. The remaining two formulary leaders stated that their VISNs do not use guidelines for prescribing atypical antipsychotic drugs. In implementing VA’s prescribing guideline, some VISNs simply distributed the guideline to facilities for use, and some facilities combined guideline distribution with group discussions on the costs of atypical antipsychotic drugs. Officials from one VISN distributed the guideline to its facilities along with pocket-sized cards for each psychiatrist showing the prices and doses for every antipsychotic drug. Despite the fact that most VISNs use the guideline, not all psychiatrists told us they were aware of it. Specifically, in our survey we asked psychiatrists if they had seen or been briefed on the prescribing guideline. 
Of those responding, 66 percent reported that they had, 11 percent reported that they were unsure, and 23 percent reported that they had not. In addition, formulary leaders, psychiatrists, and pharmacists in five VISNs told us that several facilities require physicians to follow additional policies and procedures for prescribing atypical antipsychotic drugs. (See table 3.) Some of them also told us that the need to manage cost is the primary reason for implementing additional prescribing procedures for atypical antipsychotic drugs at their facilities. Since VA issued the prescribing guideline, it has reiterated its policy that the guideline not interfere with physicians’ clinical judgment. Most psychiatrists we interviewed agree that the intent of VA’s policy is being followed. The vast majority of the psychiatrists who responded to our survey—91 percent—indicated that they have been able to prescribe the atypical antipsychotic drugs that are best for their patients. Nevertheless, a number of psychiatrists—9 percent of those who responded to our survey—reported they did not feel free to prescribe the antipsychotic drug of their choice. These psychiatrists are generally concentrated in a few VISNs. For example, in VISN 22, 33 percent of responding psychiatrists reported that they did not feel free to prescribe the atypical antipsychotic drug that they believed was best for some of their patients, and in VISN 18, the rate was 22 percent. Three other VISNs had rates of more than 10 percent. Conversely, four VISNs had no psychiatrists who felt they could not exercise their clinical judgment in prescribing these drugs. (See fig. 4.) (See appendix IV for additional survey information for each VISN.) 
Our survey showed that several VISNs with one or more facilities that have additional prescribing requirements for atypical antipsychotic drugs also had relatively high percentages of psychiatrists who reported they were not always free to prescribe the most appropriate atypical drug. For example, VISN 22—which had the highest percentage of physicians who reported they were not free to prescribe the drug of their choice—has four facilities that require pharmacists to review prescriptions for olanzapine. Psychiatrists’ concerns may be related to cost-control procedures at some facilities that have limited access to atypical antipsychotic drugs—practices that conflict with the prescribing guideline. For example, the Miami VA Medical Center no longer requires physicians to first select among the traditional antipsychotic drugs before prescribing any atypical drugs, but it does require that psychiatrists prescribe risperidone and quetiapine before prescribing olanzapine. The chief pharmacist at the center told us that this policy was implemented to control cost. This policy conflicts with the prescribing guideline, because cost has greater weight than physicians’ clinical judgment. Furthermore, VA psychiatrists at other facilities reported that their managers exerted pressure to prescribe the lower cost atypical drugs. One psychiatrist stated that facility administrators pushed for prescribing less expensive atypical drugs, even though the psychiatrist’s evaluation of some patients indicated that these drugs would be less effective than the more costly atypical drug olanzapine. In addition, 31 of the 876 psychiatrists that we included in our survey analysis reported that they believed prescribing high-cost atypical antipsychotic drugs could affect their performance ratings. About 22 percent of the psychiatrists who responded to our survey reported that they are required to follow additional VISN or facility procedures for prescribing olanzapine. 
While these procedures help the facility manage pharmaceutical use, they have the potential to overemphasize cost containment if they put pressure on physicians to prescribe the less expensive drugs. Examples where this could happen are discussed below. In VA’s Greater Los Angeles Healthcare System, Los Angeles, California, part of VISN 22, all psychiatrists provide written justifications for olanzapine prescriptions, which are reviewed by pharmacists or senior psychiatrists. For routine requests—such as those for VA patients who are already stable on olanzapine or patients who did not respond favorably to other atypical antipsychotic drugs—the pharmacist fills the prescription. For nonroutine requests—such as those for new patients who have not previously taken atypical antipsychotic drugs—the pharmacist forwards the request and written justification to a senior psychiatrist who reviews them and may discuss recommended treatment options with the prescribing physician. In the 4 months after the prescribing guideline was implemented, 11 percent of all olanzapine requests were denied as part of the facility’s cost-containment procedures. However, according to a member of the facility’s Pharmacy and Therapeutics Committee, the facility may eliminate these cost-control measures entirely as a result of a January 2002 notice from the Under Secretary for Health that discusses VA policy when treating patients with psychosis. The VA San Diego Healthcare System, San Diego, California, part of VISN 22, also regulates olanzapine use, but it does not require prescribing physicians to provide written justification. Instead, pharmacists trained in the use of drugs to treat mental illness are required to review all prescriptions for olanzapine and discuss treatment options with prescribing physicians, recommending the lower cost risperidone or quetiapine first for patients who have not tried them. 
For cases where the psychiatrist does not agree with the pharmacist’s recommendation, the case is forwarded to the chief psychiatrist or the facility’s pharmacy and therapeutics committee for final approval or denial. The Carl T. Hayden VA Medical Center, Phoenix, Arizona, part of VISN 18, requires clinical pharmacists to review prescriptions for olanzapine for patients who have not tried less expensive atypical drugs and to discuss with the prescribing physician the clinical reason for choosing one drug over another. The pharmacist may recommend risperidone and quetiapine; however, if the psychiatrist disagrees with the recommendation, the prescription is referred to the chief psychiatrist for review. If the matter is still not resolved, another psychiatrist will review the case. If the original prescribing psychiatrist still disagrees with the recommendation, the matter is referred to and decided by the facility’s chief of medicine. In addition, psychiatrists have been asked to examine their cases of veterans who are currently on olanzapine to determine if these veterans could be switched to a less expensive atypical drug. If this practice results in switching, using cost to justify changing the drugs of patients would not be consistent with the intent of VA’s prescribing guideline. In July 2001, the Secretary of Veterans Affairs testified before the Senate Committee on Veterans’ Affairs that physicians are free to prescribe any medication on the VA formulary, consistent with VA policy that formulary drugs cannot be restricted based solely on cost. At the same time, the Deputy Under Secretary for Health asked VISN directors to ensure that none of their facilities’ policies or procedures restrict physician access to the atypical drugs. Further, the Assistant Deputy Under Secretary for Health stated that the clinical judgment of each veteran’s individual psychiatrist should determine which atypical antipsychotic drug to prescribe. 
Also, the conference report on VA’s fiscal year 2002 appropriations directed the Secretary of Veterans Affairs to communicate to physicians existing VA policy that physicians are to use their best clinical judgment when choosing atypical antipsychotic drugs. In response, VA’s Under Secretary for Health issued a notice on January 16, 2002, reiterating the conference report’s message. Atypical antipsychotic drugs are essential to providing quality mental health care; however, they vary significantly in cost. To educate physicians on the effectiveness of atypical antipsychotic drugs and their costs, VA implemented a prescribing guideline based on scientific evidence and expert consensus. This guideline is consistent with widely accepted guidelines in other public and private health care systems. If properly implemented, the guideline would result in mental health care that is both high quality and cost-effective, and providing it to VA physicians is appropriate. One of the major challenges facing managers at VA facilities in managing pharmacy costs is the high cost of atypical antipsychotic drugs. Consultations between prescribing physicians, senior psychiatrists, and pharmacists on the appropriate use of atypical drugs—including asking physicians to explain their drug choices and to consider using an alternative, less expensive atypical drug—could be effective ways to help manage the cost of drugs as well as to educate physicians on the clinical aspects of each drug. Such consultations provide vital information for consideration by physicians when choosing the most appropriate drugs for their patients with psychosis, and nationally the vast majority of psychiatrists report that their clinical judgment, not cost factors, determines which atypical drugs they prescribe. However, procedures at a few facilities have limited or could restrict access to certain atypical antipsychotic drugs on VA’s national formulary because of cost considerations. 
Such procedures are contrary to VA’s prescribing guideline for atypical antipsychotic drugs. To ensure that the atypical antipsychotic prescribing guideline is implemented consistent with VA intent, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to monitor implementation of the guideline by VISNs and facilities. In doing so, the Secretary should ensure that facility policies and procedures conform to the intent of the guideline and allow physicians to prescribe the most appropriate atypical antipsychotic drugs for their patients. VA provided written comments on a draft of this report, which are reprinted in appendix V. VA concurred with our recommendation that the prescribing guideline be implemented consistently throughout the VA health care system. VA also stated that the Veterans Health Administration (VHA) will continue to coordinate with VISN clinical managers to ensure the intent of the guideline is understood by all involved and appropriately implemented systemwide. VA also stated that VHA would continue to routinely monitor prescribing patterns of atypical antipsychotic drugs through its national drug utilization database in order to identify and address any outliers in drug usage that might become apparent. However, we found that while VHA was periodically reviewing atypical antipsychotic drug utilization mainly at the national and VISN levels, it had no formal plan to systematically review the data to monitor compliance with the guideline at the facility level. Thus, we caution VA against relying too heavily on national and VISN data. Doing so might not detect individual facility policies that could restrict access to the more costly atypical antipsychotic drugs. We are sending copies of this report to the Secretary of Veterans Affairs; appropriate congressional committees; and other interested parties. We will also make copies available to others upon request. 
If you have any questions on matters discussed in this report, please contact me at (202) 512-7101. Another contact and key contributors are listed in appendix VI. To determine how the Department of Veterans Affairs (VA) developed its prescribing guideline, and what it expected to accomplish with it, we interviewed and obtained relevant documentation from the officials who developed the guideline, including officials from VA’s Pharmacy Benefits Management Strategic Healthcare Group, its Medical Advisory Panel, the Mental Health Strategic Healthcare Group, and the Office of Quality and Performance. We also spoke with VA’s Assistant Deputy Under Secretary for Health, obtained records of internal VA communication concerning the guideline, and reviewed testimony from senior VA officials. To determine the clinical guidelines for atypical antipsychotic drugs that are commonly used and accepted by the general medical community, and to compare VA’s prescribing guideline on atypical antipsychotic drugs to these guidelines, we interviewed officials and obtained documentation from several organizations, including the Department of Health and Human Services’ National Institute of Mental Health, Substance Abuse and Mental Health Services Administration, and Centers for Medicare and Medicaid Services, as well as the National Association of State Mental Health Program Directors. We compared VA’s guideline with the four most commonly used guidelines—The Texas Medication Algorithm Project; The Expert Consensus Guideline Series: Treatment of Schizophrenia; The Schizophrenia Patient Outcomes Research Team; and the American Psychiatric Association Practice Guideline for the Treatment of Patients with Schizophrenia—and interviewed officials from the Texas Medication Algorithm Project and the Schizophrenia Patient Outcomes Research Team. We also interviewed experts on the use of atypical antipsychotic drugs. 
To determine commonly used policies for prescribing atypical antipsychotic drugs, we interviewed officials from private mental health care delivery systems, pharmacy benefits management companies, and the Department of Defense. For geographical dispersion, we selected and obtained information from the Medicaid or mental health departments of five states: California, Florida, Georgia, Massachusetts, and Texas. To determine the nature and extent of the guideline’s implementation in VA’s Veterans Integrated Service Networks (VISN), we interviewed each VISN formulary leader. Formulary leaders are the liaisons between VISN management and VA officials responsible for managing the national formulary. We visited or contacted the following 14 VA facilities, chosen in part because of their procedures for prescribing atypical antipsychotic drugs:
VISN 1 – Edith Nourse Rogers Memorial Veterans Hospital, Bedford, Massachusetts; and Providence VA Medical Center, Providence, Rhode Island.
VISN 2 – Canandaigua VA Medical Center, Canandaigua, New York; Samuel S. Stratton VA Medical Center, Albany, New York; and VA Healthcare Network Upstate New York at Syracuse, Syracuse, New York.
VISN 7 – Atlanta VA Medical Center, Decatur, Georgia.
VISN 8 – James A. Haley Veterans Hospital, Tampa, Florida; and Miami VA Medical Center, Miami, Florida.
VISN 11 – VA Ann Arbor Healthcare System, Ann Arbor, Michigan; and John D. Dingell VA Medical Center, Detroit, Michigan.
VISN 18 – Carl T. Hayden VA Medical Center, Phoenix, Arizona.
VISN 20 – VA Puget Sound Health Care System, Seattle, Washington.
VISN 22 – VA Greater Los Angeles Healthcare System, Los Angeles, California; and VA San Diego Healthcare System, San Diego, California. 
To determine local policies and practices on atypical antipsychotic drug usage at the 14 facilities that we visited or contacted, including how the guideline was implemented, we interviewed pharmacy leadership, mental health leadership, or individual psychiatrists and we collected relevant documents. To assess the effect of these guidelines and other atypical antipsychotic drug policies and procedures on psychiatrists throughout the VA system, we surveyed VA psychiatrists. Using electronic mail, we distributed an internet-based survey to VA’s entire November 2001 reported population of 1,723 psychiatrists. Of these psychiatrists, 903 or approximately 52 percent responded. Response rates by VISN ranged from 33 percent to nearly 72 percent. However, for analysis purposes, we included only the 876 psychiatrists who prescribed an atypical antipsychotic drug in the 12 months prior to the mailing of our survey in November 2001. We took steps to determine if psychiatrists who reported they lacked freedom to prescribe the more costly atypical antipsychotic drugs were more likely to respond to our survey than were psychiatrists who reported they had such freedom. For each VISN, we compared the response rate from its psychiatrists with their responses to the survey question “When, in your clinical judgment, a more costly atypical antipsychotic drug is warranted, do you feel free to prescribe the more costly drug?” We found no indication that psychiatrists’ answers to the question were related to their VISN’s response rate. In addition, we conducted telephone interviews on a random sample of 29 nonrespondents. We asked them the same question—if they felt free to prescribe the more costly atypical antipsychotic drugs. Their responses to this question were similar to those from psychiatrists who responded to the survey. 
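The per-VISN check described above (comparing each VISN's response rate with its psychiatrists' answers to the prescribing-freedom question) can be sketched in a few lines. The VISN labels and figures below are hypothetical illustrations, not the survey's actual data:

```python
# Minimal sketch of a nonresponse-bias check: compare each VISN's survey
# response rate with the share of its psychiatrists answering "yes" to the
# prescribing-freedom question. All figures below are hypothetical.
visn_data = {
    "VISN A": (0.33, 0.90),  # (response rate, share answering "yes")
    "VISN B": (0.52, 0.93),
    "VISN C": (0.72, 0.89),
}

def correlation(pairs):
    """Pearson correlation between response rates and 'yes' shares."""
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

r = correlation(list(visn_data.values()))
# A correlation near zero is what such a check looks for: no indication
# that psychiatrists' answers were related to their VISN's response rate.
print(f"correlation: {r:.2f}")
```

A formal analysis would add a significance test, but the basic comparison is just this: if answers and response rates move together, nonrespondents may differ systematically from respondents.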
Based on these results, we have no reason to believe that psychiatrists who felt restricted in their prescribing practices were over-represented in our survey results and therefore, our results are generalizable to the entire population. To help identify problems with guideline implementation, we interviewed officials or reviewed documents from two large mental health advocacy groups--the National Alliance for the Mentally Ill and the National Mental Health Association. We also interviewed officials from the National Association of VA Physicians and Dentists and VA’s Committee on the Care of Severely Chronically Mentally Ill Veterans. In addition, we reviewed correspondence from the American Psychiatric Association regarding VA’s prescribing guideline. Department of Veterans Affairs Pharmacy Benefits Management, Medical Advisory Panel, and Mental Health Strategic Healthcare Group Guideline for Atypical Antipsychotic Use Selection of therapy for individual patients is ultimately based on physicians' assessment of clinical circumstances and patient needs. At the same time, prudent policy requires appropriate husbanding of resources to VA to meet the needs of all our veteran patients. These guidelines are not intended to interfere with clinical judgment. Rather, they are intended to assist practitioners in providing cost effective, consistent, high quality care. The following recommendations are dynamic and will be revised, as new clinical data become available. 1) Prioritize the use of atypical antipsychotic medication for new antipsychotic medication starts and for patients not responding to or having problematic side effects on typical antipsychotic medication. 
2) Though differences in the clinical effectiveness and pharmacoeconomic profile of the atypicals have been suggested by some studies, there is no consensus in the literature to support one being globally superior to another; therefore, once the physician determines there are no patient-specific issues, begin therapy with an effective, less expensive agent. At the present time, this would lead to the preference of quetiapine and risperidone over olanzapine. 3) Utilize current local approaches of clinical assessment to determine response to medication and whether medication changes are indicated. Such assessments should include the presence and severity of positive and negative symptoms, AIMS score, tremor, weight, and GAF. 4) For patients currently on olanzapine, consider a trial of risperidone or quetiapine in the face of relapse or significant/problematic weight gain or other side effects. The guideline's accompanying drug selection algorithm proceeds stepwise for a first episode of psychosis (*may need to adjust for age, co-morbidities, and other factors):
a) Risperidone OR b) Quetiapine (trial for up to 10 weeks): Response?
c) Olanzapine (trial for up to 10 weeks): Response?
d) Clozapine (trial for up to 6 months): Response?
If no response: typical antipsychotic if never tried (trial for up to 10 weeks) OR clozapine if never tried (trial for 6 months). Consider a trial of haloperidol or fluphenazine decanoate for patients non-adherent to therapy.
In February 2001, VA’s Pharmacy Benefits Management Strategic Healthcare Group’s Medical Advisory Panel formed a task force of two VA psychiatrists and two VA pharmacists to develop a guideline for prescribing atypical antipsychotic drugs. According to the panel, such a guideline would help physicians prescribe them appropriately and cost effectively. The task force members were selected based on their mental health clinical expertise and diverse skills. See figure 5 for the timeline and process of the task force. 
The task force reviewed scientific literature on the effectiveness, including side effects, of the atypical antipsychotic drugs and examined existing VISN guidance on prescribing these drugs. Based on these reviews, the task force drafted the guideline for prescribing atypical drugs. The draft guideline was then reviewed and modified by the Medical Advisory Panel and VA mental health officials. VISN pharmacy leaders and the Medical Advisory Panel approved the guideline. In July 2001, VA Pharmacy Benefits Management posted the guideline to its web site and sent it to the VISNs. In the past, VA Pharmacy Benefits Management has used the same process to develop several similar guidelines for prescribing other classes of drugs. The Institute of Medicine, in a recent report on VA’s national formulary, commended VA for these previous pharmacy-specific guidelines, stating that they were based on current scientific and clinical research data and their recommendations were consistent with recommendations of other leading medical organizations. VA’s commissioning of a task force of health care professionals to review medical literature and develop a guideline based on that literature is an accepted practice. For example, the Department of Defense and the American Psychiatric Association developed clinical practice guidelines this way. Supplementing the literature with input from medical experts, as VA did, is also consistent with accepted medical practice. An Institute of Medicine report on developing clinical guidelines strongly urges that processes for developing and revising guidelines be firmly based on scientific evidence and expert clinical judgment. Most other published guidelines for atypical antipsychotic drugs were developed using some combination of evidence from scientific literature and experts’ judgments. 
The survey asked psychiatrists the following questions, with results tabulated as the percentage answering “Yes” and “No” in each VISN (location): 1 (Boston), 2 (Albany), 3 (Bronx), 4 (Pittsburgh), 5 (Baltimore), 6 (Durham), 7 (Atlanta), 8 (Bay Pines), 9 (Nashville), 10 (Cincinnati), 11 (Ann Arbor), 12 (Chicago), 13 (Minneapolis), 14 (Omaha), 15 (Kansas City), 16 (Jackson), 17 (Dallas), 18 (Phoenix), 19 (Denver), 20 (Portland), 21 (San Francisco), and 22 (Long Beach).
1. Prior to receiving GAO’s email notifying you of this survey, had you been briefed on or provided a copy of these guidelines?
2. When prescribing _________, do psychiatrists at your facility have to follow procedures not required for most other drugs, such as obtaining approval, providing justification, or some other procedure?
3. When, in your clinical judgment, a more costly atypical antipsychotic drug is warranted, do you feel free to prescribe the more costly drug?
In addition to the contact named above, Cherie M. Starck, Beverly J. Brooks-Hall, William R. Simerl, Michael Tropauer, Karen M. Sloan, Deborah L. Edwards, and Susan Lawes made key contributions to this report. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. 
You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to daily E-mail alert for newly released products” under the GAO Reports heading.
The Department of Veterans Affairs (VA) provides health care services to veterans who have been diagnosed with psychosis--primarily schizophrenia, a disorder that can substantially limit their ability to care for themselves, secure employment, and maintain relationships. These veterans also have a high risk of premature death, including suicide. Effective treatment, especially antipsychotic drug therapy, has reduced the severity of their illnesses and increased their ability to function in society. VA's guideline for prescribing atypical antipsychotic drugs is sound and consistent with published clinical practice guidelines used by public and private health care systems. VA's prescribing guideline recommends that physicians use their best clinical judgment, based on clinical circumstances and patients' needs, when choosing among the atypical drugs. Most Veterans Integrated Service Networks (VISN) and facilities use VA's prescribing guideline; however, five VISNs have additional policies and procedures for prescribing atypical antipsychotic drugs. Although these procedures help manage pharmaceutical costs, they also have the potential to give more weight to cost than to clinical judgment, which is not consistent with the prescribing guideline.
In 1956, Congress established the Judgment Fund—a permanent, indefinite appropriation—to pay judgments against federal agencies that are not otherwise provided for in agency appropriations. Among other things, the fund is intended to allow for more prompt payments to claimants, thereby reducing the assessment of interest against federal agencies (where allowed by law) during the period between the rendering and payment of an award. In 1961, legislation was enacted allowing the fund to pay Department of Justice settlements of ongoing or imminent lawsuits against federal agencies. The No FEAR Act requires federal agencies to reimburse the Judgment Fund for payments of judgments, awards, or settlements that the fund makes to employees, former employees, or job applicants in connection with litigation alleging violation of certain federal laws. The Senate committee report accompanying the No FEAR Act explains that the act is intended to prompt federal agencies to pay more attention to their equal employment opportunity and whistleblower complaint activities and act more expeditiously to resolve complaints before they get to court. Accordingly, No FEAR Act cases include those brought before federal courts under discrimination statutes and certain cases brought before the Merit Systems Protection Board (MSPB), including discrimination and whistleblower protection claims. These latter cases, however, typically result in either a settlement while the case is pending at MSPB or an award issued by MSPB, both of which are paid out of agency funds, not the Judgment Fund. As provided for under the No FEAR Act, the President designated OPM to issue regulations to carry out the agency reimbursement provisions of the law. OPM’s interim final regulations issued earlier this year state that the procedures that agencies must use to reimburse the Judgment Fund are those prescribed by FMS. 
Under procedures prescribed by Treasury, FMS Judgment Fund branch analysts, in consultation with FMS’s Office of the Chief Counsel, certify whether a judgment, award, or settlement is appropriate for payment and whether the agency on whose behalf payment was made must reimburse the fund. FMS does not review the merits underlying the claim nor certify the merits of the judgment or award. FMS estimates that in fiscal year 2003 it spent about $334,000 to certify, pay, and seek reimbursement for CDA claim payments and about $240,000 to certify and pay discrimination claims (see tables 1 and 2). FMS estimates that it will have to allocate approximately $171,500 for personnel costs to seek reimbursement for discrimination claims under the No FEAR Act in fiscal year 2004 (see table 3). This estimate includes about $119,500 in costs to set up and administer accounts receivable and seek reimbursement for No FEAR Act payments. FMS estimates that it will also incur a one-time start-up cost of about $52,000 for its information technicians to upgrade computer systems to create and track No FEAR Act accounts receivable. FMS expects no increase in either the number of personnel or budgeted funds to handle No FEAR Act reimbursements. FMS’s estimates assume that it will pay the same number of discrimination claims under the No FEAR Act in fiscal year 2004 as it paid the previous year. FMS’s estimates also assume there will be no increase in the cost of processing discrimination claim payments in fiscal year 2004. Treasury’s estimate of fiscal year 2004 costs for No FEAR Act claim payments also assumed that agency compliance with the No FEAR Act would be similar to that under CDA. According to the Judgment Fund branch, actual costs to Treasury may vary from the estimate because of differences in the nature of the claims under the two laws. 
Although the certification, payment, and accounting mechanisms that FMS uses for No FEAR Act and CDA payments are virtually the same, some of Treasury’s current and anticipated procedures to seek reimbursement from federal agencies for claims paid under the two laws differ. For both No FEAR Act and CDA payments, the Judgment Fund branch analysts ensure that all documents submitted by the agency and other parties have (1) the proper signatures and court seals, (2) contact name and telephone number, and (3) an appropriate address. Payment from the Judgment Fund is then certified by FMS and made through Treasury’s Philadelphia Financial Center by check or electronic funds transfer. Once payment is made, FMS reduces the fund’s balance, records an expense by the fund, and records an account receivable in its recoveries account for the federal agency on whose behalf the payment was made. The debtor federal agency is required to record an account payable to the Judgment Fund. Those amounts remain a receivable on FMS’s books and a payable on the agency’s books until it reimburses the fund. FMS sends letters to agencies to verify account balances quarterly. The agencies must also review their balances and confirm them to FMS. According to FMS, on the basis of the cash receipts history for federal agencies and the age of some of the Judgment Fund’s accounts receivable, it expects that a percentage of the money owed by federal agencies will probably not be paid back. To allow for this, FMS calculates a percentage, which it calls an allowance factor, based on the age of the receivable and the agency’s payment history. According to FMS, it applies the allowance factor to an agency’s outstanding accounts receivable to arrive at a dollar amount that FMS puts into an allowance account, which is used by FMS to report on the status of the Judgment Fund in its financial statement. 
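FMS does not publish its actual allowance factors, but the computation it describes (an age- and payment-history-based percentage applied to outstanding receivables) can be sketched as follows; the age brackets and factor values here are hypothetical illustrations:

```python
# Sketch of an allowance-for-uncollectibles computation of the kind FMS
# describes. The age brackets and factor values are hypothetical; FMS's
# actual allowance factors are not published in the report.

def allowance_factor(age_in_days: int, has_repaid_before: bool) -> float:
    """Older receivables get a larger factor; a repayment history halves it."""
    if age_in_days > 730:
        factor = 0.80
    elif age_in_days > 365:
        factor = 0.50
    else:
        factor = 0.20
    return factor * (0.5 if has_repaid_before else 1.0)

def allowance_amount(receivables) -> float:
    """Apply each receivable's factor to its balance and total the results."""
    return sum(allowance_factor(age, repaid) * balance
               for balance, age, repaid in receivables)

# (outstanding balance, age in days, agency has repaid before)
receivables = [(100_000, 400, True), (250_000, 800, False)]
print(f"allowance account balance: ${allowance_amount(receivables):,.0f}")
# → allowance account balance: $225,000
```

The resulting dollar amount is what would be carried in the allowance account for financial-statement reporting, while each receivable itself stays on the books.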
According to FMS, although it records the debt in the allowance account as an uncollectible loss, the debt is not written off. FMS expects each agency to record the amount of unreimbursed debt as a liability, which will remain until the agency repays Treasury or Congress provides write-off authority. For CDA reimbursements, FMS sends a letter to the head of the agency contracting unit or budget officer seeking reimbursement for payments made either the same day or the day after payment is made from the fund. If the agency fails to contact FMS within 30 business days of this letter, a follow-up letter is sent to the agency. If the agency fails to respond within 60 business days of the initial contact letter, FMS sends a letter to the agency’s Chief Financial Officer (CFO). The agency CFO has 30 business days to contact FMS. For No FEAR Act reimbursements, as provided under the OPM regulations, FMS provides notice to the agency’s CFO within 15 business days after payment for the No FEAR Act claim from the Judgment Fund. It further requires an agency to either reimburse the Judgment Fund or work out a payment arrangement with FMS within 45 business days of being notified by FMS. Under OPM’s No FEAR Act regulations, FMS is required to annually post on Treasury’s public Web site those agencies that either fail to make reimbursements or fail to contact FMS within 45 business days of notice to make arrangements in writing for reimbursement. There is no similar posting requirement for CDA reimbursements, and FMS said it has no plans to post CDA reimbursement information on Treasury’s public Web site. Reimbursement rates for CDA payments were low for the 3 years we examined and, despite promises of repayment, at least 18 agencies had not repaid amounts owed to the fund by the end of each of these years. 
According to Treasury, while its No FEAR Act collection efforts are just beginning, reimbursement rates under the act may be as low as under CDA because the No FEAR Act, like CDA, does not impose reimbursement deadlines on agencies, and Treasury has very little authority to enforce reimbursement. The Judgment Fund was reimbursed for fewer than one of every five dollars agencies owed for each of the 3 fiscal years (see table 4). Further, the total unpaid amounts to the Judgment Fund increased as of each fiscal year end. The total amount and percentage collected were at their highest in fiscal year 2001 and lowest in fiscal year 2002. While the total amount and percentage collected increased in fiscal year 2003, they remained lower than in fiscal year 2001. Our review of a sample of agencies’ correspondence in response to the Judgment Fund branch’s requests for CDA reimbursement showed that agencies most often deferred payment because of the adverse effect they said it would have on their programs and mission-critical activities. The agencies promised to continue to seek opportunities to provide repayment through the budget and appropriation process. Neither CDA nor the No FEAR Act sets deadlines for reimbursement. We have acknowledged that agencies are allowed to exercise reasonable discretion in determining the timing of CDA reimbursements so as not to cause the disruption of ongoing programs or activities. Similar flexibility exists under the No FEAR Act. While the No FEAR Act states that “agencies are expected to reimburse the [Judgment Fund] within a reasonable time,” the statute also states that an agency may need to extend reimbursement over several years to avoid reductions in force, furloughs, other reductions in compensation or benefits for the agency workforce, or an adverse effect on the mission of the agency. 
Recognizing that agencies are often confronted with practicalities of this sort, we have suggested that while an agency may not be in a position to make CDA reimbursements during the year in which the fund made payment, we would expect the agency to manage its budgetary resources to accommodate reimbursement of the fund before the beginning of the second fiscal year following the fiscal year in which the award is paid. According to FMS, the lack of a reimbursement deadline under CDA and the No FEAR Act may be one reason that reimbursement rates under the No FEAR Act may be as low as they have been under CDA. Another key reason that FMS officials cite for this possibility is that Treasury has very little authority to enforce reimbursement. Like CDA, the No FEAR Act provides no sanctions that would compel agencies to reimburse the Treasury, and no Treasury authority to take money owed directly from the agency. FMS officials recognize that the requirement for FMS to annually post the names of agencies that fail to make No FEAR Act reimbursements or make arrangements for reimbursement may provide an incentive for agencies to comply with the regulations. Because posting has yet to begin, it remains to be seen what impact this requirement will have. On March 18, 2004, we provided a draft of this report to Treasury for review and comment. Treasury officials had no official comment on this report, but provided technical and clarifying comments, which we have incorporated as appropriate. We will send copies to Representative James F. Sensenbrenner, Representative John Conyers, other interested congressional committees, the Secretary of the Treasury, and the Commissioner, Financial Management Service. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. 
If you or your staff have questions about this report, please call me at (202) 512-6806 or Belva Martin, Assistant Director, at (202) 512-4285. Key contributors to this report are listed in appendix II. To address our objectives, we reviewed relevant laws, procedures, and guidelines, and interviewed officials in FMS, its Judgment Fund branch, and FMS’s Office of the Chief Counsel. Judgment Fund officials provided us with the number and amount of CDA and discrimination claims paid from the Judgment Fund from fiscal years 2001 through 2003. Since FMS does not track the cost of processing Judgment Fund claim payments, agency officials could provide us only with estimates of the costs for processing payments and reimbursements for CDA and discrimination payments and the estimated increase in costs for fiscal year 2004 for processing discrimination and any other No FEAR Act claim payments. The Judgment Fund's cost estimates do not include costs for processing payments of whistleblower protection claims because the fund generally does not pay these claims. To arrive at their estimate of the personnel costs involved, Judgment Fund officials used the percentage of staff time spent processing CDA and discrimination payments. To determine the extent of federal agencies’ compliance with CDA’s reimbursement requirement, we obtained data through FMS from Treasury’s central accounting system on the amount of money sought and received from agencies in fiscal years 2001, 2002, and 2003. We interviewed Judgment Fund and FMS officials to obtain their views of how effective the reimbursement collection efforts allowed under the No FEAR Act may be. To assess the reliability of the data from Treasury’s financial system, we reviewed available supporting documentation and interviewed Judgment Fund officials and the FMS accountant. 
In addition, we tested the reasonableness of the fiscal year 2003 estimated personnel costs of processing CDA and discrimination claims by calculating the percentage of personnel costs in the fund’s total fiscal year 2003 estimate and comparing this to the percentage of CDA and discrimination claims in fiscal year 2003 to determine if they were disproportionately large when compared to the total number of claims processed. On the basis of our test of the reasonableness of the personnel cost estimates provided by FMS and our assessment of the reliability of the data generated by the accounting system used by FMS and the Judgment Fund branch database, we determined that the data for fiscal years 2001 through 2003 were sufficiently reliable for the purposes of our report. In addition to the person named above, Karin Fangman, Amy Friedlander, Domingo Nieves, and Michael Rose made key contributions to this report. 
The Notification and Federal Employee Antidiscrimination and Retaliation (No FEAR) Act, which took effect October 1, 2003, requires agencies to repay discrimination settlements and judgments paid on their behalf. The No FEAR Act is similar to the Contract Disputes Act (CDA) of 1978, which holds agencies accountable for payment in contract disputes. Under both laws, federal agencies must reimburse the Judgment Fund, which is administered by the Treasury Department. Before the No FEAR Act, agencies did not have to repay the fund. The No FEAR Act requires GAO to review the financial impact on Treasury of administering that law and CDA. Based on this requirement, this report provides information on (1) Treasury's estimates of its costs to process discrimination claim payments and CDA payments in fiscal year 2003 and its costs to process and seek reimbursement for claim payments under lawsuits covered by the No FEAR Act beginning in fiscal year 2004, (2) differences in claims processing and reimbursement efforts under CDA and the No FEAR Act, and (3) the extent of federal agency compliance with CDA's reimbursement requirements and Treasury's view of how effective its No FEAR Act collection efforts may be. We make no recommendations in this report. Treasury officials had no official comment on the report. Treasury estimates that it cost about $334,000 to certify, pay, and seek reimbursement for CDA claim payments in fiscal year 2003, and about $240,000 to certify and pay discrimination claims that year. For fiscal year 2004, assuming relatively constant case and processing cost levels, and agency compliance with reimbursement requirements similar to that experienced under CDA, Treasury estimates that it will incur about $171,500 in personnel costs in order to seek reimbursements for No FEAR claim payments. 
These include recurring costs to set up and administer accounts receivable and seek reimbursement from agencies for claims paid out of the Judgment Fund and a one-time cost for in-house personnel to upgrade computer systems. Although the certification, payment, and accounting processes that Treasury uses for the No FEAR Act are virtually the same as those used for CDA, the procedures Treasury is required to use to seek reimbursement for claims paid under the No FEAR Act will differ. For example, as part of Treasury's effort to seek reimbursement for No FEAR Act claims paid, No FEAR Act regulations require Treasury to record on its public Web site the failure of agencies to make reimbursement or arrange to make reimbursement within a specified time limit. There is no similar requirement for claims paid under CDA. During fiscal years 2001, 2002, and 2003, federal agencies reimbursed Treasury for fewer than one of every five dollars owed under CDA, with at least 18 agencies having unpaid amounts at the end of each fiscal year. According to Treasury, while its No FEAR Act collection efforts are just beginning, reimbursement rates under the act may be as low as under CDA because the No FEAR Act, like CDA, does not impose reimbursement deadlines on agencies, and Treasury has very little authority to enforce reimbursement.
Under the existing, or “legacy” system, the military’s disability evaluation process begins at a military treatment facility when a physician identifies a condition that may interfere with a servicemember’s ability to perform his or her duties. On the basis of medical examinations and the servicemember’s medical records, a medical evaluation board (MEB) identifies and documents any conditions that may limit a servicemember’s ability to serve in the military. The servicemember’s case is then evaluated by a physical evaluation board (PEB) to make a determination of fitness or unfitness for duty. If the servicemember is found to be unfit due to medical conditions incurred in the line of duty, the PEB assigns the servicemember a combined percentage rating for those unfit conditions, and the servicemember is discharged from duty. Depending on the overall disability rating and number of years of active duty or equivalent service, the servicemember found unfit with compensable conditions is entitled to either monthly disability retirement benefits or lump sum disability severance pay. In addition to receiving disability benefits from DOD, veterans with service-connected disabilities may receive compensation from VA for lost earnings capacity. VA’s disability compensation claims process starts when a veteran submits a claim listing the medical conditions that he or she believes are service-connected. In contrast to DOD’s disability evaluation system, which evaluates only medical conditions affecting servicemembers’ fitness for duty, VA evaluates all medical conditions claimed by the veteran, whether or not they were previously evaluated in DOD’s disability evaluation process. For each claimed condition, VA must determine if there is credible evidence to support the veteran’s contention of a service connection. Such evidence may include the veteran’s military service records and treatment records from VA medical facilities and private medical service providers. 
Also, if necessary for reaching a decision on a claim, VA arranges for the veteran to receive a medical examination. Medical examiners are clinicians (including physicians, nurse practitioners, or physician assistants) certified to perform the exams under VA’s Compensation and Pension program. Once a claim has all of the necessary evidence, a VA rating specialist evaluates the claim and determines whether the claimant is eligible for benefits. If so, the rating specialist assigns a percentage rating. If VA finds that a veteran has one or more service-connected disabilities with a combined rating of at least 10 percent, the agency will pay monthly compensation. In November 2007, DOD and VA began piloting the IDES, a joint disability evaluation system, to eliminate duplication in their separate systems and expedite receipt of VA benefits for wounded, ill, and injured servicemembers. The IDES merges DOD and VA processes, so that servicemembers begin their VA disability claim while they undergo their DOD disability evaluation, rather than sequentially, making it possible for them to receive VA disability benefits shortly after leaving military service (see fig. 1). Specifically, the IDES merges DOD and VA’s separate exam processes into a single exam process conducted to VA standards. This single exam (which may involve more than one medical examination, for example, by different specialists), in conjunction with the servicemembers’ medical records, is used by military service PEBs to make a determination of servicemembers’ fitness for continued military service, and by VA as evidence of service-connected disabilities. The exam may be performed by medical staff working for VA, DOD, or a private provider contracted with either agency. The IDES also consolidates DOD and VA’s separate rating phases into one VA rating phase. 
If the PEB has determined that a servicemember is unfit for duty, VA rating specialists prepare two ratings—one for the conditions that DOD determined made a servicemember unfit for duty, which DOD uses to provide military disability benefits, and the other for all service-connected disabilities, which VA uses to determine VA disability benefits. In addition, the IDES provides VA case managers to perform outreach and nonclinical case management and to explain VA results and processes to servicemembers. In August 2010, DOD and VA officials issued an interim report to Congress summarizing the results of their evaluation of the IDES pilot as of early 2010. In that report, the agencies concluded that, as of February 2010, servicemembers who went through the IDES pilot were more satisfied than those who went through the legacy system, and that the IDES process met the agencies’ goals of delivering VA benefits to active duty servicemembers within 295 days and to reserve component servicemembers within 305 days. Furthermore, they concluded that the IDES pilot has achieved a faster processing time than the legacy system, which they estimated to be 540 days. While our review of DOD and VA’s data and reports generally confirms the agencies’ findings as of early 2010, we also found that not all of the service branches were achieving the same results, case processing times have increased since February, and other agency goals have not been met. Servicemember satisfaction: Our reviews of the survey data indicate that, on average, servicemembers in the IDES pilot have had higher satisfaction levels than those who went through the legacy process. However, Air Force members—who represented a small proportion (7 percent) of pilot cases—were less satisfied. We reviewed the agencies’ survey methodology and generally found their survey design and conclusions to be sound. 
Average case processing times: The agencies have been meeting their 295- day and 305-day timeliness goals for much of the past 2 years, but the average case processing time for active duty servicemembers has steadily increased from 274 days in February 2010 to 296 days, as of August 2010. While still an improvement over the 540-day estimate for the legacy system, the agencies missed their timeliness goal by 1 day. Among the military service branches, only the Army—which comprised about 60 percent of cases that had completed the pilot process—met the agencies’ timeliness goals in August, while average processing times for each of the other services exceeded 330 days. Across all military service branches, processing times for individual pilot sites have generally increased as their caseloads have increased. We reviewed the reliability of the case data upon which the agencies based their analyses and generally found these data to be sufficiently reliable for purposes of these analyses. Goals to process 80 percent of cases in targeted time frames: DOD and VA had indicated in their planning documents that they had goals to deliver VA benefits to 80 percent of servicemembers within the 295-day and 305-day targets. As of February 2010, these goals were not met. For both active duty and reserve cases, about 60 percent (rather than 80 percent) of cases were meeting the targeted time frames. By service branch, the Army had the highest rate of active duty cases (66 percent) meeting the goal, and the Air Force had the lowest (42 percent). Although DOD and VA’s evaluation results indicate promise for the IDES, the extent to which the IDES is an improvement over the legacy system cannot be known because of limitations in the legacy data. DOD and VA’s estimate of 540 days for the legacy system was based on a small, nonrepresentative sample of cases. 
DOD officials told us that they planned to use a broader sample of legacy cases to compare against pilot cases with respect to processing times and appeal rates. However, significant gaps in the legacy case data precluded such comparisons. Specifically, DOD compiled the legacy case data from each of the military services and the VA, but the military services did not track the same information. In addition, VA was not able to provide data on the date VA benefits were delivered for legacy cases, which are needed to determine the full processing time from referral to final delivery of VA benefits. Limited comparisons of pilot and legacy timeliness are possible with Army data, which appear to be reliable for some key processing dates. Our analysis of Army legacy data suggests that active duty cases took on average 369 days to complete the DOD legacy process and reach the VA rating phase—which does not include time to complete the VA rating and deliver the VA benefits to servicemembers. In comparison, it took on average 266 days to deliver VA benefits to soldiers in the pilot, according to the agencies’ August data. However, Army comparisons cannot be generalized to the other services. As DOD and VA tested the IDES at different facilities and added cases to the pilot, they encountered several challenges that led to delays in certain phases of the process. Staffing: Most significantly, officials at most of the 10 sites we visited reported experiencing staffing shortages and related delays, in part because workloads exceeded the agencies’ initial estimates. The IDES involves several different types of staff across several different DOD and VA offices, some of which have specific caseload ratios set by the agencies, and we learned about insufficient staff in many key positions. With regard to VA positions, officials cited shortages in examiners for the single exam, rating staff, and case managers. 
With regard to DOD positions, officials cited shortages of physicians who serve on the MEBs, PEB adjudicators, and DOD case managers. In addition to shortages cited at pilot sites, DOD data indicate that 19 of the 27 pilot sites did not meet DOD’s caseload target of 30 cases per manager. Local DOD and VA officials attributed staffing shortages to higher than anticipated caseloads and difficulty finding qualified staff, particularly physicians, in rural areas. These staffing shortages contributed to delays in the IDES process. Two of the sites we visited—Fort Carson and Fort Stewart—were particularly challenged to provide staff in response to surges in caseload, which occurred when Army units were preparing to deploy to combat zones. Through the Army’s predeployment medical assessment process, large numbers of servicemembers were determined to be unable to deploy due to a medical condition and were referred to the IDES within a short period of time, overwhelming the staff. These two sites were unable to quickly increase staffing levels, particularly of examiners. As a result, at Fort Carson, it took 140 days on average to complete the single exam for active duty servicemembers, as of August 2010, far exceeding the agencies’ goal to complete the exams in 45 days. Exam summaries: Issues related to the completeness and clarity of single exam summaries were an additional cause of delays in the VA rating phase of the IDES process. Officials from VA rating offices said that some exam summaries did not contain information necessary to determine a rating. As a result, VA rating office staff must ask the examiner to clarify these summaries and, in some cases, redo the exam. VA officials attributed the problems with exam summaries to several factors, including the complexity of IDES pilot cases, the volume of exams, and examiners not receiving records of servicemembers’ medical history in time. 
The extent to which insufficient exam summaries caused delays in the IDES process is unknown because DOD and VA’s case tracking system for the IDES does not track whether an exam summary has to be returned to the examiner or whether it has been resolved. Medical diagnoses: While the single exam in the IDES eliminates duplicative exams performed by DOD and VA in the legacy system, it raises the potential for disagreements about diagnoses of servicemembers’ conditions. For example, officials at Army pilot sites informed us about cases in which a DOD physician had treated members for mental disorders, such as major depression. However, when the members went to see the VA examiners for their single exam, the examiners diagnosed them with posttraumatic stress disorder (PTSD). Officials told us that attempting to resolve such differences added time to the process and sometimes led to disagreements between DOD’s PEBs and VA’s rating offices about what the rating should be for purposes of determining DOD disability benefits. Although the Army developed guidance to help resolve diagnostic differences, other services have not. Moreover, PEB officials we spoke with noted that there is no guidance on how disagreements about servicemembers’ ratings between DOD and VA should be resolved beyond the PEBs informally requesting that the VA rating office reconsider the case. While DOD and VA officials cited several potential causes for diagnostic disagreements, the number of cases with disagreements about diagnoses and the extent to which they have increased processing time are unknown because the agencies’ case tracking system does not track when a case has had such disagreements. Logistical challenges integrating VA staff at military treatment facilities: DOD and VA officials at some pilot sites we visited said that they experienced logistical challenges integrating VA staff at the military facilities. 
At a few sites, it took time for VA staff to receive common access cards needed to access the military facilities and to use the facilities’ computer systems, and for VA physicians to be credentialed. DOD and VA staff also noted several difficulties using the agencies’ multiple information technology (IT) systems to process cases, including redundant data entry and a lack of integration between systems. Housing and other challenges posed by extended time in the military disability evaluation process: Although many DOD and VA officials we interviewed at central offices and pilot sites felt that the IDES process expedited the delivery of VA benefits to servicemembers, several also indicated that it may increase the amount of time servicemembers are in the military’s disability evaluation process. Therefore, some DOD officials noted that servicemembers must be cared for, managed, and housed for a longer period. The military services may move some servicemembers to temporary medical units or to special medical units such as Warrior Transition Units in the Army or Wounded Warrior Regiments in the Marine Corps, but at a few pilot sites we visited, these units were either full or members in the IDES did not meet their admission criteria. Where servicemembers remain with their units while going through the IDES, the units cannot replace them with able-bodied members. In addition, officials at two sites said that members are not gainfully employed by their units and, left idle, are more likely to be discharged due to misconduct and forfeit their disability benefits. However, DOD officials also noted that servicemembers benefit from continuing to receive their salaries and benefits while their case undergoes scrutiny by two agencies, though some also acknowledged that these additional salaries and benefits create costs for DOD. 
DOD and VA plan to expand the IDES to military facilities worldwide on an ambitious timetable—to 113 sites during fiscal year 2011, a pace of about 1 site every 3 days. Expansion is scheduled to occur in four stages, beginning with 28 sites in the southeastern and western United States by the end of December 2010. In preparing for IDES expansion military-wide, DOD and VA have many efforts under way to address challenges experienced to date, though their efforts have yet to be implemented or tested. For example, the agencies have completed a significant revision of their site assessment matrix—a checklist used by local DOD and VA officials to ascertain their readiness to begin the pilot—to address areas where prior IDES sites had experienced challenges. In addition, local senior-level DOD and VA officials will be expected to sign the site assessment matrix to certify that a site is ready for IDES implementation. This differs from the pilot phase where, according to DOD and VA officials, some sites implemented the IDES without having been fully prepared. Through the new site assessment matrix and other initiatives, DOD and VA are addressing several of the challenges identified in the pilot phase. Ensuring sufficient staff: With regard to VA staff, VA plans to increase the number of examiners by awarding a new contract through which sites can acquire additional examiners. To increase rating staff, VA has filled vacant rating specialist positions and anticipates hiring a small number of additional staff. With regard to DOD staff, Air Force and Navy officials told us they have added adjudicators for their PEBs or are planning to do so. Both DOD and VA indicated they plan to increase their numbers of case managers. Meanwhile, sites are being asked in the assessment matrix to provide longer and more detailed histories of their caseloads, as opposed to the 1-year history on which DOD and VA had based their staffing decisions during the pilot phase. 
The matrix also asks sites to anticipate any surges in caseloads and to provide a written contingency plan for dealing with them. Ensuring the sufficiency of single exams: VA has begun the process of revising its exam templates to better ensure that examiners include the information needed for a VA disability rating decision and to enable them to complete their exam reports in less time. VA is also examining whether it can add capabilities to the IDES case tracking system that would enable staff to identify where problems with exams have occurred and track the progress of their resolution. Ensuring adequate logistics at IDES sites: The site assessment matrix asks sites whether they have the logistical arrangements needed to implement the IDES. In terms of information technology, DOD and VA are developing a general memorandum of agreement intended to enable DOD and VA staff access to each other’s IT systems. DOD officials also said that they are developing two new IT solutions—one currently being tested is intended to help military treatment facilities better manage their cases, while another still at a preliminary stage of development would reduce multiple data entry. However, in some areas, DOD and VA’s efforts to prepare for IDES expansion do not fully address some challenges or are not yet complete. Ensuring sufficient DOD MEB physician staffing: DOD does not yet have strategies or plans to address potential shortages of physicians to serve on MEBs. For example, the site assessment matrix does not include a question about the sufficiency of military providers to handle expected numbers of MEB cases at the site, or ask sites to identify strategies for ensuring sufficient MEB physicians if there is a caseload surge or staff turnover. 
Ensuring sufficient housing and organizational oversight for IDES participants: Although the site assessment matrix asks sites whether they will have sufficient temporary housing available for servicemembers going through the IDES, the matrix requires only a yes or no response and does not ensure that sites will have conducted a thorough review of their housing capacity. In addition, the site assessment matrix does not address plans for ensuring that IDES participants are gainfully employed or sufficiently supported by their organizational units. Addressing differences in diagnoses: According to agency officials, DOD is currently developing guidance on how staff should address differences in diagnoses. However, since the new guidance and procedures are still being developed, we cannot determine whether they will aid in resolving discrepancies or disagreements. Significantly, DOD and VA do not have a mechanism for tracking when and where disagreements about diagnoses and ratings occur and, consequently, may not be able to determine whether the guidance sufficiently addresses the discrepancies. As DOD and VA move to implement the IDES worldwide, they have some mechanisms in place to monitor challenges that may arise in the IDES, such as regular reporting of data on caseloads, processing times, and servicemember satisfaction, and preparation of an annual report on challenges in the IDES. However, DOD and VA do not have a system-wide monitoring mechanism to help ensure that the steps they have taken to address challenges are sufficient and to identify problems on a more timely basis. For example, they do not collect data centrally on staffing levels at each site relative to caseload. As a result, DOD and VA may be delayed in taking corrective action, since it takes time to assess what types of staff are needed at a site and to hire or reassign staff. 
DOD and VA also lack mechanisms or forums for systematically sharing information on challenges and best practices between and among sites. For example, DOD and VA have not established a process for local sites to systematically report challenges to DOD and VA management and for lessons learned to be systematically shared system-wide. During the pilot phase, VA surveyed pilot sites on a monthly basis about challenges they faced in completing single exams. Such a practice has the potential to provide useful feedback if extended to other IDES challenges. By merging two duplicative disability evaluation systems, the IDES shows promise for expediting the delivery of VA benefits to servicemembers leaving the military due to a disability. However, piloting of the system has revealed several significant challenges that require careful management attention and oversight. DOD and VA are currently taking steps to address many of these challenges. However, given the agencies’ ambitious implementation schedule—more than 100 sites in a year—it is unclear whether these steps will be completed before DOD and VA deploy the IDES to additional military facilities. Ultimately, the success or failure of the IDES will depend on DOD and VA’s ability to sufficiently staff the various offices involved in the IDES and to resolve challenges not only at the initiation of the transition to IDES, but also on an ongoing, long-term basis. Because they do not have a mechanism for routinely monitoring staffing and other risk factors, DOD and VA may not be able to know whether their efforts to address these factors are sufficient or to identify new problems as they emerge, so that they may take immediate steps to address them before they become major problems. We have draft recommendations aimed at helping DOD and VA further address challenges that surfaced during the pilot, which we plan to finalize in our forthcoming report after fully considering agency comments. Mr. 
Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee may have at this time. For further information about this testimony, please contact Daniel Bertoni at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. In addition to the individual named above, key contributors to this testimony include Michele Grgich, Yunsian Tai, Jeremy Conley, and Greg Whitney. Key advisors include Bonnie Anderson, Rebecca Beale, Mark Bird, Brenda Farrell, Valerie Melvin, Patricia Owens, Roger Thomas, Walter Vance, and Randall Williamson. Veterans’ Disability Benefits: Further Evaluation of Ongoing Initiatives Could Help Identify Effective Approaches for Improving Claims Processing. GAO-10-213. Washington, D.C.: January 29, 2010. Recovering Servicemembers: DOD and VA Have Jointly Developed the Majority of Required Policies but Challenges Remain. GAO-09-728. Washington, D.C.: July 8, 2009. Recovering Servicemembers: DOD and VA Have Made Progress to Jointly Develop Required Policies but Additional Challenges Remain. GAO-09-540T. Washington, D.C.: April 29, 2009. Military Disability System: Increased Supports for Servicemembers and Better Pilot Planning Could Improve the Disability Evaluation Process. GAO-08-1137. Washington, D.C.: September 24, 2008. DOD and VA: Preliminary Observations on Efforts to Improve Care Management and Disability Evaluations for Servicemembers. GAO-08-514T. Washington, D.C.: February 27, 2008. DOD and VA: Preliminary Observations on Efforts to Improve Health Care and Disability Evaluations for Returning Servicemembers. GAO-07-1256T. Washington, D.C.: September 26, 2007. Military Disability System: Improved Oversight Needed to Ensure Consistent and Timely Outcomes for Reserve and Active Duty Service Members. GAO-06-362. Washington, D.C.: March 31, 2006. 
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since 2007, the Departments of Defense (DOD) and Veterans Affairs (VA) have been pilot testing a new disability evaluation system designed to integrate their separate processes and thereby expedite veterans' benefits for wounded, ill, and injured servicemembers. Having piloted the integrated disability evaluation system (IDES) at 27 military facilities, they are now planning for its expansion military-wide. This testimony is based on GAO's ongoing review of the IDES pilot and draft report, which is currently with DOD and VA for agency comment. GAO conducted this review pursuant to the National Defense Authorization Act for Fiscal Year 2008. This review specifically examined: (1) the results of the agencies' evaluation of the IDES pilot, (2) challenges in implementing the IDES pilot to date, and (3) whether the agencies' plans to expand the IDES adequately address potential future challenges. To address these questions, GAO analyzed data from DOD and VA, conducted site visits at 10 military facilities, and interviewed DOD and VA officials. In their evaluation of the IDES pilot, DOD and VA concluded that, as of February 2010, the pilot had (1) improved servicemember satisfaction relative to the existing "legacy" system and (2) met their established goal of delivering VA benefits to active duty and reserve component servicemembers within 295 and 305 days, respectively, on average. While these results are promising, average case processing times have since steadily increased--for example, for active duty servicemembers, the average has increased from 274 days in February 2010 to 296 days in August 2010. At 296 days, processing time for the IDES is still an improvement over the 540 days that DOD and VA estimated the legacy process takes to deliver VA benefits to servicemembers. 
However, the full extent of improvement of the IDES over the legacy system is unknown because (1) the 540-day estimate was based on a small, nonrepresentative sample of cases and (2) limitations in legacy case data prevent a comprehensive comparison of processing times, as well as appeal rates. In piloting the IDES, DOD and VA have encountered several implementation challenges that have contributed to delays in the process. The most significant challenge was insufficient staffing by DOD and VA. Staffing shortages and process delays were particularly severe at two pilot sites GAO visited where the agencies did not anticipate caseload surges. For example, at one of these sites, due to a lack of medical examiners, it took 140 days on average to complete one of the key features of the pilot--the single exam--compared with the agencies' goal to complete this step of the process in 45 days. The single exam posed other challenges that contributed to process delays, such as disagreements between DOD and VA medical staff about diagnoses for servicemembers' medical conditions. Cases involving such disagreements often required further attention, adding time to the process. Pilot sites also experienced logistical challenges, such as incorporating VA staff at military facilities and housing and managing personnel going through the process. As DOD and VA move forward with plans to expand the IDES worldwide, they have taken steps to address a number of these challenges; however, these mitigation efforts have yet to be tested, and not all challenges have been addressed. For example, to address staffing shortages and ensure timely processing, VA is developing a contract for additional medical examiners, and DOD and VA are requiring local staff to develop written contingency plans for handling surges in caseloads. On the other hand, the agencies have not yet developed strategies for ensuring sufficient military physicians to handle anticipated workloads. 
Significantly, DOD and VA do not have a comprehensive monitoring plan for identifying problems as they occur--such as staffing shortages and disagreements about diagnoses--in order to take remedial actions as early as possible. GAO has draft recommendations aimed at helping DOD and VA, as they move forward with IDES expansion plans, to further address challenges surfaced during the pilot, which GAO plans to finalize in the forthcoming report after fully considering agency comments.
Under PPACA, health-insurance marketplaces were intended to provide a single point of access for individuals to enroll in participating private health plans, apply for income-based subsidies to offset the cost of these plans, and, as applicable, obtain an eligibility determination or assessment of eligibility for other health-coverage programs, such as Medicaid or the Children’s Health Insurance Program. To be eligible to enroll in a “qualified health plan” offered through a marketplace—that is, one providing essential health benefits and meeting other requirements under PPACA—an individual must be a U.S. citizen or national, or otherwise lawfully present in the United States; reside in the marketplace service area; and not be incarcerated (unless incarcerated while awaiting disposition of charges). To be eligible for Medicaid, individuals must meet federal requirements regarding residency, U.S. citizenship or immigration status, and income limits, as well as any additional state-specific criteria that may apply. Under the Medicaid expansion, states may choose to provide Medicaid coverage to nonelderly adults who meet income limits and other criteria. Under PPACA, the federal government is to fully reimburse states through fiscal year 2016 for the Medicaid expenditures of “newly eligible” individuals who gained Medicaid eligibility through the expansion. According to the CMS Office of the Actuary, federal expenditures for the Medicaid expansion are estimated at $430 billion from 2014 through 2023. PPACA requires marketplaces to verify application information to determine eligibility for enrollment and, if applicable, determine eligibility for the income-based subsidies or Medicaid. These verification steps include validating an applicant’s Social Security number, if one is provided; verifying citizenship, status as a U.S. 
national, or lawful presence by comparison with Social Security Administration (SSA) or Department of Homeland Security records; and verifying household income by comparison with tax-return data from the Internal Revenue Service (IRS), data on Social Security benefits from SSA, and other available current income sources. In particular, PPACA requires that consumer-submitted information be verified, and that determinations of eligibility be made, through either an electronic verification system or another method approved by HHS. To implement this verification process, CMS developed the data services hub (data hub), which acts as a portal for exchanging information between the federal Marketplace, state-based marketplaces, and Medicaid agencies, among other entities, and CMS’s external partners, including other federal agencies. The Marketplace uses the data hub in an attempt to verify that applicant information necessary to support an eligibility determination is consistent with external data sources. For qualifying applicants, the act provides two possible forms of subsidies for consumers enrolling in individual health plans, both of which are generally paid directly to insurers on consumers’ behalf. One is a federal income-tax credit, which enrollees may elect to receive in advance, and which reduces a consumer’s monthly premium payment. When taken in advance, this benefit is known as the advance premium tax credit (APTC). The other, known as cost-sharing reduction (CSR), is a discount that lowers the amount consumers pay for out-of-pocket charges for deductibles, coinsurance, and copayments. Under PPACA, an applicant’s filing of a federal income-tax return, including a required additional form, is a significant eligibility requirement for continued receipt of federal subsidies. When applicants apply for coverage, they report family size and the amount of projected income. 
On the basis, in part, of that information, the Marketplace will calculate the maximum allowable amount of APTC. An applicant can then decide whether he or she wants all, some, or none of the estimated credit paid in advance, in the form of payment to the applicant’s insurer that reduces the applicant’s monthly premium payment. If an applicant chooses to have all or some of his or her credit paid in advance, the applicant is required to “reconcile” on his or her federal income-tax return the amount of advance payments the government sent to the applicant’s insurer on the applicant’s behalf with the tax credit for which the applicant qualifies based on actual reported income and family size. Reconciliation is accomplished using IRS Form 8962, Premium Tax Credit (PTC). To receive advance payment of the tax credit at time of application, applicants must attest they will file a tax return for the year for which they receive APTC. CMS announced that, beginning with the open-enrollment period for 2016 coverage, APTC and CSR subsidies will be discontinued for 2016 coverage for enrollees who received APTC in 2014, but did not comply with the requirement to file a federal income-tax return and reconcile receipt of their APTC subsidy. Under PPACA’s application process, “inconsistencies” are generated when individual applicant information does not match information from federal data sources—either because information an applicant provided does not match information contained in data sources that a marketplace uses for eligibility verification at the time of application, or because such information is not available. If there is an application inconsistency, the marketplace is to determine eligibility using the applicant’s attestations and ensure that subsidies are provided on behalf of the applicant, if qualified to receive them, while the inconsistency is being resolved. 
Under the marketplace process, applicants will be asked to provide additional information or documentation for the marketplaces to review to resolve the inconsistency. Our undercover testing for the 2016 coverage year found that the eligibility determination and enrollment processes of the federal and state marketplaces we reviewed remain vulnerable to fraud, as we previously reported for the 2014 and 2015 coverage years. For each of our 15 fictitious applications, the marketplaces approved coverage, including for 6 fictitious applicants who had previously obtained subsidized coverage but did not file the required federal income-tax returns. Although IRS provides information to marketplaces on whether health-care applicants have filed required returns, the federal Marketplace and our selected state marketplace allowed applicants to instead attest that they had filed returns, saying the IRS information was not sufficiently current. The marketplaces we reviewed also relaxed documentation standards or extended deadlines for filing required documentation. After initial approval, all but one of our fictitious enrollees maintained subsidized coverage, even though we sent fictitious documents, or no documents, to resolve application inconsistencies. Marketplace officials told us that without specific identities of our fictitious applicants—which we declined to provide, to protect the identities—they could not comment on individual outcomes. In general, however, they told us our results indicate their marketplace processes worked as designed. For each of our 15 fictitious applications, the federal or state-based marketplaces approved coverage at time of application—specifically, 14 applications for qualified health plans, and 1 application for Medicaid. Each of the 14 applications for qualified health plans was also approved for APTC subsidies. These subsidies totaled about $5,000 on a monthly basis, or about $60,000 annually. 
These 14 qualified-health-plan applications also each obtained CSR subsidies, putting the applicants in a position to further benefit if they used medical services. However, our successful applicants did not seek medical services. These subsidies are not paid directly to enrolled consumers; instead, the federal government pays them to health-plan issuers on consumers’ behalf. For the first time in our three rounds of undercover application testing since the 2014 coverage year, we successfully cleared an online identity-checking step for one fictitious applicant. Known as “identity proofing,” the process uses personal and financial history on file with a credit-reporting agency. The marketplace generates questions that only the applicant is believed likely to know. According to CMS, the purpose of identity proofing is to prevent someone from creating an account and applying for health coverage based on someone else’s identity and without the other person’s knowledge. Although intended to counter such identity theft involving others, identity proofing also serves as an enrollment control for those applying online. For our 2014 and 2015 undercover testing, we failed to clear identity proofing in each online application we made. In this latest round of testing, we cleared identity proofing in one online application by successfully answering the identity questions presented: (1) name the county for the applicant address provided, (2) identify the high school from which the applicant graduated, and (3) identify the last four digits of a cellular phone number. Although our applicant’s identity was fictitious, the eligibility system may still have been able to generate questions based on a “likely” match, CMS officials told us. For our state marketplace applications, in four of five cases, marketplace representatives were unable to verify our applicants’ identities and, as a result, suggested that we visit enrollment counselors to present identification in person. 
As a representative said to one of our applicants, “I can’t look at your picture ID” and, “I have to be able to confirm that you are who you say you are … in case you were an impostor calling us.” We avoided such in-person visits, however, by filing paper applications (which under PPACA must be an option available to applicants). In such cases, an applicant signature is provided under penalty of perjury and threat of civil or criminal penalty. In our paper applications, we provided signatures for our fictitious identities and filed the forms. In another of the state marketplace cases, we were able to complete the application over the phone, without being asked identity-proofing questions. Our federal Marketplace applicants received no similar instructions on visiting enrollment counselors or submitting paper applications. For the 14 qualified-health-plan applications, we attempted to pay the required premiums to put policies into force, as we did in both of our previous rounds of testing. For 11 applicants, we successfully made premium payments. However, for three applicants, our initial premium payments—made to insurers we selected—were unsuccessful, and we were unable to resolve the issue. While we believed we had received confirmation of premium payments, insurers said payments were not received on a timely basis. As a result, our coverage was not put into effect in these three cases. At that point, because these cases had experienced different treatment than our other applications and no longer matched our original testing profile, we elected to discontinue them from further testing. Thus, the remainder of our discussion centers on the 12 cases for which we did not encounter payment issues—11 applications for qualified health plans, and 1 for Medicaid. As discussed in following sections, we divided the remaining 12 cases into those involving reconciliation of APTC subsidies and those involving other issues. 
Figure 1 shows a breakdown of our applications, from the original group of 15 down to the division into the tax-filing and other-issue groups. CMS announced that beginning with the open-enrollment period for 2016 coverage, APTC and CSR subsidies would be discontinued for 2016 coverage for enrollees who received APTC in 2014 but did not comply with the requirement to file a 2014 federal income-tax return and reconcile APTC received. Figure 2 illustrates how the reconciliation process is designed to work, and how failing to reconcile is to affect the ability to retain subsidized coverage. As discussed later, IRS tax-return processing time, and taxpayer-requested extension of the filing deadline, can affect the timeliness of tax-filing data that IRS reports to marketplaces. Because reconciliation is a key requirement for receipt of subsidies, and CMS announced that loss of subsidy would be enforced for the first time in 2016, we focused a number of our undercover applicant scenarios on this process. At the outset of our testing, we made 6 of our 15 applications using identities from our 2014 testing, when we obtained subsidized coverage for them. After the payment-processing issue noted earlier, four of these six identities remained in active testing. In addition to obtaining coverage, each of the four remaining fictitious applicants was also approved for APTC subsidies. For these four applicants, these subsidies totaled about $1,100 on a monthly basis, or about $13,000 annually. They also obtained CSR subsidies. Figure 3 summarizes results by scenario. In two of the four cases, Marketplace representatives asked our applicants if they had filed the requisite income-tax return, to which they replied falsely that they had done so. For one of these applicants, a federal Marketplace representative initially told us that we were not approved for subsidies, for tax-related reasons. 
However, when we provided the representative with verbal assurances that we had filed the necessary tax return, the representative dropped the matter, and we were approved for subsidized coverage. In May 2016, we received a Marketplace notice stating that if we did not file a 2014 tax return and reconcile APTC, our subsidies would end. As of August 2016, our subsidized coverage remained in force. In the other two of the four cases, our fictitious state marketplace applicants were not asked whether they received APTC subsidies in 2014 or whether they filed income-tax returns. As noted earlier, for our state marketplace applicants, we filed some applications by paper form. The state marketplace’s paper application form, however, did not ask whether we had filed a 2014 tax return, or otherwise require us to demonstrate that we had filed the return. To support the tax-reconciliation requirement, IRS began reporting to federal and state-based marketplaces, in response to 2016 queries made to the data hub, cases in which an applicant or a member of the applicant’s tax household received APTC subsidies for 2014 but had not filed a 2014 income-tax return. IRS reports subsidy-recipient tax-return filing status based on information received from marketplaces on who received APTC subsidies. It matches the marketplace-provided information against records of who has filed tax returns, to identify those reported as receiving subsidies but who did not file a return. As shown in table 1, about one-quarter of 2014 APTC—totaling about $4 billion—had not been reconciled as of December 2015, according to summary information provided by IRS. As table 1 indicates, the largest category for unreconciled APTC, both in number of filers and value of APTC, is those who filed tax returns, but did not, as part of their filing, complete the necessary form for reconciliation. 
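The matching step described above amounts to classifying each reported APTC recipient against IRS filing records: reconciled (return filed with the reconciliation form), filed without the form, or did not file. The following is a minimal illustrative sketch of that classification, not CMS or IRS code; all identifiers, amounts, and the function name are hypothetical.

```python
def categorize_recipients(aptc_recipients, filed_returns, filed_with_form):
    """Classify APTC recipients by reconciliation status.

    aptc_recipients: dict mapping taxpayer ID -> APTC dollars received
    filed_returns:   set of taxpayer IDs with a processed tax return
    filed_with_form: set of taxpayer IDs whose return included the
                     reconciliation form (Form 8962)
    """
    summary = {
        "reconciled": {"count": 0, "aptc": 0},
        "filed_without_form": {"count": 0, "aptc": 0},
        "did_not_file": {"count": 0, "aptc": 0},
    }
    for taxpayer, aptc in aptc_recipients.items():
        if taxpayer in filed_with_form:
            bucket = "reconciled"
        elif taxpayer in filed_returns:
            # Filed a return, but did not complete the reconciliation form
            bucket = "filed_without_form"
        else:
            bucket = "did_not_file"
        summary[bucket]["count"] += 1
        summary[bucket]["aptc"] += aptc
    return summary

# Hypothetical example: four recipients, three filed, two reconciled
recipients = {"A": 3000, "B": 2400, "C": 1800, "D": 4100}
print(categorize_recipients(recipients, {"A", "B", "C"}, {"A", "C"}))
```

The per-bucket APTC totals mirror the structure of the table 1 summary (counts and dollar values of unreconciled APTC by category); the real IRS process works from processed-return records rather than simple sets.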
Although IRS reports to marketplaces whether an applicant has filed a tax return, it does not make eligibility determinations for APTC on applications for coverage. Instead, it passes the filing information to marketplaces, which then make the determinations. As IRS officials noted to us, reporting that an applicant has not filed a required tax return ends IRS’s role. In the case of the federal Marketplace, CMS generally elected not to rely on the IRS data identifying 2014 subsidy recipients who failed to file income-tax returns when making federal Marketplace eligibility determinations for 2016. Instead, if IRS reported that applicants had not filed a tax return, CMS chose to offer applicants the opportunity to attest they had made the proper tax filing, to be followed by CMS postapproval checks of IRS data. CMS officials told us they chose to allow applicant attestations of tax filing, rather than rely solely on IRS failure-to-file data, for two reasons. First, there is a time lag between when tax returns are filed and when filings are reflected in information IRS provides to marketplaces. This lag reflects normal IRS processing time; additional time required to update tax-return-filing status in information provided to marketplaces; and the option for taxpayers to request a tax-filing deadline extension, to October 15, beyond the normal filing date of April 15. IRS officials told us that, assuming a return is complete, normal processing time is typically 3 to 12 weeks. They also confirmed that the IRS status updates, which occur monthly, can add additional time. Second, enrollees receiving the APTC subsidy had not previously been required to reconcile the credit as part of their taxes and were unfamiliar with the reconciliation process. CMS officials told us that in May 2016, seeking to check individuals who received APTC after attesting to filing a 2014 tax return, the agency began a postapplication approval process for tax-filing verification. 
Under this process, the officials said, the federal Marketplace would first check IRS tax-filing-requirement data for applicants who attested on their applications that they had filed a tax return; next, would notify remaining APTC recipients who have not filed of the obligation to do so; and then would conduct a final IRS check of tax-filing status for those who had received warning notices. After that, nonfilers will lose APTC and income-based CSR subsidies. According to the officials, this process will be complete, with subsidies terminated, by October 2016. If such a determination is made to end subsidies, on the schedule CMS identified, those losing financial assistance for failure to file may have received subsidized coverage for January to September, or 9 months of the 2016 coverage year, according to CMS officials. IRS officials said subsidy recipients would still be responsible for reconciling APTC provided. As noted, that could increase or decrease tax liability depending on the individual situation. According to CMS officials, the tax-filing recheck process—done following applicant attestations on tax-filing earlier—began in May 2016, 4 months after the close of 2016 open enrollment. The CMS officials told us this timing was chiefly because the federal Marketplace did not have the system capability earlier to both first determine on a large-scale basis whether applicants had made required tax filings, and then also to subsequently end subsidies for those who had not done so. The process of both comprehensively checking tax-return-filing status and also taking action against those not making the required filings is a difficult, complex task, CMS officials told us, due to coordination required with IRS and restrictions on disclosure of protected federal taxpayer information. 
Although building a system to do so has been a priority, they said, competing priorities, coupled with complexity of building a new system, meant system capability to remove APTC on a large-scale basis following a recheck process would not be available until August or September 2016 at the earliest. CMS officials said that among enrollees in Marketplace coverage with APTC subsidies who had attested to filing a 2014 tax return, there were about 19,000 applications for which IRS indicated no 2014 return had been processed at the start of the recheck process in May 2016. CMS officials said they do not have information on the value of APTC and CSR subsidies associated with this coverage. Although the recheck process began 4 months after the end of 2016 open enrollment, the Marketplace hopes to begin the tax-filing check-and-termination process earlier for the upcoming 2017 coverage year, because the system will already be in place, the officials said. But a timetable has not been set, they said. For our 2016 fictitious applicants, because they did not file a tax return and reconcile APTC subsidies they received, IRS could have reported the failure to file in response to marketplace queries if the applicant had a valid Social Security number, CMS officials told us. Since the federal Marketplace process allowed applicant attestation instead, that is likely what accounted for our fictitious applicants successfully gaining coverage despite not having filed tax returns, according to the officials. IRS officials told us they expressed concerns to CMS about CMS’s attestation approach. Allowing applicants to attest to tax filing, without making some validation attempt at the time of application, raises the possibility that improper APTC payments would be made, in the case that it is determined later that an applicant in fact did not make the required filing. The issue arises because the APTC is paid before reconciliation status is ultimately known, the officials told us. 
From IRS’s perspective, if someone has not reconciled, the person has not met the obligation necessary to continue to receive APTC, they said. IRS officials said they have had ongoing discussions with CMS about this issue, but noted that CMS has the decision-making authority in the matter. In the case of the state marketplace we tested, Covered California similarly opted for applicant attestation rather than relying on IRS tax-filing data, marketplace officials told us. State officials told us that for 2016 enrollment, the marketplace added an attestation form to its online application system for those who had received subsidized coverage for 2014 and were renewing their coverage. In November 2015, the marketplace conducted an outreach campaign, sending notices to consumers who were reported by IRS as not having filed, warning that they were at risk of losing subsidies if they did not file tax returns. Covered California followed up with a reminder notice in January 2016, the officials said. Then, in May 2016, the marketplace rechecked IRS tax-filing data, officials told us, for those reported by IRS as not having filed, plus those who had not attested earlier that they had filed tax returns. On the basis of that check, the marketplace then ended subsidized coverage for those still showing as not having filed, officials told us. Covered California also allowed an opportunity to regain subsidized coverage, however. The state marketplace sent a notice of loss of subsidy, explaining that the change was based on failure to file. But the marketplace also told those losing subsidized coverage that if they believed the determination was wrong, they could attest to having filed. If consumers made that attestation, subsidized coverage would be restored, officials told us. 
Thus, a consumer could have attested (at renewal, when submitting a new application, or while reporting a change) that he or she had filed, or would file; next, have had subsidized coverage end when IRS data did not support the attestation; but then have the subsidized coverage reestablished through another attestation. The reason for this ultimate reliance on attestation is that officials were “very mindful” that IRS data being reported to marketplaces may not be current. While relying on attestation, the marketplace does not have information on the extent to which people who attested to filing have actually done so, the officials said. The officials also could not provide information on the number of applications where the IRS nonfiling code was received, but the marketplace relied on attestation. As of May 2016, among 14,000 consumers who had not attested, 5,000 lost APTC during the redetermination process, Covered California officials told us. After subsidized coverage is restored following a postcutoff attestation, there are no further checks of IRS data for the coverage year, officials told us. This is because officials deemed a new round of checks for the 2017 coverage year, beginning in about October 2016, an opportune time to make the next check. However, the officials said the October check would not distinguish between failure to file for the first year subject to the reconciliation requirement—2014—or instead for the second year for which the requirement was in effect, 2015. Nevertheless, if a consumer has the IRS nonfiling code at that time, he or she will not be renewed with a subsidy, the officials said. In general, Covered California only takes action on the most recent tax-filing status code returned from the data hub, the officials said, which currently does not distinguish between years. 
In our California work we also identified that the state marketplace used a paper application that did not include a tax-filing query, which means that applicants filing by paper would not have to attest to tax-filing status. As a result, two of our fictitious applicants who submitted paper applications were not asked whether they filed taxes (as noted in fig. 3). Covered California officials confirmed the form has been in use since the first PPACA open-enrollment period began in 2013. The state marketplace is seeking to revise its paper application, with CMS to review the changes, Covered California officials told us. The state marketplace has limited ability to know whether applicants received subsidies in prior years, and thus are subject to the reconciliation requirement, the Covered California officials told us. Those who obtained previous coverage through Covered California can readily be identified, they said. But the state marketplace generally does not receive information on whether its applicants have ever had previous coverage elsewhere. An exception is for applicants flagged by IRS for failure to file. For non–Covered California enrollees, that flag indicates previous coverage elsewhere, the officials said. Otherwise, the state marketplace has no way to determine such previous coverage. At the outset of our testing, we made 9 of our 15 applications using new fictitious identities to test scenarios similar to those tested in our previous undercover testing—citizenship/lawful presence, Social Security identity, and duplicate enrollment in more than one state. After the payment-processing issue noted earlier, eight of these nine applications remained in active testing—seven for qualified-health-plan coverage and one for Medicaid. For all seven of the qualified-health-plan coverage cases that remained active, our fictitious applicants were approved for coverage with APTC and CSR subsidies. 
However, as discussed later, one fictitious applicant did not maintain subsidized coverage. For these seven successful applicants, we obtained APTC subsidies totaling about $2,700 on a monthly basis, or about $33,000 annually. In the eighth case, our applicant was approved for Medicaid coverage. Figure 4 summarizes our testing results by scenario. As previously noted, citizenship/lawful presence is an explicit eligibility criterion under PPACA. In the case of Social Security identity, the information our applicants submitted did not match information on file with SSA. In the case of duplicate enrollment, we used a single identity to apply for coverage in each of the three states—a situation consistent with identity theft. CMS officials told us that for the federal Marketplace there generally appeared to be reasons to explain the outcomes our fictitious applicants experienced. For example, in the case of the applicant who passed online identity proofing (described earlier), the eligibility system may still have been able to generate identity-proofing security questions even if our applicant’s identity was fictitious, the officials told us. This could be possible through use of probable or likely matching criteria, rather than exact matching of the phony applicant information, officials explained. That is, the system that seeks to identify a person and then generate corresponding security questions may have made a match based on some applicant information, rather than on a one-for-one match with information the applicant provided. The federal Marketplace uses a risk-based system for applicant identification, CMS officials told us, based on the preponderance of data available, as opposed to a single identity element. Because there is no universal source for applicant information, the risk-based approach is best, they said. Meanwhile, the identity-proofing process used in online applications is not used in the telephone application process, the officials said. 
This is due in part to resource limits, CMS officials told us, but is chiefly attributable to a policy decision that call center representatives not have access to applicants' credit histories, in order to protect personally identifiable information. Likewise, according to CMS officials, some of our applicants' treatment could likely be explained by an extension of document-submission deadlines granted by the Marketplace. CMS regulations authorize the Marketplace to extend the standard 90-day inconsistency resolution period if the applicant demonstrates a good-faith effort to obtain the required documentation during the period. In 2014, the Marketplace had statutory authority to extend for any reason the period to resolve inconsistencies unrelated to citizenship or lawful presence, as well as the good-faith-effort regulatory authority to extend the submission period for resolving any type of inconsistency. Using its authority, the Marketplace effectively waived document-submission requirements for many applicants. For 2015 and 2016, however, the statutory authority had expired, and the Marketplace took a different approach in implementing its good-faith-effort regulatory authority. CMS told us as part of our 2014 testing work that use of the good-faith-effort authority would be limited to a case-by-case basis after 2014. Under good-faith-effort extensions for 2016, documentation requirements are not waived, but applicants are provided additional time to submit documents, CMS officials said. According to CMS officials, a good-faith-effort extension can be triggered for these reasons:
1. An applicant has not received the standard reminder notices warning of an unresolved inconsistency and of the deadline for submitting documentation. In such cases, a 90-day extension is provided.
2. The Marketplace has not called the applicant to warn that he or she needs to submit documents by the deadline. A 30-day extension is provided.
3. The consumer requests an extension. A 60-day extension is provided.
Each extension based on these factors is one-time only, the CMS officials told us. Other than granting these extensions, the Marketplace did not apply good-faith-effort authority in any other way for 2016 enrollment, officials told us. CMS also did not otherwise waive, amend, or extend verification or eligibility controls in 2016, officials told us. We asked CMS officials for details on the number of applications that benefited from good-faith-effort extensions for 2016, including the reason for granting the extensions and types of inconsistencies at issue, and as of August 2016 officials had yet to respond. In the case of duplicate enrollments, our fictitious applicant was first approved for subsidized coverage in the California marketplace and then—using the same identity—applied and was approved for a qualified health plan in Virginia and for Medicaid in West Virginia. When our applicant made the Medicaid application, a federal Marketplace representative flagged the applicant as potentially fraudulent. Nonetheless, the applicant was told that he was eligible for coverage. CMS officials told us they consider it highly unlikely, and thus low risk, that individuals would apply for multiple plans for themselves, given the cost of paying premiums on more than one plan. But they also acknowledged they are interested in the possibility that multiple enrollments could represent identity theft, and said they are working on approaches to identify such situations. As CMS has reported to us previously, officials during this review also said the agency is unaware of any fraud in individual consumer applications for federal Marketplace coverage. 
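The one-time extension rules described above can be sketched as a small lookup. This is an illustrative sketch only: the trigger names, function, and data structure are our own assumptions, not CMS system identifiers; only the durations (90, 30, and 60 days) and the one-time limit come from what CMS officials described.

```python
# Hypothetical sketch of the good-faith-effort extension rules CMS officials
# described; trigger names and function are illustrative, not CMS identifiers.
from datetime import date, timedelta

# One-time extensions, keyed by the triggering condition (assumed labels).
EXTENSION_DAYS = {
    "missed_reminder_notices": 90,  # applicant never got standard reminder notices
    "missed_warning_call": 30,      # Marketplace never called to warn of deadline
    "consumer_request": 60,         # consumer affirmatively asked for more time
}

def extended_deadline(original_deadline: date, trigger: str,
                      already_extended: set) -> date:
    """Apply a one-time extension for a given trigger, if not used before."""
    if trigger in already_extended or trigger not in EXTENSION_DAYS:
        return original_deadline
    already_extended.add(trigger)
    return original_deadline + timedelta(days=EXTENSION_DAYS[trigger])
```

For example, a consumer request against a May 1, 2016, deadline would move the deadline 60 days later, and a second request under the same trigger would leave it unchanged.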
Apart from individual-consumer-level fraud, instances have occurred in which agents or brokers have submitted applications for people without their knowledge, for financial gain, such as when an agent or broker working for an organization is paid on commission based on the number of people enrolled, CMS officials told us. The officials said some consumers have reported to the agency that they have been enrolled without their knowledge. CMS officials declined to provide other details, saying work in this area is law-enforcement sensitive. In responding to consumer complaints, CMS has recently developed a capability for service-center representatives to direct complaint information to a program-integrity office for investigation into waste, fraud, or abuse, CMS officials told us. They likewise said further details were unavailable. Overall, according to CMS officials, the federal Marketplace has made a number of improvements to the eligibility and enrollment process, as well as the process for resolving application inconsistencies. In particular, CMS officials said the agency has focused on providing applicants with specific details of what documentation is required, and that notices sent to consumers have been improved. As a result, more consumers are sending proper documentation with appropriate information in response to Marketplace requests, and applicant inconsistencies are down. As an illustration of improvements in document filing, CMS officials cited a 40 percent increase in the number of documents consumers have submitted to resolve inconsistencies. According to CMS officials, the Marketplace has also relaxed the income-inconsistency resolution-threshold standard beginning with applications for 2016 coverage. Under this change, the acceptable variance for applicants submitting documentation to resolve income inconsistencies has increased, and the inconsistency can be resolved if the new income information meets one of two standards. 
First, for the inconsistency to be officially resolved, the income amount an applicant submits later, when providing documentation to resolve an income inconsistency, can now differ by up to 25 percent, up from 20 percent, from the amount the applicant initially reported. Second, the applicant can resolve the inconsistency if the income difference is within $6,000. CMS officials said the federal Marketplace made the change in recognition that many applicants experience variations in earnings, making it hard to project income. They said that for lower-income households a small difference in income, measured in dollars, can result in a large percentage change. In the case of our Medicaid application, as noted, we applied through the federal Marketplace and were told our applicant may be eligible for Medicaid and that the West Virginia state Medicaid agency would contact us with a final determination. When we later called the West Virginia state Medicaid agency, we were told our applicant was approved for Medicaid. When we shared the results of our testing with West Virginia Medicaid officials, they told us that without a specific identity for our fictitious applicant, they could not comment authoritatively on the outcome. However, the officials said that because our applicant was not directed to produce any documentation, it is likely that the federal Marketplace did not pass along any application inconsistencies, assuming the application was processed properly. As noted, for this fictitious applicant a federal Marketplace representative said the case would be flagged as a "fraud issue," because applicant identifying information was already present in the Marketplace system. However, West Virginia officials said the Marketplace does not pass along to the state any information suggesting fraud. 
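The relaxed two-part income-resolution standard described earlier (a 25 percent tolerance, or a $6,000 tolerance) can be expressed as a simple check. This is an illustrative sketch, not CMS code; it assumes the 25 percent tolerance is measured against the amount the applicant initially reported, and the function name is our own.

```python
def income_inconsistency_resolved(attested: float, documented: float) -> bool:
    """Illustrative check of the 2016 standard: the inconsistency is treated
    as resolved if documented income is within 25 percent of the attested
    amount (up from 20 percent) or within $6,000 of it (assumed reading)."""
    difference = abs(documented - attested)
    return difference <= 0.25 * attested or difference <= 6_000
```

Under this reading, a $4,500 gap on $20,000 of attested income falls within the 25 percent tolerance and would be resolved, while a $12,000 gap on $40,000 of attested income fails both tests and would remain unresolved.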
The West Virginia officials told us that this experience illustrates why the federal Marketplace should make a greater effort to verify identity before sending Medicaid applicant information to the state. West Virginia is a “determination state,” meaning it delegates eligibility determinations to the federal Marketplace. That underscores the importance of the federal Marketplace making accurate determinations, the officials said. The officials told us West Virginia’s experience with Medicaid applicant data from the federal Marketplace has been that for 2014, the first coverage year for the program, data quality was not good overall. For example, the state would receive applicant information showing income exceeding Medicaid limits; or, the state would not receive Medicaid application information that should have passed from the federal Marketplace to the state. But since the first year, data quality has improved significantly, the officials said, although quality issues such as blank data fields or incorrect Social Security numbers remain. Data quality is important because if applicant information cannot be verified electronically, a Medicaid case worker must review it manually, the officials said. Had our West Virginia Medicaid applicant been directed to send documentation, a case worker would have examined it with the level of scrutiny applied varying according to the particular situation, the officials told us. Although Covered California officials also told us that without specific identities they could not comment on individual outcomes, in general they said the results of our undercover testing indicate their marketplace processes worked as designed. For example, they said, when our applicants could not clear online identity proofing and contacted Covered California representatives by phone, the representatives were correct in first seeking to direct the applicants to visit enrollment counselors, so they could verify identities in person. 
While in-person presentation of identity documentation is never required, the officials said, an in-person visit provides an opportunity to examine identity documents. When our applicants indicated they would have difficulty in doing so, the representatives were also correct in offering the opportunity to file a paper application, the officials said. Likewise, applicants were treated correctly in being granted eligibility with the directive to provide supporting documentation. The state officials noted that under PPACA the marketplace is required to accept paper applications. While our applicants could not establish their identity through the standard online process, the officials said, they could file a paper application, which is signed under penalty of perjury. The same paper process is available for those originally applying by telephone, they said. Obtaining an eligibility determination then becomes possible—an option precluded by failure to confirm an identity in the online process, the officials said. The state marketplace does not have any information on the extent to which the threat of a penalty for perjury actually compels applicants to provide truthful answers. Like the federal Marketplace, Covered California made use of a good-faith-effort extension policy for applicant documentation. According to state officials, consumers must affirmatively request such an extension by contacting the state marketplace. They can be granted a maximum of 60 additional days to file required documentation, beyond the 90-day period initially provided. According to the officials, there has been a low volume of such requests—about 10 percent to 15 percent of consumers required to submit documentation to retain coverage. Covered California officials also told us they have eased documentation requirements in several other ways:
Income: Covered California is not taking steps to resolve income inconsistencies. 
Even though it requested applicants to submit income documentation, it is not taking action in cases in which they do not. The reason is a policy decision that the question of whether the amount of subsidies received was proper will be addressed through the tax reconciliation process. The marketplace provides consumers with multiple notices, alerting them to possible tax consequences of income inconsistencies, officials said. In addition, Covered California decided to give higher priority to other inconsistencies that can lead to termination of coverage, such as citizenship/lawful presence, rather than to adjustment of subsidies, they said. We note that under PPACA, even if reconciliation is made, the amount of excess APTC that can be recovered can be limited, based on household income and tax-filing status. CSR subsidies, however, are not subject to reconciliation.
Minimum essential coverage: The marketplace is not taking action to verify applicants' claims that they do not have access to "minimum essential coverage" and hence can apply for subsidized coverage through the marketplace. While important, such cases account for a very low percentage of all applications, the officials said.
Incarceration: Rather than rely on documentation, the marketplace accepts applicant attestation on incarceration status. Under PPACA, those who are incarcerated are not eligible for coverage, unless they are incarcerated awaiting disposition of charges. The officials said they did not have information on the number of such attestations provided.
Otherwise, Covered California officials told us the state marketplace has made a number of improvements. In May 2016, it implemented a system check to guard against use of impossible Social Security numbers; we used such numbers in our 2015 undercover testing, which included California. The marketplace is more consistently reminding people when documents are due and warning of loss of coverage if the material is not provided, they said. 
Consumer notices overall are more readable, following work with focus groups, according to the officials, and efforts are under way to address cases in which applicant-supplied Social Security numbers cannot be verified through the data hub. Such applicants, too, are being warned about loss of subsidy or coverage. On the issue of identity theft and duplicate enrollment, Covered California officials said that while the state marketplace can check its own records, it would be helpful if CMS could supply data on those obtaining plans through the federal Marketplace. That way the state marketplace could check those obtaining coverage against coverage obtained elsewhere. We retained subsidized coverage for 10 of the 11 qualified-health-plan applicants through August 2016, even though supporting documentation we submitted was fictitious, and in some cases we submitted none or only some of the documentation we were directed to send. As noted, we focused our testing on 12 fictitious applicants. For all 11 of our applicants approved for qualified-health-plan coverage with subsidies, we were directed to provide supporting documentation. Our applicant approved for Medicaid received no direction to provide supporting documentation. In response to the marketplace directives to the 11 subsidized qualified-health-plan applicants, we provided follow-up documentation, albeit fictitious. Overall, we varied what we submitted by application—providing all, none, or only some of the material we were told to send—to test controls and note any differences in outcomes. Among the 11 applications for which we were directed to send documentation, we submitted all requested documentation for five applications, partial documentation for three applications, and no documentation for the remaining three applications. 
Figure 5 summarizes document submissions and outcomes for the 11 qualified-health-plan applicants, plus the Medicaid application for which, as noted, we were not directed to send documentation. In two of the cases in which we provided only partial documentation, our applicants were nevertheless able to clear inconsistencies through conversations with marketplace phone representatives. For example, in one case we called the federal Marketplace to discuss notices received about application inconsistencies. A representative told our applicant that the applicant needed to submit documentation on citizenship status and Social Security number. However, our applicant told the representative that the applicant had a name change, and provided the former name. The representative appeared to enter this information into the Marketplace system before saying the documentation issues had been cleared, and no other information was required. The information our applicant provided over the phone, however, did not match documentation our applicant had filed previously. Without a specific identity, CMS officials could not say conclusively what happened with our application. Generally, however, they told us that under certain circumstances, such as an applicant providing new information, a previously recorded inconsistency may become inactive. In one of the 11 qualified-health-plan cases, as shown in figure 5, our fictitious applicant's coverage was terminated after the document-submission period, after we failed—by design—to provide any documentation to clear an inconsistency, in this case regarding immigration status. We also noted other issues with marketplace-requested documentation: In one case involving Social Security identity, our applicant was directed to supply proof of a valid Social Security number at the time of initial eligibility determination. A subsequent Marketplace notice in early 2016, however, omitted that directive. 
We believe this could be confusing to an applicant. Further, to the extent it might cause an applicant to not submit necessary documentation, the discrepancy could lead to loss of coverage. CMS officials told us that the Marketplace initially requests a Social Security number, because having a Social Security number can help to clear other inconsistencies. The Marketplace does not, however, make it a practice to resolve Social Security inconsistencies alone. In another of our applications involving Social Security identity, a Marketplace representative noted a discrepancy with our applicant’s Social Security number, and inquired about the possibility of identity theft. Based on our applicant’s assurances, however, the representative cleared the discrepancy and made no request for the applicant’s Social Security card. In some cases, our applicants presented identical information, but marketplace handling of their applications was different. For example, in each of two federal Marketplace applicant scenarios, we claimed to be lawfully present and with income at a level qualifying for a subsidy. In each case, we were directed to provide proof of immigration status and income, and in both cases, we did not provide any documentation. In one case we lost coverage, while in the other we retained it. As noted, we elected not to continue testing with three scenarios after encountering premium-payment issues. Even though our coverage was canceled in these cases, we continued to receive marketplace notices directing us to provide supporting documentation or risk losing coverage. Such a situation could cause consumer confusion. CMS officials told us this practice is by design, because if consumers reapply later, they would still need to resolve inconsistencies previously identified. As noted in the case of our one successful Medicaid application, we were not directed to submit any supporting documentation. 
In discussing these outcomes for our fictitious applicants, federal and state marketplace officials reaffirmed, as we have reported previously, that the marketplaces do not seek to identify fraudulent document submissions. Federal Marketplace officials said document-review standards—in which CMS's documents-processing contractor is not required to examine documents for fraud—remain unchanged. Unless documents show signs of being visibly altered, they are accepted as authentic. Covered California officials likewise told us marketplace service-center representatives do not authenticate documents. As with the federal Marketplace, the standard for review is visible alteration and whether a document presented appears as it should; that is, for example, that a permanent-resident card submitted conforms to the established design of such a card. If documents do look suspicious, they can be referred to a consumer-protection office for investigation, the officials said. Thus far, the office has not received any such referrals, they said. In addition, as noted earlier, federal officials cited good-faith-effort extensions as possibly contributing to our outcomes. California officials said the state marketplace does not take action when applicants fail to submit requested income documentation, thereby leaving income inconsistencies unresolved, which could account for our results. For overall handling of inconsistency resolution, we asked CMS about the number of unresolved inconsistencies and the value of associated APTC and CSR subsidies. As of August 2016, the agency had yet to respond. Covered California officials provided some information on the state marketplace's experience with inconsistency resolution. Since January 1, 2016, the marketplace has eliminated APTC for failure to resolve citizenship/lawful-presence inconsistencies in 10,043 cases, and likewise for 875 cases with unresolved incarceration inconsistencies. 
Covered California did not have information on the value of subsidies for these groups. As of June 2016, Covered California's largest categories of unresolved inconsistencies were income (190,693 cases), Social Security number (9,247 cases), and citizenship/lawful presence (7,717 cases). The values of associated subsidies were likewise unavailable, officials told us. We provided a draft of this report to HHS, IRS, Covered California, and the West Virginia state Medicaid agency for their review and comment. HHS, IRS, and Covered California provided technical comments, which we incorporated as appropriate. HHS provided us with written comments, which are reprinted in appendix II. Covered California's comments, along with our responses, are reprinted in appendix III. The West Virginia Medicaid agency did not provide comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Acting Administrator of CMS, the Commissioner of Internal Revenue, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6722 or bagdoyans@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. 
The objective of this report is to describe, by means of undercover testing and related work, potential vulnerabilities to fraud in the application, enrollment, and eligibility-verification controls of the federal Health Insurance Marketplace (Marketplace) and a selected state marketplace, for the third open-enrollment period under the Patient Protection and Affordable Care Act, for 2016 coverage. Our testing covered both individual health-care plans and Medicaid, with a portion focusing on a requirement that applicants who previously received advance payment of tax credits to subsidize their monthly premium payments must file federal income-tax returns and account for those credits, in order to continue receiving subsidies in future years. To perform our undercover testing of the application, enrollment, and eligibility-verification process for the 2016 open enrollment season— which ran from November 1, 2015, to January 31, 2016—we used fictitious identities for the purpose of making 15 applications. Specifically, we made 14 applications for individual plans, and 1 application for Medicaid. In these 15 applicant scenarios, we chose to test controls for verifications related to the following: 1. Whether applicants had made required income-tax filings. We made six such fictitious applications. For qualifying applicants, the act provides two possible forms of subsidies for consumers enrolling in individual health plans, both of which are generally paid directly to insurers on consumers’ behalf. One is a federal income-tax credit, which enrollees may elect to receive in advance, which reduces a consumer’s monthly premium payment. This is known as the advance premium tax credit. 
If an applicant chooses to have all or some of his or her credit paid in advance, the applicant is required to “reconcile” on his or her federal income-tax return the amount of advance payments the government sent to the applicant’s insurer on the applicant’s behalf with the tax credit for which the applicant qualifies based on actual reported income and family size. Our group of six fictitious applicants tested this reconciliation requirement. 2. The identity or citizenship/immigration status of the applicant, or whether the applicant had sought enrollment in multiple plans. We made nine such fictitious applications. In general, our testing approach allowed us to test similar scenarios across different states. We made 10 of our applications online initially, and 5 by phone. In some cases, we filed paper applications, as is permissible, after speaking with marketplace representatives. We set our applicants’ income levels at amounts eligible for subsidies provided under the act, or to meet Medicaid eligibility requirements, as appropriate. Because the federal government, at the time of our review, operated a marketplace on behalf of the state in about three-quarters of the states, we focused our work on those states. Specifically, we selected two states—Virginia and West Virginia—that elected to use the federal Marketplace rather than operate a marketplace of their own. We selected one additional state—California—that operates its own marketplace. The results obtained using our limited number of fictional applicants are illustrative and represent our experience with applications in the three states we selected. The results cannot, however, be generalized to the overall population of applicants or enrollees. For all 15 fictitious applications, we used publicly available information to construct our scenarios. 
We also used publicly available hardware, software, and materials to produce counterfeit or fictitious documents, which we submitted as appropriate for our testing. In responding to marketplace directives to submit documentation, we adopted an approach of submitting all requested documentation in some cases, partial documentation in other cases, or no documentation in the remaining cases, in order to note any differences in outcomes. We observed any approvals received, and responded as appropriate for our testing to any directions to provide additional supporting documentation. Fourteen of our 15 applicant scenarios involved qualified individual health plans. For these 14 plans, we attempted to pay the required premiums to put policies into force. For 11 of these 14 applicants, we successfully made premium payments. However, for three applicants, our initial premium payments—made to insurers we selected—were unsuccessful, and we were unable to resolve the issue. While we believed we had received confirmation of premium payments, insurers said payments were not received on a timely basis. As a result, our coverage was not put into effect in these three cases. At that point, because these cases had experienced different treatment than our other applications and no longer matched our original testing profile, we elected to discontinue them from further testing. To protect our undercover identities, we did not provide the marketplaces with specific applicant identity information. Centers for Medicare and Medicaid Services (CMS) and state officials generally told us that without such information they could not fully research handling of our applicants. We also reviewed statutes, regulations, and other policy and related information. Overall, our review covered the act’s third open-enrollment period, for 2016 coverage, as well as follow-on work after close of the open-enrollment period. 
After conducting our undercover testing, we briefed officials from CMS, the Internal Revenue Service, the California marketplace, and the West Virginia state Medicaid agency on our results and sought their views on the outcomes. We conducted this performance audit from November 2015 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. We conducted our related investigative work in accordance with investigative standards prescribed by the Council of the Inspectors General on Integrity and Efficiency. 1. For the two California applications that were submitted to test income-tax filing and reconciliation requirements, we did provide a valid Social Security number. This means that our applicants could have been flagged by the Internal Revenue Service for failure to file tax returns. As Covered California noted, if the Social Security number is invalid or is not provided, the Internal Revenue Service does not return a failure-to-file code to the marketplace. In addition to the contact named above, Philip Reiff, Gary Bianchi, and Helina Wong, Assistant Directors; Evelyn Calderón; Paul Desaulniers; Ranya Elias; Robert Graves; Olivia Lopez; Maria McMullen; James Murphy; George Ogilvie; Ramon Rodriguez; Christopher H. Schmitt; Julie Spetz; and Elizabeth Wood made key contributions to this report.
PPACA provides for the establishment of health-insurance marketplaces where consumers can, among other things, select private health-insurance plans. States may operate their own health-care marketplace or rely on the federal Health Insurance Marketplace (Marketplace). The Congressional Budget Office estimates subsidies and related spending under PPACA at $56 billion for fiscal year 2017. GAO was asked to review marketplace enrollment and verification controls for the act's third open-enrollment period, ending in January 2016. This report provides results of GAO undercover testing of potential vulnerabilities to fraud in the application, enrollment, and eligibility-verification controls of the federal Marketplace and one selected state marketplace. GAO submitted 15 fictitious applications for subsidized coverage through the federal Marketplace in Virginia and West Virginia and through the state marketplace in California. GAO's applications tested verifications related to (1) applicants' making required income-tax filings, and (2) applicants' identity or citizenship/immigration status. The results, while illustrative, cannot be generalized to the full population of enrollees. GAO discussed results with CMS, IRS, and state officials. Written comments from HHS and California are included in the report. GAO currently has eight recommendations to CMS to strengthen its oversight of the federal Marketplace (see GAO-16-29). CMS concurred with the recommendations, and implementation is pending. The Patient Protection and Affordable Care Act (PPACA) requires health-insurance marketplaces to verify application information to determine eligibility for enrollment and, if applicable, determine eligibility for income-based subsidies. Verification steps include validating the applicant's Social Security number, if one is provided; citizenship or immigration status; and household income. 
PPACA requires the marketplaces to grant eligibility while identified inconsistencies between the information provided by the applicant and by government sources are being resolved. The 2016 coverage year was the first year in which a key eligibility requirement—verification of whether applicants who previously received one type of federal subsidy under the act filed federal tax returns, as a requirement to retain that benefit—went into effect. As previously reported for the 2014 and 2015 coverage years, GAO's undercover testing for the 2016 coverage year found that the health-care marketplaces' eligibility determination and enrollment processes remain vulnerable to fraud. The marketplaces initially approved coverage and subsidies for GAO's 15 fictitious applications. However, three applicants were unable to put their policies in force because their initial payments were not successfully processed. GAO focused its testing on the remaining 12 applications. For four applications, to obtain 2016 subsidized coverage, GAO used identities from its 2014 testing that had previously obtained subsidized coverage. Although none of the fictitious applicants filed a 2014 tax return, all were approved for 2016 subsidies. Marketplace officials told GAO that they allowed applicants to attest to filing taxes if information from the Internal Revenue Service (IRS) indicated that the applicant did not file tax returns. Marketplace officials said one reason they allow attestations is a time lag between when tax returns are filed and when they are reflected in IRS's systems. CMS officials said they are rechecking 2014 tax-filing status. For eight applications, GAO used new fictitious identities to test verifications related to identity or citizenship/immigration status and, in each case, successfully obtained subsidized coverage. 
When marketplaces directed 11 of the 12 applicants to provide supporting documents, GAO submitted fictitious documents as follows: For five applications, GAO provided all documentation requested and the applicants were able to retain coverage. For three applications, GAO provided only partial documentation and the applicants were able to retain coverage. Two of these applicants were able to clear inconsistencies through conversations with marketplace phone representatives even though the information provided over the phone did not match the fictitious documentation that GAO previously provided. For three applications, GAO did not provide any of the requested documents, and the marketplaces terminated coverage for one applicant but did not terminate coverage for the other two applicants. According to officials from the Department of Health and Human Services' (HHS) Centers for Medicare & Medicaid Services (CMS), some of GAO's application outcomes could be explained by decisions to extend document filing deadlines.
The FLSA requires that workers who are covered by the act and not specifically exempt from its provisions be paid at least the federal minimum wage (currently $7.25 per hour) and 1.5 times their regular rate of pay for hours worked over 40 in a workweek. The act also regulates the employment of youth under the age of 18 and establishes recordkeeping requirements for employers, among other provisions. There are a number of exceptions to the requirements of the FLSA; for example, independent contractors are not covered by the FLSA, and certain categories of workers, such as those in bona fide executive, administrative, or professional positions, are exempt from the minimum wage requirements, overtime requirements, or both. WHD has issued regulations implementing the FLSA that further define these exemptions and other requirements of the FLSA. Through WHD, DOL pursues the mission of promoting and achieving compliance with labor standards to protect and enhance the welfare of the nation’s workforce. The FLSA authorizes DOL to enforce its provisions by, for example, conducting investigations, assessing penalties, supervising payment of back wages, and bringing suit in court on behalf of employees. DOL’s WHD also conducts a range of compliance assistance activities to support employers in their efforts to understand and meet the requirements of the law. WHD’s enforcement and compliance assistance activities are conducted by staff in its 52 district offices, which are located throughout the country and managed by staff in its five regional offices and Washington, D.C. headquarters. In response to complaints of alleged FLSA violations it receives from workers or their representatives, WHD conducts several types of enforcement activities. 
These range from comprehensive investigations covering all laws under the agency’s jurisdiction to conciliations, a quick remediation process generally limited to a single alleged FLSA violation such as a missed paycheck for a single worker. Before WHD initiates an investigation of a complaint, it screens the complaint to determine, among other factors, whether the allegations, if true, would violate the law, and to ensure the statute of limitations has not expired. If WHD identifies a violation through an enforcement activity, but the employer refuses to pay the back wages or penalties assessed, DOL’s Office of the Solicitor may sue the employer on behalf of the affected workers. In fiscal year 2012, WHD conducted investigations or conciliations in response to about 20,000 FLSA complaints and DOL’s Office of the Solicitor filed about 200 federal lawsuits to enforce the requirements of the FLSA on behalf of workers. In addition to responding to complaints, WHD enforces the requirements of the FLSA by initiating investigations of employers in industries it believes have a high probability of violations but a low likelihood that workers will file a complaint. In fiscal year 2012, WHD concluded about 7,000 targeted FLSA investigations. WHD encourages compliance with the FLSA by providing training for employers and workers and creating online tools and fact sheets that explain the requirements of the law and related regulations, among other efforts. The agency refers to these efforts collectively as compliance assistance. One form of FLSA compliance assistance WHD provides is written interpretive guidance that attempts to clarify the agency’s interpretation of a statutory or regulatory provision. WHD disseminates this guidance to those who request it—such as employers and workers—and posts it on the WHD website for public use. WHD’s interpretive guidance includes opinion letters, which apply to a specific situation. 
However, in 2010, WHD stopped issuing opinion letters and indicated that it would instead provide administrator interpretations, which are more broadly applicable. As a result of the Portal-to-Portal Act of 1947, which established a “safe harbor” from liability under the FLSA for employers that rely in good faith on a written interpretation from WHD’s administrator, certain WHD written guidance could potentially provide a “safe harbor” in FLSA litigation. The FLSA also grants workers the right to file a private lawsuit to recover wages they claim they are owed because of their employer’s violation of the act’s minimum wage or overtime pay requirements. WHD cannot investigate all of the thousands of complaints it receives each year because of its limited capacity. Therefore, the agency informs workers whose complaints of FLSA violations are not investigated or otherwise resolved by WHD of their right to file a lawsuit. Workers filing an FLSA lawsuit may file in one of the 94 federal district courts, which are divided into 12 regional circuits across the country. FLSA lawsuits may be brought individually or as part of a collective action. A collective action is a single lawsuit filed by one or more representative workers on behalf of multiple workers who claim that an employer violated the FLSA in similar ways. The court will generally certify whether an action meets the requirements to proceed as a collective; in some cases, the court may decertify a collective after it is formed if the court subsequently determines that the collective does not meet those requirements. In such cases, the court may permit the members of the decertified collective to individually file private FLSA lawsuits. Over the last two decades, the number of FLSA lawsuits filed nationwide in federal district courts has increased substantially, with most of this increase occurring in the last decade. 
Since 1991, the number of FLSA lawsuits filed has increased by 514 percent, with a total of 8,148 FLSA lawsuits filed in fiscal year 2012. Since 2001, when 1,947 FLSA lawsuits were filed, the number of FLSA lawsuits has increased sharply (see fig. 1). Not only has the number of FLSA lawsuits increased, but they also constitute a larger proportion of all federal civil lawsuits than they did in past years. In 1991, FLSA lawsuits made up less than 1 percent (0.6 percent) of all civil lawsuits, but by 2012, FLSA lawsuits accounted for almost 3 percent of all civil lawsuits, an increase of 383 percent. These increases, however, were not evenly distributed across all states. In fact, while federal district courts in most states saw increases in both the number of FLSA lawsuits filed and the percentage of all civil lawsuits filed that were FLSA lawsuits, increases in a small number of states were substantial and contributed significantly to the overall trends. In each of three states—Florida, New York, and Alabama—more than 1,000 more FLSA lawsuits were filed in fiscal year 2012 than in fiscal year 1991 (see fig. 2). Since 2001, filings in those three states have accounted for more than half of all FLSA lawsuits. About 43 percent of FLSA lawsuits filed nationwide during this period were filed in either Florida (33 percent) or New York (10 percent). At the same time, the percentage of all federal civil cases that were FLSA cases in those three states also increased significantly. In both Florida and New York, growth in the number of FLSA lawsuits filed was generally steady, while changes in Alabama involved sharp increases in fiscal years 2007 and 2012 with far fewer lawsuits filed in other years. Each spike in Alabama coincided with the decertification of at least one large collective action, which likely resulted in multiple individual lawsuits. 
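The growth figures in the preceding discussion are internally consistent and can be checked with a few lines of arithmetic. A sketch: the roughly 1,300-lawsuit baseline for 1991 is inferred here from the stated 514 percent increase and is not a figure given in the source.

```python
def percent_increase(old: float, new: float) -> float:
    """Percentage growth from an old value to a new value."""
    return (new - old) / old * 100

# 8,148 FLSA suits in FY2012 after a 514% increase implies a 1991
# baseline of roughly 8,148 / 6.14 suits (an inference, not a source figure).
baseline_1991 = 8148 / (1 + 514 / 100)
print(round(baseline_1991))  # about 1327

# Share of all civil suits: 0.6% in 1991 vs. roughly 2.9% in 2012,
# consistent with the stated increase of 383 percent.
print(round(percent_increase(0.6, 2.9)))  # 383
```

The second check confirms that the "383 percent" figure refers to growth in the FLSA share of civil filings, not growth in the raw case count.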
From 1991 to 2012, while most states experienced increases in the number of FLSA lawsuits filed, in the proportion of civil lawsuits that were FLSA lawsuits, or in both, these trends were not universal. In 14 states, the number of FLSA lawsuits filed in 2012 was less than or the same as the number of FLSA lawsuits filed in 1991, and in 10 states FLSA cases made up a smaller proportion of civil lawsuits in 2012 than in 1991. While many factors have likely contributed to the overall increase in FLSA lawsuits and stakeholders we interviewed cited multiple factors, they most frequently cited increased awareness about FLSA cases and activity on the part of plaintiffs’ attorneys as a significant contributing factor. Many stakeholders, including two plaintiffs’ attorneys, told us that financial incentives, combined with the fairly straightforward nature of many FLSA cases, made attorneys receptive to taking these cases. In some states, specifically Florida, where nearly 30 percent of all FLSA lawsuits were filed from 1991 to 2012, several stakeholders, including federal judges, WHD officials, and a defense attorney, told us that plaintiffs’ attorneys advertise for wage and hour cases via billboards, radio, foreign language press, and other methods. Two stakeholders we spoke with also said that some plaintiffs’ attorneys, when consulted by potential clients about other employment issues, such as wrongful termination, will inquire about potential wage and hour claims; a plaintiffs’ attorney we interviewed also said that it is generally easier to evaluate the potential merits of wage and hour cases than of wrongful termination and employment discrimination cases. While a few stakeholders said they did not view increased interest among plaintiffs’ attorneys to be a significant factor in the increase in FLSA lawsuits, most did, including some plaintiffs’ attorneys. 
Two stakeholders, including an academic and a representative of an organization that works on behalf of low wage workers, told us that this increased interest was beneficial because it served to counterbalance DOL’s limited FLSA enforcement capacity. In addition, several stakeholders told us that evolving case law may have contributed to the increased awareness and activity on the part of plaintiffs’ attorneys. In particular, they mentioned the 1989 Supreme Court decision Hoffmann–La Roche, Inc. v. Sperling, which held that federal courts have discretion to facilitate notice to potential plaintiffs of ongoing collective actions. Historically, according to several stakeholders, the requirement that plaintiffs must “opt in” to a collective action had created some challenges to forming collectives because the plaintiffs’ attorneys had to identify potential plaintiffs and contact them to get them to join the collective. Stakeholders we interviewed said the Hoffmann-La Roche decision, which made it easier for plaintiffs’ attorneys to identify potential plaintiffs, reduced the work necessary to form collectives. In addition, according to several stakeholders we interviewed, case law in other areas of employment litigation, such as employment discrimination, has evolved, making FLSA cases relatively more attractive for plaintiffs’ attorneys who specialize in employment litigation and large multi-plaintiff cases. For example, one attorney cited two Supreme Court decisions in the late 1990s that made it more difficult for plaintiffs in employment-based sex discrimination lawsuits to prevail, and led plaintiffs’ attorneys to consider other types of employment litigation such as FLSA cases. Stakeholders also cited other factors that may have contributed to the increase in FLSA litigation over the last two decades; however, these factors were endorsed less consistently than the role played by plaintiffs’ attorneys. 
First, a number of stakeholders said that economic conditions, such as the recent recession, may have played a role in the increase in FLSA litigation. Workers who have been laid off face less risk when filing FLSA lawsuits against former employers than workers who are still employed and may fear retaliation as a result of filing lawsuits. In addition, some stakeholders said that, during difficult economic times, employers may be more likely to violate FLSA requirements in an effort to reduce costs, possibly resulting in more FLSA litigation. Moreover, one judge we interviewed noted that the recent recession has also been difficult for attorneys and may be a factor in the types of lawsuits and clients they choose to accept. In addition, ambiguity in applying the FLSA statute or regulations—particularly the exemption for executive, administrative, and professional workers—was cited as a factor by a number of stakeholders. In 2004, DOL issued a final rule updating and revising its regulations in an attempt to clarify these exemptions and provided guidance about the changes, but a few stakeholders told us there is still significant confusion among employers about which workers should be classified as exempt under these categories. Finally, the potentially large number of wage and hour violations was given as a possible reason for the increase in FLSA litigation. Federal judges in New York and Florida attributed some of the concentration of such litigation in their districts to the large number of restaurants and other service industry jobs in which wage and hour violations are more common than in some other industries. 
An academic who focuses on labor and employment relations told us that centralization in the management structure of businesses in retail, restaurant, and similar industries has contributed to FLSA lawsuits in these industries because frontline managers who were once exempt have become nonexempt as their nonmanagerial duties have increased as a portion of their overall duties. Service jobs, including those in the leisure and hospitality industry, increased from 2000 to 2010, while most other industries lost jobs during that period. Many stakeholders also told us that the prevalence of FLSA litigation by state is influenced by the variety of state wage and hour laws. For example, while the federal statute of limitations for filing an FLSA claim is 2 years (3 years if the violation is “willful”), New York state law provides a 6-year statute of limitations for filing state wage and hour lawsuits. A longer statute of limitations may increase potential financial damages in cases because more pay periods are involved and because more workers may be involved. Adding a New York state wage and hour claim to an FLSA lawsuit in federal court may expand the potential damages, which, according to several stakeholders, may influence decisions about where and whether to file a lawsuit. In addition, according to multiple stakeholders we interviewed, because Florida lacks a state overtime law, those who wish to file a lawsuit seeking overtime compensation generally must do so under the FLSA. Our review of a representative sample of FLSA lawsuits filed in federal district court in fiscal year 2012 showed that approximately half were filed against private sector employers in four industries. Almost all FLSA lawsuits (97 percent) were filed against private sector employers. 
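The statute-of-limitations comparison above (the FLSA's 2-year federal window versus New York's 6-year state window) translates directly into differences in potential back-wage exposure. A minimal sketch with invented figures: the weekly underpayment amount and the simple 52-weeks-per-year accrual are illustrative assumptions, not anything the report quantifies.

```python
def back_wage_exposure(weekly_underpayment: float, limitation_years: int) -> float:
    """Hypothetical back wages accrued over a limitations period,
    assuming a constant weekly underpayment and 52 weeks per year."""
    return weekly_underpayment * 52 * limitation_years

# A worker underpaid $50/week: 2-year federal FLSA window vs.
# New York's 6-year state window.
print(back_wage_exposure(50.0, 2))  # 5200.0
print(back_wage_exposure(50.0, 6))  # 15600.0
```

Tripling the window triples the pay periods in play, which is the mechanism stakeholders cited when explaining why adding a New York state claim can expand potential damages.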
An estimated 57 percent of the FLSA lawsuits filed in fiscal year 2012 were filed against employers in four broad industry areas: accommodations and food services; manufacturing; construction; and “other services,” which, in our sample, included services such as laundry services, domestic work, and nail salons. Almost one-quarter of all lawsuits filed (an estimated 23 percent) were filed by workers in the accommodations and food service industry, which includes hotels, restaurants, and bars. This concentration of lawsuits is consistent with what several stakeholders, including DOL officials, told us about the large number of FLSA violations in the restaurant industry. At the same time, almost 20 percent of FLSA lawsuits were filed by workers in the manufacturing industry. In our sample, most of these lawsuits were filed by workers in the automobile manufacturing industry in Alabama, and most were individual lawsuits filed by workers who were originally part of one of two collective actions that had been decertified. It is important to note that, because of the presence of collective actions, the number of lawsuits filed against an industry’s employers may understate the number of plaintiffs involved in these suits. FLSA lawsuits filed in fiscal year 2012 included a variety of alleged FLSA violations in addition to at least one of the three types of claims—overtime, minimum wage, and retaliation—that each private FLSA lawsuit must at minimum contain. Allegations of overtime violations were the most common type among those explicitly stated in the documents we reviewed (see fig. 3). An estimated 95 percent of the FLSA lawsuits filed in fiscal year 2012 alleged violations of the FLSA’s overtime provision, which requires certain types of workers to be paid at one and a half times their regular rate for any hours worked over 40 during a workweek. 
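The overtime provision just described, pay at one and a half times the regular rate for hours beyond 40 in a workweek, reduces to a short calculation. A minimal sketch, using an invented wage and schedule:

```python
OVERTIME_THRESHOLD = 40    # hours per workweek, per the FLSA
OVERTIME_MULTIPLIER = 1.5  # "time and a half" for hours over the threshold

def weekly_pay(regular_rate: float, hours: float) -> float:
    """Straight-time pay plus the overtime premium for hours over 40."""
    overtime_hours = max(0.0, hours - OVERTIME_THRESHOLD)
    straight_time = regular_rate * hours
    premium = regular_rate * (OVERTIME_MULTIPLIER - 1) * overtime_hours
    return straight_time + premium

# A hypothetical worker earning $10/hour who works 45 hours is owed
# 45 * $10 in straight time plus 5 * $5 in overtime premium.
print(weekly_pay(10.0, 45))  # 475.0
```

Splitting the pay into straight time plus a half-time premium mirrors how the statute describes the obligation; the same result follows from paying 40 hours at the regular rate and 5 hours at 1.5 times that rate.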
Thirty-two percent of the FLSA lawsuits filed in fiscal year 2012 contained allegations that the worker or workers were not paid the federal minimum wage, another main provision of the FLSA, while a smaller percentage of lawsuits included allegations that the employer unlawfully retaliated against workers (14 percent). In addition, the majority of lawsuits contained other FLSA allegations, most often that the employer failed to keep proper records of hours worked by the employees (45 percent); failed to post or provide information about the FLSA, as required (7 percent); or violated requirements pertaining to tipped workers such as restaurant wait staff (6 percent). We also identified more specific allegations about how workers claimed their employers violated the FLSA. Nearly 30 percent of the FLSA lawsuits filed in fiscal year 2012 contained allegations that workers were required to work “off-the-clock” so that they would not need to be paid for that time, and 16 percent alleged that workers were not paid appropriately because they were employees who were misclassified as being exempt from FLSA protections. In a similar proportion of cases (13 percent), alleged overtime violations were claimed to be the result of the miscalculation of the wage rate a worker was entitled to as overtime pay. Such miscalculations could be the result, for example, of an employer not factoring in bonuses paid to workers when determining their regular rate of pay, which is used for the calculation of the overtime pay rate. Other lawsuits included allegations that a worker was misclassified as an independent contractor rather than an employee (4 percent). Independent contractors are generally not covered by the FLSA, including its minimum wage and overtime provisions. In our review of FLSA lawsuits filed in fiscal year 2012, we found that a majority of them were filed as individual actions, but collective actions also composed a substantial proportion of these lawsuits. 
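The regular-rate miscalculation described above, omitting bonuses when computing the rate used for overtime, can be illustrated with invented numbers. This sketch treats the bonus as part of straight-time pay for the week, as the text describes; the dollar figures are hypothetical.

```python
def regular_rate(hourly_wage: float, hours: float, weekly_bonus: float) -> float:
    """Regular rate for the week: all straight-time compensation,
    including the bonus, divided by hours worked."""
    return (hourly_wage * hours + weekly_bonus) / hours

# Hypothetical: $10/hour, 50 hours worked, $100 production bonus.
rate = regular_rate(10.0, 50, 100.0)  # 12.0, not 10.0
premium = 0.5 * rate * (50 - 40)      # 60.0 in overtime premium owed

# An employer ignoring the bonus would pay only 0.5 * 10 * 10 = 50.0,
# understating the premium by $10 for the week.
print(rate, premium)
```

The gap compounds across workers and pay periods, which is why miscalculated rates appear as a distinct category of alleged violation in the lawsuits reviewed.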
Collective actions can serve to reduce the burden on courts and protect plaintiffs by reducing costs for individuals and incentivizing attorneys to represent workers in pursuit of claims under the law. They may also protect employers from facing the burden of many individual lawsuits; however, they can also be costly to employers because they may result in large amounts of damages. We found that an estimated 58 percent of the FLSA lawsuits filed in federal district court in fiscal year 2012 were filed individually, and 40 percent were filed as collective actions. An estimated 16 percent of the FLSA lawsuits filed (about a quarter of all individually-filed lawsuits), however, were originally part of a collective action that was decertified (see fig. 4). For example, 14 of the 15 lawsuits in our sample filed in Alabama were filed by individuals who had been members of one of two collectives that were decertified in fiscal year 2012. Consistent with its stated mission of promoting and achieving compliance with labor standards, WHD has an annual process for planning how it targets its FLSA enforcement resources. Each year, WHD’s national office plans the share of its enforcement resources that will be used for targeted investigations versus responding to complaints it receives from workers or their representatives about potential FLSA violations. To plan the deployment of its resources for targeted investigations, WHD identifies broad initiatives that focus on industries it determines have a high likelihood of FLSA violations and where workers may be particularly vulnerable. For example, WHD has targeted industries where workers may be less likely to complain about violations or where the employment relationship is splintered because of models such as franchising or subcontracting. WHD’s regional and district offices then refine the list of priority industries to develop plans that focus on the most pressing issues in their areas. 
Each year, with input and ultimate approval from the national office, WHD’s regional and district offices use these plans to target their enforcement resources. In developing their enforcement plans, WHD considers various inputs. For example, WHD officials consider the nature and prevalence of FLSA violations by using historical enforcement data to study trends in FLSA complaints and investigation outcomes in particular areas. University-based researchers under contract with DOL have also used the agency’s historical enforcement data to help it plan for and strategize its FLSA enforcement efforts. In addition, WHD considers data from external sources, such as reports from industry groups, advocacy organizations, and academia. Although WHD’s national office is aware of significant FLSA lawsuits through its monitoring of FLSA issues in court decisions, WHD’s national, regional, and district offices do not analyze data on the number of FLSA lawsuits filed or use the results of such analyses to inform their enforcement plans. WHD officials noted that data on the number of FLSA lawsuits filed may not provide an accurate or sensitive gauge of FLSA violations because the number of workers involved and the outcomes of these lawsuits are not readily available. In developing their annual enforcement plans, WHD regional and district offices identify approaches to achieving compliance given the industry structure and the nature of the FLSA violations that they seek to address. According to WHD internal guidance, strategic enforcement plans should not only include targeted investigations of the firms that employ workers potentially experiencing FLSA violations, but they should also contain strategies to engage related stakeholders in preventing such violations. 
For example, if a WHD office plans to investigate restaurants to identify potential violations of the FLSA, it should also develop strategies to engage restaurant trade associations about FLSA-related issues so that these stakeholders can help bring about compliance in the industry. Our prior reports and DOL’s planning and performance documents have emphasized the need for WHD to help employers comply with the FLSA. In documenting best practices about planning and performance management, we have suggested that agencies “involve regulated entities in the prevention aspect of performance.” In the case of WHD, this best practice means helping employers voluntarily comply with the FLSA, among other laws. Similarly, DOL’s planning and performance documents have emphasized the importance of WHD promoting “sustained and corporate-wide compliance among employers” as a strategic priority. According to federal standards for internal control, program managers need operational data to determine whether they are meeting their agencies’ strategic and annual performance plans as well as their goals for effective and efficient use of resources. In addition, according to our guidance on the Government Performance and Results Act of 1993 (GPRA), for planning and performance measures to be effective, federal managers need to use performance information to identify problems and look for solutions and approaches that improve results. WHD expects staff in its regional and district offices to play a key role in delivering some forms of compliance assistance. For example, staff in the district offices respond directly to employers’ questions about laws such as the FLSA by providing informal guidance, most of which is offered over the phone but is sometimes provided in writing when the guidance is particularly technical. In addition, in each of WHD’s five regions, there are three or more staff who specialize in community outreach and planning. 
These specialists are involved in planning meetings and developing outreach efforts and other forms of compliance assistance as part of the annual, district-specific enforcement plans. Finally, WHD investigators in the field are responsible for providing information and education to employers during their enforcement actions. At the national level, WHD publishes FLSA-related guidance, most notably its interpretive guidance, though this guidance is not informed by systematic analysis of data on requests for assistance. To develop and assess its interpretive guidance about the FLSA, WHD’s national office considers input from its regional offices, but it does not have a data-based approach that is informed by objective input, such as data on areas in which employers and workers have indicated a need for additional guidance. Officials from WHD’s Office of Policy, which is responsible for publishing interpretive guidance about the FLSA, told us they meet with WHD regional and national office leadership weekly to discuss ongoing initiatives and emerging issues. While WHD collects some data on the inquiries it receives from the public via its call center, the Office of Policy does not analyze these data to help guide its development of interpretive guidance. According to WHD officials, the call center frequently refers callers with technical questions to a WHD district office, but WHD does not compile data on FLSA-related questions received by its district offices. In addition, WHD does not use advisory panels to gather input about areas of confusion that might be addressed with the help of additional or clarified interpretive guidance on the FLSA. WHD officials cited the administrative burdens associated with the Federal Advisory Committee Act as a deterrent to using such panels to inform its guidance. At the same time, despite the issuance of several FLSA-related fact sheets, WHD’s publication of FLSA-related interpretive guidance has declined in recent years. 
From 2001 to 2009, WHD published on its website, on average, about 37 FLSA interpretive guidance documents annually. However, in the last 3 years (2010 to 2012), WHD published seven FLSA interpretive guidance documents. WHD officials cited various reasons for this decline, including the resource-intensive nature of developing the guidance; WHD’s finite resources; and other priorities, such as promoting compliance with the Family and Medical Leave Act of 1993. In addition, WHD cited its issuance of several FLSA-related fact sheets, which WHD posts separately from interpretive guidance on its website. For example, in September 2013, WHD published several fact sheets about domestic service employment under the FLSA, and, in July 2013, it revised a fact sheet about ownership of tips under the FLSA. According to WHD officials, there is no backlog of requests for FLSA interpretive guidance; however, WHD does not maintain a system for tracking requests for such guidance. Because WHD does not have a systematic approach for identifying areas of confusion about the FLSA or assessing the guidance it has published, WHD may not be providing the guidance that employers and workers need. Of the nine wage and hour attorneys we interviewed, seven indicated that more interpretive guidance on the FLSA would be helpful to them. The attorneys cited determining whether workers qualify for exemptions and calculating workers’ regular rate of pay for purposes of overtime compensation as examples of FLSA topics on which more guidance would be useful. Some policymakers have raised questions about the effect that an increasing number of FLSA lawsuits might have on employers’ finances and their ability to hire workers or offer flexible work schedules and other benefits, but it is difficult to isolate the effect of these lawsuits from the effects of other influences such as changes in the economy. 
On the other hand, the ability of workers to bring such suits is an integral part of FLSA enforcement because WHD does not have the capacity to ensure that all employers are in compliance with the FLSA. While there has been a significant increase in FLSA lawsuits over the last decade, the reason for this increase is difficult to determine: it could suggest that FLSA violations have become more prevalent, that FLSA violations have been reported and pursued more frequently than before, or a combination of the two. Improved guidance from WHD might not affect the number of FLSA lawsuits filed, but it could increase the efficiency and effectiveness of WHD’s efforts to help employers voluntarily comply with the law. Without a precise understanding of the areas of the law and related regulations that are not clear to employers and workers, WHD may not be able to improve the guidance and outreach it provides in the most appropriate or efficient manner. A clearer picture of the needs of employers and workers would allow WHD to more efficiently design and target its compliance assistance efforts, which may, in turn, result in fewer FLSA violations. Moreover, using data about the needs of employers and workers in understanding the requirements of the FLSA would provide WHD greater confidence that the guidance and outreach it provides to employers and workers are having the maximum possible effect. Such data could, for example, serve as a benchmark WHD could use to assess the impact of its efforts. To help inform its compliance assistance efforts, the Secretary of Labor should direct the WHD Administrator to develop a systematic approach for identifying areas of confusion about the requirements of the FLSA that contribute to possible violations and improving the guidance it provides to employers and workers in those areas. 
This approach could include compiling and analyzing data on requests for guidance on issues related to the FLSA, and gathering and using input from FLSA stakeholders or other users of existing guidance through an advisory panel or other means. We provided a draft of this report to DOL for review and comment. DOL's WHD provided written comments, which are reproduced in appendix II. WHD agreed with our recommendation that the agency develop a systematic approach for identifying and considering areas of confusion that contribute to possible FLSA violations to help inform the development and assessment of its guidance. WHD stated that it is in the process of developing systems to further analyze trends in communications received from stakeholders such as workers and employers and will include findings from this analysis as part of its process for developing new or revised guidance. WHD also emphasized that it is difficult to determine with sufficient certainty that any particular action contributed to the described increase in FLSA lawsuits. In addition, WHD provided technical comments, which we incorporated as appropriate. We also provided a draft of this report to the Administrative Office of the United States Courts and the Federal Judicial Center. These agencies had no comments—technical or otherwise—on the report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Labor, the Director of the Administrative Office of the United States Courts, and the Director of the Federal Judicial Center. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or moranr@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To describe what is known about trends in the number of lawsuits filed under the Fair Labor Standards Act of 1938, as amended (FLSA), we analyzed federal district court data provided by the Federal Judicial Center. The data included case-specific information for all FLSA lawsuits filed in federal district court during fiscal years 1991 through 2012. We analyzed these data by year, circuit, state, and district. The Federal Judicial Center also provided data on the number of civil lawsuits filed during this time period, which we used to analyze the percentage of civil cases that were FLSA cases. We did not review FLSA lawsuits filed in state courts because data on these cases were not available in a consistent way. To provide context about identified trends such as the increase in FLSA lawsuits, we interviewed a range of FLSA stakeholders, including Department of Labor (DOL) officials, attorneys who specialize in wage and hour cases, federal district court and magistrate judges, officials from organizations representing workers and employers, and academics. To ensure balance, we interviewed both attorneys who represent plaintiffs and those who represent defendants. We identified some of these attorneys through organizations that represent workers, such as labor unions and advocacy organizations such as the National Employment Law Project, and industry organizations such as the National Small Business Association. In selecting judges to interview, we chose from districts with a significant increase in FLSA litigation in recent years as well as districts that had not seen such increases to ensure a variety of perspectives. 
To provide information on selected characteristics of FLSA lawsuits filed, we reviewed a nationally representative random sample of complaints from FLSA lawsuits filed in federal district court during fiscal year 2012. The sample of 97 complaints from FLSA lawsuits was drawn from the case-specific FLSA lawsuit data provided by the Federal Judicial Center. The filing date was determined by the “filing date” field, which records the date a case was docketed in federal court. Cases that were initially filed in federal court, cases removed from state court to federal court, cases transferred from another federal court, and cases for which a new federal docket was otherwise created during fiscal year 2012 (e.g., an individual docket created in fiscal year 2012 after a previously filed collective action was decertified) could be in our sample. We did not review cases from the federal courts of appeals because data linking appeals to their corresponding cases filed in district court were not readily available. All estimates from our sample have a 95 percent confidence interval of within plus or minus 10 percentage points. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 10 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. After the sample was drawn, we used the docket number associated with each lawsuit in the sample to retrieve the complaint from the Public Access to Court Electronic Records (PACER) database and review and record information about the lawsuit. 
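The plus-or-minus 10-percentage-point bound cited above is consistent with the standard formula for a proportion estimated from a simple random sample. The sketch below is illustrative only: the sample size of 97 is from the report, but the worst-case proportion of 0.5 and the optional finite-population correction are assumptions, not details of GAO's actual estimation procedure.

```python
import math

def margin_of_error(n, p=0.5, z=1.96, population=None):
    """Half-width of a 95 percent confidence interval for a proportion
    estimated from a simple random sample of size n."""
    se = math.sqrt(p * (1 - p) / n)
    if population:
        # Optional finite-population correction (illustrative assumption).
        se *= math.sqrt((population - n) / (population - 1))
    return z * se

# Sample of 97 complaints; p = 0.5 gives the widest (most conservative) interval.
moe = margin_of_error(97)
print(round(moe * 100, 1))  # → 10.0 (percentage points)
```

At p = 0.5 the half-width works out to just under 10 percentage points, matching the report's stated bound.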
In cases where multiple complaints were associated with a docket, such as amended complaints, we used information from the first available complaint in the docket. We selected this approach to ensure that we recorded information for each individual case at approximately the same stage of litigation. We relied solely on information that could be ascertained from the complaint; we did not do additional research to confirm the accuracy of the information. With respect to our analysis of the specific types of violations alleged in our sample of complaints, our estimates include only those cases in which an allegation was explicitly made in the complaint. It is possible that a larger percentage of FLSA litigation involved specific issues that were not alleged explicitly in the complaint and instead were described more generically as overtime or minimum wage violations. In addition, we did not review subsequent documents filed in the case beyond the initial complaint. Therefore, our review does not provide any information about how these lawsuits were ultimately resolved. Information on the number of plaintiffs taking part in a collective action was neither consistently available in the complaints nor precise when it was available. Our analysis with regard to the industries in which FLSA lawsuits are filed is based on the number of FLSA lawsuits filed, not the number of plaintiffs included in those lawsuits. A single collective action represents multiple plaintiffs. Therefore, the number of FLSA lawsuits filed against employers in a specific industry may not accurately reflect the number of workers or relative frequency of workers claiming FLSA violations by industry. 
The complaint from each lawsuit was reviewed by a GAO analyst and a GAO attorney to identify the FLSA violation(s) it alleged and other information, such as whether the lawsuit was a collective action, whether there were associated allegations of state wage and hour law violations, and the industry of the worker or workers who filed the lawsuit. In cases in which the two reviewers recorded information about the lawsuit differently, a discussion between them was held to resolve the difference. We assessed the reliability of the data received from the Federal Judicial Center by interviewing officials at the Administrative Office of the U.S. Courts and the Federal Judicial Center and by reviewing documentation related to the collection and processing of the data. In addition, we conducted electronic testing to identify any missing data, outliers, and obvious errors. We determined that certain data fields were not sufficiently reliable, and therefore did not use them. For example, we could not analyze data about judgments such as the amount of monetary awards in FLSA lawsuits because, for a large percentage of the cases, the information on the judgment was missing. We determined that the data included in our report were sufficiently reliable for our purposes. To describe how DOL’s Wage and Hour Division (WHD) plans its FLSA enforcement and compliance assistance efforts, we reviewed the agency’s planning and performance documents, as well as its published guidance on the FLSA. In addition, we interviewed DOL officials in WHD’s Office for Planning, Performance, Evaluation, and Communications; Office of Policy; two regional WHD offices; and DOL’s Office of the Solicitor about the agency’s enforcement and compliance assistance activities. In addition, we asked the other stakeholders we interviewed about their views of WHD’s enforcement and compliance assistance efforts. To provide context throughout the report, we also reviewed relevant federal laws and regulations. 
Finally, we compared WHD’s planning process to internal control standards (see GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1, Washington, D.C.: November 1999) and best practices that we have previously identified (see GAO, Managing for Results: Enhancing Agency Use of Performance Information for Management Decision Making, GAO-05-927, Washington, D.C.: Sept. 9, 2005; and Managing for Results: Strengthening Regulatory Agencies’ Performance Management Practices, GAO/GGD-00-10, Washington, D.C.: Oct. 28, 1999). We did not assess WHD’s implementation of its enforcement and compliance assistance plans. We conducted this performance audit from November 2012 to December 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Betty Ward-Zukerman (Assistant Director), David Barish, James Bennett, Ed Bodine, David Chrisinger, Sarah Cornetto, Justin Fisher, Joel Green, Ying Long, and Walter Vance made significant contributions to this report. Also contributing to the report were Jessica Botsford, Susanna Clark, Melinda Cordero, Ashley McCall, Sheila McCoy, Drew Nelson, Catherine Roark, Sabrina Streagle, Anjali Tekchandani, and Kimberly Walton.
The FLSA sets federal minimum wage and overtime pay requirements applicable to millions of U.S. workers and allows workers to sue employers for violating these requirements. Questions have been raised about the effect of FLSA lawsuits on employers and workers and about WHD's enforcement and compliance assistance efforts as the number of lawsuits has increased. This report (1) describes what is known about the number of FLSA lawsuits filed, and (2) examines how WHD plans its FLSA enforcement and compliance assistance efforts. To address these objectives, GAO analyzed federal district court data from fiscal years 1991 to 2012 and reviewed selected documents from a representative sample of lawsuits filed in federal district court in fiscal year 2012. GAO also reviewed DOL's planning and performance documents and interviewed DOL officials, as well as stakeholders, including federal judges, plaintiff and defense attorneys who specialize in FLSA cases, officials from organizations representing workers and employers, and academics about FLSA litigation trends and WHD's enforcement and compliance assistance efforts. Substantial increases occurred over the last decade in the number of civil lawsuits filed in federal district court alleging violations of the Fair Labor Standards Act of 1938, as amended (FLSA). Federal courts in most states experienced increases in the number of FLSA lawsuits filed and the percentage of total civil lawsuits filed that were FLSA cases, but large increases were concentrated in a few states, including Florida and New York. The number of workers involved in FLSA lawsuits is unknown because the courts do not collect data on the number of workers represented. Many factors may contribute to this general trend; however, the factor cited most often by stakeholders, including attorneys and judges, was attorneys' increased willingness to take on such cases. 
In fiscal year 2012, an estimated 97 percent of FLSA lawsuits were filed against private sector employers, often from the accommodations and food services industry, and 95 percent of the lawsuits filed included allegations of overtime violations. The Department of Labor's (DOL) Wage and Hour Division (WHD) has an annual process for planning how it will target its enforcement and compliance assistance resources to help prevent and identify potential FLSA violations, but it does not compile and analyze relevant data to help determine what guidance is needed, as recommended by best practices previously identified by GAO. In planning its enforcement efforts, WHD targets industries it determines have a high likelihood of FLSA violations. Although WHD does not analyze data on FLSA lawsuits when planning its enforcement efforts, it does use information on its receipt and investigation of complaints about possible FLSA violations. In developing its guidance on FLSA, WHD considers input from its regional offices, but it does not have a systematic approach that includes analyzing relevant data, nor does it have a routine, data-based process for assessing the adequacy of its guidance. For example, WHD does not analyze trends in the types of FLSA-related questions it receives. Since 2009, WHD has reduced the number of FLSA-related guidance documents it has published. According to plaintiff and defense attorneys GAO interviewed, more FLSA guidance from WHD would be helpful, such as guidance on how to determine whether certain types of workers are exempt from overtime pay and other requirements. GAO recommends that the Secretary of Labor direct the WHD Administrator to develop a systematic approach for identifying and considering areas of confusion that contribute to possible FLSA violations to help inform the development and assessment of its guidance. WHD agreed with the recommendation, and described its plans to address it.
The Postal Reorganization Act of 1970 created the independent U.S. Postal Service and authorized it to make arrangements with DOD regarding the performance of military postal services. Each military service managed its own mail program until 1980, when DOD and the U.S. Postal Service entered into an agreement for the joint provision of postal services for all branches of the armed forces. The agreement created the Military Postal Service Agency, which acts as an extension of the U.S. Postal Service beyond the boundaries of U.S. sovereignty and must provide full postal services, as nearly as practicable, for all DOD personnel overseas where there is no U.S. Postal Service available. The Military Postal Service Agency is DOD’s single manager for military postal functions. Although this joint service agency is organizationally located under the Army Adjutant General and depends on the Army for funding and staffing, the Under Secretary of Defense (Acquisition, Technology, and Logistics) is responsible for the agency’s policies and oversight. In October 2002, several months prior to U.S. and coalition troops crossing the border into Iraq, a joint planning conference was held at U.S. Central Command—the designated combatant command for Operation Iraqi Freedom. The U.S. Central Command hosted the conference, bringing together postal officials from all four military components, as well as the U.S. Postal Service and the Military Postal Service Agency. The conference led to the creation of a U.S. Central Command postal operating plan that assigned roles and responsibilities for all joint postal operations during the impending contingency. The DOD doctrine for joint military operations states that postal support for any contingency is coordinated by the combatant command in the region. The combatant commander appoints a single-service postal manager to direct, implement, and manage all postal operations in the joint theater. 
Since the Gulf War in 1991, the single-service manager for postal operations in the U.S. Central Command area of responsibility has been the Air Force’s 82nd Computer Support Squadron, currently assigned to the Air Force’s Air Combat Command. However, U.S. Central Command has the overriding responsibility for all operations in theater, including postal operations. The movement of mail from the United States to troops in the Iraqi theater follows several complex logistical steps. Letters and parcels with military addresses destined for Iraq, Kuwait, and Bahrain are sent to one of four International Mail Gateways—New York, San Francisco, Chicago, and Miami—for processing. According to Military Postal Service Agency data, 90 percent of all letters and parcels for Operation Iraqi Freedom were processed through New York. The U.S. Postal Service delivers letters to the International Service Center at John F. Kennedy Airport, in New York; parcels are delivered to the Postal Service’s International and Bulk Mail Center in New Jersey. After the letters and parcels are sorted, they are packaged, placed into containers, and transferred to Newark International Airport in New Jersey, where they are loaded onto airplanes for transport to the Iraqi theater. Unlike during Operations Desert Shield/Storm, when military planes operated by the Military Airlift Command transported much of the mail, a dedicated contractor aircraft carried mail during Operation Iraqi Freedom. During the next stage of mail movement, the mail planes fly to aerial mail terminals colocated at the international airports in Kuwait and Bahrain. Once landed, local airport ground handlers offload the mail containers from the planes and take them to an Air Force Mail Control Activity located at the airport, where the mail is staged for ground transportation. In Bahrain, mail for service members stationed in Iraq is processed at the U.S. 
Air Force Mail Control Activity; mail for service members located in Bahrain or aboard ships is processed at the U.S. Fleet Mail Center. For troops stationed in Iraq, mail is transferred onto a contracted cargo plane and flown directly into Iraq. In Kuwait, all mail is processed at the Joint Military Mail Terminal. Figure 1 illustrates two different examples of how military mail flows from Newark International Airport into the Iraqi theater. The Joint Military Mail Terminal, which handles the bulk of the letters and parcels entering the Iraqi theater, sorts the mail and arranges for its transportation—either by land or by air—to the various regions occupied by U.S. troops. Mail must be delivered to the unit level, designated by ZIP codes provided by the Military Postal Service Agency, before it can be distributed to individual service members. Figure 2 illustrates postal operations and a backlog of mail in February 2003 at the Joint Military Mail Terminal in Kuwait. According to the Military Postal Service Agency, more than 65 million pounds of letters and parcels were delivered to U.S. Central Command’s contingency area of responsibility during calendar year 2003 at a cost of nearly $150 million. The largest amount moved in a single month was April 2003, when over 11 million pounds of mail were delivered. This represents an average of just over 377,000 pounds per day—the equivalent of about forty 40-foot-long trailers full of mail. Figure 3 illustrates a convoy of trucks carrying 40-foot trailers of mail leaving the Kuwait Joint Military Mail Terminal. The timeliness of mail delivery to troops serving in Operation Iraqi Freedom cannot be accurately determined because DOD does not have a reliable, accurate system in place to measure timeliness. Data collected by military postal units using the Transit Time Information Standard System for Military Mail indicate that average delivery times met the Army wartime standard of 12 to 18 days. 
However, the methodology used to calculate and report these times consistently masks the actual time it takes for service members to receive mail, thus significantly understating actual delivery times. Test letters sent to individuals at military post offices also have produced unreliable data because many test letters were never returned, and letters were sent only to individuals located at military post offices. Military postal officials acknowledge that mail delivery to troops serving in Operation Iraqi Freedom was not timely. In addition, more than half of the 127 soldiers and marines we talked with during informal meetings at their home bases in the United States said they were dissatisfied with the timeliness of mail delivery while they were deployed. Morale suffered, as mail from home was many service members’ only link with friends and families. The Army’s wartime standard for first class mail delivery is 12 to 18 days from the point of origin to the individual service member. According to our analysis of data reported by the Transit Time Information Standard System for Military Mail, average postal transit times for letters and parcels sent to the Iraqi theater ranged from 11 to 14 days from February through September 2003. (See fig. 4.) These times represent the time it takes for a letter or parcel to go from its point of origin (a stateside post office) to a service member’s designated military post office, where he or she picks up mail. However, on the basis of our analysis, we found that the methodology used to calculate and report transit times significantly understates the actual time it takes for a service member to receive mail. According to Transit Time Information Standard System for Military Mail guidance, transit times should be reported by postal units in theater on the basis of a random sample of all incoming letters and all incoming packages arriving at military post offices in the Iraqi theater. 
The samples are then divided into three categories according to the date of the U.S. postmark: postmark less than 10 days old, postmark 11 to 15 days old, and postmark over 16 days old. Each of these three categories is given a weight value of 10, 15, and 16, respectively, which represent the break points of each category. The sample size (number of letters or packages sampled) in each category is then multiplied by the weight value and averaged to get the reported transit time. Consequently, regardless of the sample size or the actual number of days the items spent in transit, the resulting average will always be from 10 to 16 days. For example, a piece of mail that spent 100 days in transit would be counted in the same category and weighted the same as one that only took 16 days. Similarly, a piece of mail that spent 4 days in transit would be counted in the same category as one that took 10, and again weighted the same. (See table 1 for an example of how this methodology is used to calculate transit times.) This methodology is even less viable when one considers that during the peak of wartime operations, all mail destined for Iraq was held at the Joint Military Mail Terminal in Kuwait for 23 days (late March through mid-April) because of the rapid pace of troop movements. However, this 23-day hold on mail is not reflected in the transit time data, as the “weighted average” methodology masks the calculation, thus significantly understating actual transit time. Officials at the Military Postal Service Agency and at the Army’s 3rd Personnel Command—the Army entity providing in-theater postal support during Operation Iraqi Freedom—could not provide documentation that described this methodology. We reviewed the Transit Time Information Standard System for Military Mail guidance, the standard that explains and prescribes how military postal activities collect mail transit time data, and could not find any mention of this particular methodology. 
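The masking effect of this weighted-average calculation can be illustrated with a short sketch. The bucket weights (10, 15, and 16 days) are taken from the report; the exact treatment of postmarks of exactly 10 or 16 days is ambiguous in the source, and the sample transit times below are hypothetical.

```python
def reported_transit_time(transit_days):
    """Weighted-average transit time as reported in theater: each item
    falls into one of three postmark-age buckets, and the bucket's
    break-point value (10, 15, or 16 days) stands in for its actual age."""
    def weight(days):
        if days <= 10:       # "less than 10 days" bucket (boundary assumed)
            return 10
        elif days <= 15:     # "11 to 15 days" bucket
            return 15
        return 16            # "over 16 days" bucket (boundary assumed)
    weights = [weight(d) for d in transit_days]
    return sum(weights) / len(weights)

# Hypothetical samples: whether mail took 4 days or 100, the reported
# average can never fall below 10 or rise above 16.
fast = [4, 5, 6, 7]          # actual mean: 5.5 days
slow = [30, 45, 60, 100]     # actual mean: 58.75 days
print(reported_transit_time(fast))  # → 10.0
print(reported_transit_time(slow))  # → 16.0
```

Because every item is replaced by its bucket's break point, the reported average is confined to the 10-to-16-day band no matter how the mail actually performed, which is how even a 23-day hold could disappear from the statistics.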
Only 3rd Personnel Command, the source of the transit time data, was aware that the transit times were being reported in this manner. According to a 3rd Personnel Command official, it had always been done this way. We discussed the methodology with Military Postal Service Agency officials. While they acknowledge that the Transit Time Information Standard System is the official tracking system, they were not aware that this particular methodology was being employed, and moreover could not tell us why it was being used. In order to collect transit times on retrograde mail (which the Transit Time Information Standard System for Military Mail does not collect) as well as prograde mail, the Military Postal Service Agency sent test letters to individuals located at military post offices within the contingency area of responsibility. The letters contained instructions asking the recipient to mark the date received and then return them through the military postal system. The test letter data—derived from letters sent by the Military Postal Service Agency from February through September 2003—indicate that, on average, prograde transit times met the Army standard of 12 to 18 days during all but 1 month. The only exception was April 2003, when average transit time peaked at 19 days. (See fig. 5.) However, this average obscures the fact that nearly 25 percent of the test letters took more than 18 days to be delivered to the Iraqi theater. Retrograde test letters were not as timely, failing to meet the 12- to 18-day standard during 2 months. In addition, the Military Postal Service Agency initially only sent test letters to individuals at military post offices in Kuwait and Bahrain. It was not until August 2003 that test letters were sent to locations in Iraq as well. Therefore, the aforementioned 23-day hold on mail bound for units in Iraq would not have affected transit time data as reported by test letters. 
Information based on test letters sent to individuals located at military post offices is not a complete measure of transit times because many letters were never returned. Between February and September 2003, the Military Postal Service Agency sent more than 1,700 test letters to service members at military post offices in various locations in Kuwait, Bahrain, and Iraq. Based on our analysis of the agency’s data, we found that only 59 percent (1,028) of the letters were returned. In addition, of the more than 700 letters that failed to return, we determined that 25 percent had been sent to individuals located at post offices in or near the northern Iraqi cities of Kirkuk and Mosul. However, only one letter from each of these locations was ever returned out of about 180 letters mailed. Unfortunately, there is no way of telling whether these or any of the other unreturned test letters were ever actually received. There are other drawbacks to this test letter approach. For example, it does not accurately measure the transit time from point of origin to the individual service member. Test letters were addressed only to individuals located at military post offices, and not to service members located in forward-deployed combat units. It could take several additional days for service members deployed elsewhere to receive mail from such locations. Also, this approach used only letters, not parcels, and parcels comprised the bulk of mail sent into the theater. In the absence of reliable data to describe timeliness, we held discussions with a non-representative sample of 127 soldiers and marines who served in theater, and who were selected prior to our visits to Fort Stewart, Georgia, and Camp Pendleton, California. Almost 60 percent of these service members indicated that they were dissatisfied with the timeliness of mail delivery. 
Nearly half said that, after arriving in theater, they waited more than 4 weeks to get their mail, and many commented that some mail took as long as 4 months to work its way through the system. When asked, about half the troops we interviewed also indicated that they were not aware of command decisions to purposefully halt mail service. In addition, nearly 80 percent said that they were aware of mail that was sent to them but that they did not receive while they were deployed. Clearly, the non-receipt of mail became a concern for friends and family back home. Many service members told us that they did not receive certain pieces of mail until they returned to their stateside home installations. For example, starting in June 2003, Camp Pendleton, California, received about 100,000 pounds of military mail that had been returned undelivered and unopened to the U.S. Postal Service gateway in New York—at a cost of about $93,000. Upon receipt in New York, the mail was sent by rail to the U.S. Postal Service gateway in San Francisco and then put in trailers and trucked to Camp Pendleton. Extra space was needed to accommodate the returned mail, including two tents staged outside the main post office for overflow. Many of the returned packages were damaged, and rewrap procedures had to be established for packages that were all but destroyed from being mishandled or handled too often. (See fig. 6.) Postal officials at Camp Pendleton did not clear out and deliver all of this returned mail for the better part of 3 months, or until the latter part of August 2003. According to soldiers we interviewed, one of the issues that hampered mail delivery was changing deployment information. Mail delivery to the Army’s 3rd Infantry Division was stopped when word was received that the division was about to redeploy. 
When this plan changed and the division did not redeploy, mail started to flow again. The division was told several times that it would redeploy, and each time it did not; whenever deployment was thought to be imminent, mail delivery was stopped. This created a backlog. When the 3rd Infantry Division finally did redeploy, its 1st Brigade stayed behind and was assigned to the 1st Armored Division. But this information was not disseminated, and the 1st Brigade received no more mail while in theater. Despite differences in operational theaters and an effort by postal planners to consider experiences from Operations Desert Shield/Storm in planning for Operation Iraqi Freedom, many of the same problems continued to hamper postal operations during Operation Iraqi Freedom. These problems include (1) difficulty with implementing joint-service postal operations, (2) postal personnel who were inadequately trained and initially scarce because of late deployments, and (3) inadequate postal facilities, material-handling equipment, and transportation assets to handle the mail surge. During January 1991, at the height of Operations Desert Shield/Storm, more than 500,000 U.S. troops supported a ground war that lasted a little more than 4 days. These troops were concentrated in camps located in Kuwait and Saudi Arabia near the borders of Kuwait and Iraq. In contrast, Operation Iraqi Freedom involved about half the number of troops (about 250,000), dispersed over a larger geographical area (all of Kuwait and Iraq), and involved a ground war that lasted about 42 days. This greater dispersion of troops for a longer period of time increased the logistical requirements for delivering the mail. Additionally, although the ground war for Operation Iraqi Freedom is officially over, there is an ongoing requirement to provide timely and efficient postal support for a large number of personnel still in theater, fighting the global war on terrorism. 
Several key planning assumptions used in the creation of U.S. Central Command’s postal plan for Operation Iraqi Freedom proved problematic. The embargo on Any Service Member mail produced unintended negative results; mail restrictions for the first 30 days in theater were never enacted; and the volume of mail was grossly underestimated. Table 2 summarizes these key assumptions, the actions taken, and the consequences of those actions. Because Any Service Member mail caused delays in the delivery of other personal mail and stressed the logistical system during Operations Desert Shield/Storm, postal plans for Operation Iraqi Freedom placed an embargo on this type of mail. Defense officials also discontinued Any Service Member mail for security reasons following the anthrax scares of 2001. During Operations Desert Shield/Storm, Any Service Member mail acted as a morale booster because it provided mail to troops who might not have received mail otherwise. From an operations standpoint, this mail could be separated and set aside until individually addressed mail had been processed. However, the volume of Any Service Member mail taxed transportation and storage capabilities. In order to prevent similar problems during Operation Iraqi Freedom, planners placed an embargo on Any Service Member mail. Despite this, individuals and organizations sending mail developed “workarounds” that overwhelmed the postal system and contributed to a slowdown in service. Instead of addressing mail to “Any Service Member,” senders addressed their letters and parcels to specific individuals, enclosing a request that they share the mail with other troops. Because this mail was addressed to specific individuals, postal personnel had to treat it as regular mail and could not separate it and set it aside for later processing. These “workarounds” added to the workload at every stage in the mail delivery process. 
For example, when we visited the Joint Military Postal Activity in San Francisco, California, we observed one of these “workaround” shipments. It consisted of approximately 40 boxes, each weighing about 8 to 10 pounds. They were all addressed to the same recipient and came from a charitable service organization. This one shipment required its own handcart and almost one-quarter of an airline-shipping container. A second key assumption that did not have the intended result involved mail restrictions. Drawing from the lessons learned from Operations Desert Shield/Storm, postal planners for Operation Iraqi Freedom assumed that mail would be restricted to personal first-class letters or sound/video recordings that weighed 13 ounces or less for the first 30 days of operations. At the beginning of Operation Iraqi Freedom, Military Postal Service Agency and Army postal officials in theater asked that these restrictions be imposed. However, U.S. Central Command officials did not approve the request because, according to the U.S. Central Command postal planner, they believed that sufficient postal infrastructure was in place to handle the mail. As a result, the mail continued to flow into theater, overtaxing the limited mail handlers and facilities in place and creating huge backlogs of mail. Underestimating the volume of mail was the third planning assumption that created problems for the mail system. Postal planners in Operation Iraqi Freedom assumed that the volume of mail per person would be less than it actually was. Based on data from previous contingency operations, they estimated that there would be between 0.5 and 1.5 pounds of first-class mail per person per day. Instead, military officials estimate that the initial surge of mail averaged closer to 5 pounds per day, overburdening the developing mail system.
According to the Military Postal Service Agency and Air Force Postal Policy and Operations officials we interviewed, of the total volume of mail shipped, more than 80 percent consisted of parcels and the rest consisted of flat mail. The mail volume per soldier was much higher than that seen in Operations Desert Shield/Storm. For example, mail volume reached a monthly peak of 10 million pounds in Operations Desert Shield/Storm for about 500,000 troops, compared with a monthly peak of 11 million pounds in Operation Iraqi Freedom for half as many troops. Consequently, during Operation Iraqi Freedom, the facilities and manpower needed to move this higher volume of mail were not initially available in theater. In addition to problematic postal planning assumptions, the single-service manager concept was not implemented to ensure the management of joint postal operations. In both Operation Iraqi Freedom and Operations Desert Shield/Storm, the single-service manager concept did not perform as planned. The single-service manager is assigned by the combatant commander to be the manager and point of contact on all postal issues in the area of responsibility. The single-service manager is normally appointed from one of the military components, generally the component with the most postal resources in theater. During Operations Desert Shield/Storm, the single-service manager was the same for both the peacetime and the contingency areas of responsibility. According to lessons learned from Operations Desert Shield/Storm, the use of the peacetime single-service manager was unsuccessful because of a lack of coordination and cooperation between the components. To overcome this problem, U.S. Central Command, through its operations plan, directed the establishment of a Joint Postal Center, to be manned by representatives from all components, to oversee all mail operations in the contingency area and assume the duties and responsibilities of the single-service manager.
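The per-soldier comparison implied by these figures can be made explicit. The following sketch uses only the troop counts and peak monthly volumes cited above; it is an illustrative calculation, not an official DOD computation:

```python
# Rough per-soldier comparison of peak monthly mail volume,
# using the figures cited in this report.

def per_soldier_lbs(monthly_peak_lbs: float, troops: int) -> float:
    """Peak monthly mail volume per soldier, in pounds."""
    return monthly_peak_lbs / troops

desert_storm = per_soldier_lbs(10_000_000, 500_000)   # Operations Desert Shield/Storm
iraqi_freedom = per_soldier_lbs(11_000_000, 250_000)  # Operation Iraqi Freedom

print(f"Desert Shield/Storm: {desert_storm:.0f} lbs per soldier per month")
print(f"Iraqi Freedom:       {iraqi_freedom:.0f} lbs per soldier per month")
print(f"Ratio: {iraqi_freedom / desert_storm:.1f}x")
```

At roughly 20 pounds per soldier per month in 1991 versus about 44 in 2003, the peak per-soldier volume more than doubled even though total troop strength was halved.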
The operations plan states that a Joint Postal Center be established and that the peacetime single-service manager for the area of responsibility provide postal personnel, resources, and equipment to support the Joint Postal Center as required; continue to oversee military postal operations in the area of responsibility not in the contingency area; and relinquish policy and oversight responsibilities of postal operations in the contingency area of responsibility to the Joint Postal Center once it is operational. U.S. Central Command postal officials told us that neither the Joint Postal Center nor the single-service manager performed according to the approved plan or as expected. The Joint Postal Center did not fully assume the role of the in-theater single-service manager, as it arrived late in theater, was not supported by all of the components, and was undermanned. In the interim, the peacetime single-service manager for U.S. Central Command did not have adequate personnel to assume the role for the contingency area of responsibility. According to representatives from the designated single-service manager, they were unable to provide full-time staff in theater and could not adequately manage operations from their home station in the United States. By the time the Joint Postal Center’s personnel began arriving in theater in February 2003, the different components had already been receiving large quantities of mail and had established their own postal operations. In January 2003 the Commander of the Army’s 3rd Personnel Command assumed responsibility for postal operations supporting the combined land forces (Army and Marines) and was making decisions that affected the flow of mail for the theater, a responsibility the Army was not resourced to assume. In both Operations Desert Shield/Storm and Operation Iraqi Freedom, postal units lacked sufficient training. 
According to lessons learned from Operations Desert Shield/Storm, military postal operations need to be staffed with trained personnel who are familiar with postal operations and the movement of mail. Similar problems surfaced during Operation Iraqi Freedom. Military postal officials told us that Army postal personnel arriving in theater were largely untrained in establishing and managing military postal operations, as they are traditionally not tasked for this type of duty. Usually, Army postal personnel are tasked to support the daily operations of military post offices. However, even this type of training was lacking. Officials attributed this lack of training to a number of different factors. One factor is that most of the Army’s postal units are made up of Army Reserve soldiers, who do not have an opportunity to train in postal facilities during peacetime because there are no military post offices in the United States. Consequently, if a reserve unit wants to train in a military post office, it has to deploy overseas for its annual training. The second factor is that active duty Army postal personnel do not have an opportunity to conduct realistic postal operations during routine training exercises. The third factor is that, unlike the other services, the active duty Army does not have a postal career track. This means that, even if active duty soldiers have attended postal training, they may never work in a postal position. Moreover, during both Operations Desert Shield/Storm and Operation Iraqi Freedom, postal units were initially scarce because of late deployments. Units should have deployed early enough to establish an adequate postal infrastructure in advance of the mail. During Operation Iraqi Freedom, despite plans to deploy Army postal units early, they arrived in theater after most combat troops. Military postal officials told us that other units had a higher priority for airlift into the Iraqi theater.
The operations plan specified that the postal personnel needed to handle mail would deploy within the first 10 days of the build-up for the contingency. Even though some troops mobilized according to the original plan, our analysis of data received from the Army’s 3rd Personnel Command shows that some of these troops were delayed at their mobilization stations for up to 130 days (with the average delay being 69 days) before deploying. (See fig. 7.) Postal units did not begin arriving in theater until March 2003. Consequently, early mail operations were conducted with insufficient postal troops to carry out the mission. These delays ultimately affected the timely establishment of postal operations. Inadequate postal facilities hampered postal operations in theater during both Operations Desert Shield/Storm and Operation Iraqi Freedom. As the theater grew during Operations Desert Shield/Storm, the facilities proved to be inadequate, and additional aerial mail terminals had to be established in various parts of Saudi Arabia to handle the increasing volume of mail. Although some military postal facilities set up to serve troops during and after Operations Desert Shield/Storm were still in operation in Kuwait and Bahrain, these facilities were inadequate to service the influx of 250,000 troops that began arriving in January 2003. Key postal infrastructure elements were needed to receive the increased volume of mail and establish a joint mail terminal in Kuwait. At the beginning of Operation Iraqi Freedom, the Fleet Mail Center in Bahrain processed mail for all the services even though it did not have the staff or equipment to handle the surge in volume. Because of the increased workload, it took about 5 to 7 extra days for the mail to be delivered. As the theater matured, a joint military mail terminal had to be established in Kuwait to relieve the Fleet Mail Center of Army and Air Force mail and to augment existing postal facilities at Camp Doha in Kuwait.
Postal officials told us that even with this additional facility, the biggest hindrance to processing mail was a lack of sufficient workspace. In addition, as troops began to occupy parts of Iraq in the spring of 2003, additional mail facilities and transportation assets were set up to handle incoming and outgoing mail in Baghdad and other cities and towns in Iraq. The lack of heavy material-handling equipment during the early stages of both conflicts also hampered the processing of mail. Lessons learned from Operations Desert Shield/Storm recommended that modern material-handling equipment be provided to postal units. Operation Iraqi Freedom postal officials also underscored the need to have modern and varied types of material-handling equipment, such as forklifts and rough-terrain cargo handlers, available to support postal facilities. (See fig. 8.) Postal workers did not have such equipment in the early days of Operation Iraqi Freedom, so they had to manually break down the containers and sort thousands of pounds of mail per day by hand, adding to the time it took to process the mail for delivery. According to military postal officials, units did not have these types of heavy equipment because their tables of organization and equipment either were not updated to reflect the need or, if updated, were not properly resourced. In addition to a lack of heavy material-handling equipment, postal units did not have the appropriate postal equipment and supplies to perform routine operations. In lessons learned from Operations Desert Shield/Storm, postal officials recommended that postal units regularly review their equipment and supply needs and assemble prepackaged “kits” for contingency postal operations. They also recommended that, at the earliest indication of a contingency, an advance team of postal experts deploy into theater to determine what postal equipment and supplies are required.
Despite these recommendations, postal units continued to arrive in theater inadequately equipped to conduct postal operations during Operation Iraqi Freedom. Postal officials at all levels told us that the lists of authorized postal equipment, such as meters and scales, were outdated and did not reflect the correct types or quantities of equipment needed for modern postal operations. In addition, many deployed units did not have access to the full suite of communications equipment, such as secure radios, cellular and satellite telephones, and “landlines” for their facilities. As a result, postal units were unable to coordinate mail pickup and truck convoys or to communicate with other units. Moving mail once it arrived in theater was a challenge because postal units were not equipped with vehicles to transport the mail. The operations plan for Operation Iraqi Freedom made no special provisions for ground transportation of mail. It assumed that mail would move on existing commercial trucks supplemented by military trucks as needed. Postal units at all levels of command (e.g., company through corps) had to compete with other units for vehicles or contract for trucks through local sources. Military postal officials stated that, during Operation Iraqi Freedom, trucks were scarce in theater and were carrying mostly ammunition, water, and food. To minimize delays in mail delivery, postal officials in January 2003 arranged with a U.S. government contractor to provide 72 trucks and drivers to deliver the mail from the Joint Military Mail Terminal to military post offices in Kuwait and Iraq. Although it took the contractor several more months to obtain all the trucks, this action was a great help, according to U.S. Central Command postal units serving in theater at that time. As a result of lessons learned from the first Gulf conflict, the Military Postal Service Agency did implement one strategy during Operation Iraqi Freedom that proved to be successful.
At the beginning of Operations Desert Shield/Storm, mail was initially transported overseas by commercial airlines. Because commercial U.S. carriers reduced the number of flights into Saudi Arabia, postal officials decided to switch exclusively to dedicated military flights to transport mail from the United States to the theater. Similarly, at the beginning of Operation Iraqi Freedom, mail backlogs occurred with existing commercial air service. However, in contrast to Operations Desert Shield/Storm, military postal officials decided to continue using commercial airlines but arranged with the U.S. Postal Service to contract for dedicated postal flights from the United States to Bahrain and Kuwait. According to Military Postal Service Agency officials, this resulted in much more reliable air delivery of mail to the theater. Although military postal officials and others have begun to identify solutions to some of the long-standing postal problems seen again during Operation Iraqi Freedom, no single entity has been officially tasked to resolve these issues. Despite early efforts made by the Military Postal Service Agency in this regard, this agency does not have the authority to ensure that these problems are jointly addressed and resolved prior to the next military contingency. The identification of solutions to long-standing postal problems has begun in a piecemeal fashion. At this time, no single entity has officially been designated to collect and consolidate solutions to long-standing mail delivery problems. After past contingencies, the Joint Staff’s Joint Center for Lessons Learned gathered and consolidated the lessons learned and made them available to the field. 
We spoke to representatives of the military Joint Center for Operational Analysis, formerly the Joint Center for Lessons Learned, to determine whether this process would apply to Operation Iraqi Freedom. They informed us that military postal operations have not been identified as an issue area for lessons learned, and they do not anticipate that postal operations will become one. Several individual members of entities such as the U.S. Army Reserve Command, U.S. Central Command, and the Coalition Forces Land Component Command have prepared memoranda outlining issues and lessons learned for postal operations during Operation Iraqi Freedom. We summarized the memoranda, after action reports, and comments regarding solutions to postal problems that we collected during our meetings with dozens of key military postal officials. Key military postal officials emphasized that these postal issues must be addressed to avoid a repetition of the same postal problems in future contingencies. These issues represent many long-standing problems that can be directly traced back to Operations Desert Shield/Storm. The issues identified include the following: improve joint postal planning and ensure the execution of the postal operations plan; anticipate the levels of support and types of activities needed, and deploy postal units early to reduce or eliminate backlogs during the build-up; update tables of organization and equipment for postal units to reflect what they actually need in terms of people and equipment to conduct postal operations; develop peacetime training programs to prepare postal units for the missions they will be required to perform during contingency operations; and review the command and control of postal units to determine if the postal function is in the right place and whether one organization should be responsible to both develop and execute policy.
In October 2003 the Military Postal Service Agency hosted a joint postal conference to discuss postal problems with dozens of key postal participants in Operation Iraqi Freedom. It is currently in the process of developing a final report that will outline plans to resolve issues in the areas of organization, supplies, planning, training, transportation, “Any Service Member” mail, routing and labeling, and transit time data collection. Although the agency has taken this initiative, it has limited authority and cannot direct the services to jointly address the problems, according to the Executive Director of the Military Postal Service Agency. Military Postal Service Agency officials describe their role as primarily the single point of contact between the military and the U.S. Postal Service. Service components and the Military Postal Service Agency have taken some initial steps in employing alternative mail delivery and tracking systems. For example, the Marine Corps is currently testing an electronic mail system for getting mail delivered to forward deployed troops. In addition, the Military Postal Service Agency has taken steps to solve a long-standing problem regarding transit time data. The agency has developed a mail bar-coding system that could be used to more accurately track the transit time, but it has not yet been successfully deployed for use by ground troops because of connectivity problems. The Military Origin Destination Information System, modeled after the system that the U.S. Postal Service employs, can be used to track transit times of bags of letters and small packages as well as larger parcels. By bar coding these items and scanning them prior to mailing, and then scanning them once they reach their destination, transit times can be easily calculated. According to officials from the Military Postal Service Agency, the Navy is currently using this system with some success. 
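The bar-code approach described above reduces, in essence, to subtracting an origin scan timestamp from a destination scan timestamp for each bagged or individual item. A minimal sketch of that calculation, assuming hypothetical barcode values and scan dates (the field layout is illustrative and not drawn from the Military Origin Destination Information System itself):

```python
from datetime import datetime
from statistics import mean

# Illustrative scan records: (item barcode, origin scan date, destination scan date).
# The barcodes and dates are hypothetical examples, not real tracking data.
scans = [
    ("MPS0001", "2003-04-01", "2003-04-13"),
    ("MPS0002", "2003-04-02", "2003-04-20"),
    ("MPS0003", "2003-04-03", "2003-04-15"),
]

def transit_days(origin: str, destination: str) -> int:
    """Days elapsed between the origin and destination scans."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(destination, fmt) - datetime.strptime(origin, fmt)).days

times = {barcode: transit_days(o, d) for barcode, o, d in scans}
average = mean(times.values())

print(times)  # per-item transit times in days
print(f"Average transit time: {average:.1f} days")
```

Because each item is scanned individually, this method yields transit times per mail piece rather than the aggregate averages the test-letter methodology produced, which is what makes it better suited to measuring delivery against the 12- to 18-day wartime standard.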
However, the system requires a certain level of connectivity with the Internet, which troops in the field lacked during Operation Iraqi Freedom. Wireless networks may be necessary to connect all military post offices to the Internet, but such networks have not been practical on the battlefield. In addition, this system shares a shortcoming with the test letters in that transit times are not tracked to the level of the individual service member. The timely delivery of mail to troops overseas involved in contingency operations is an important mechanism to boost morale among service members and their families and friends. Without action to resolve the identified issues in planning, building, and operating a joint postal system, mail delivery will continue to suffer in future contingency operations, as witnessed by the repetition of delayed mail delivery from one Gulf war to the next. Emphasis needs to be placed on establishing joint postal responsibilities and the subsequent execution of those duties. Past experience has shown that postal operations have not received command attention or been designated a priority. Establishing the needs for postal operations early in the process and dedicating the appropriate resources are crucial for providing the timely and efficient delivery of mail. While our work focused only on Operation Iraqi Freedom, we believe many of these same lessons apply to other combatant commands and theaters of operation as well. Without clear and accurate data to measure the timeliness of mail to U.S. troops overseas during contingency operations, no meaningful assessment can be made of the quality of mail service. Therefore, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to work with the Army Adjutant General to improve the quality of transit time data for postal operations by implementing a system that will accurately track, calculate, and report postal transit times.
In the absence of a clear plan for resolving recurring postal problems during contingency operations, we recommend that the Under Secretary of Defense (Acquisition, Technology, and Logistics) designate, direct, and authorize an appropriate DOD agency, unit, or command to determine what long-standing postal issues need to be resolved, and to develop a specific course of action and timetable for their resolution, including appropriate follow-up to ensure that the problems have been fixed. Specifically, these actions should address the issues highlighted in this report, such as the following: strengthen the joint postal planning function and specify a body to ensure the implementation of postal operations in theater; deploy properly trained and equipped postal troops into theater prior to the mail build-up; and dedicate adequate postal facilities, heavy equipment, and transportation assets for postal operations. An important part of addressing these long-standing problems is to share the results of these lessons learned from Operation Iraqi Freedom with all of the combatant commands to ensure that future contingencies do not repeat these problems. In written comments on a draft of this report, DOD stated that it fully concurred with our recommendations and has already initiated certain actions. In response to our first recommendation, DOD has directed the Military Postal Service Agency to implement an automated system that will accurately track, calculate, and report postal transit times all the way to troop delivery. In addition, the Military Postal Service Agency is also reviewing manual transit time collection and reporting methods for use when automated collection is not possible. 
In response to our second recommendation, the Military Postal Service Agency will facilitate and track the corrective actions taken by the Unified Commands, services, service components, and the Military Postal Service Agency, itself, in response to the recommendations developed in the Joint Services After Action Report produced at the Joint Service Postal Conference held in October 2003. DOD’s comments are reprinted in their entirety in appendix II. DOD also provided a number of technical and clarifying comments, which we have incorporated where appropriate. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Executive Director of the Military Postal Service Agency; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (757) 552-8100. Key contributors to this report were Laura Durland, Karen Kemper, David Keefer, Timothy Burke, Ann Borseth, Madelon Savaides, and Nancy Benco. To address overall issues of military mail delivery to and from the Gulf region and determine responsibilities for mail service, we obtained and reviewed Department of Defense (DOD) guidance and operations plans for mail delivery to troops serving in a contingency area, and specifically during Operation Iraqi Freedom. We then met with officials from the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics); Joint Staff for Manpower and Personnel; and U.S. Central Command to discuss these policies. Our review focused on postal operations as they applied to U.S. troops deployed to the countries of Bahrain, Kuwait, and Iraq during the buildup for Operation Iraqi Freedom, the operation itself, and the ongoing military operations in Iraq (January through December 2003). 
To address the issue of the timeliness of mail service to and from troops serving in Operation Iraqi Freedom, we collected, analyzed, and assessed the reliability of transit time data from the Army’s 3rd Personnel Command and the Military Postal Service Agency. We discussed the data with military postal officials to ensure that we were interpreting them correctly, especially the methodology used to report transit times from the Transit Time Information Standard System for Military Mail. Within our analysis, we determined that the majority of transit time data we received was for Army mail. Some data were from the Air Force and Marine Corps, but they were not separated out. We did not collect transit time data from the Navy, as the Navy’s postal operations run separately from, and independently of, those of the other services. Some data required sorting to eliminate irrelevant data elements and to display them on a monthly basis. To determine the effect that the timeliness of mail service had on troops serving in the contingency area, we designed a data collection instrument and then conducted discussion groups with and collected data from a non-representative sample of 127 officers and enlisted personnel—91 from the Army’s 3rd Infantry Division (stationed at Fort Stewart, Georgia) and 36 from the 1st Marine Expeditionary Force (stationed at Camp Pendleton, California). The data collected from this non-representative sample cannot be projected for the entire universe of troops deployed. At each location, the GAO “point of contact” selected a non-representative sample of military personnel who had recently returned from a deployment in support of Operation Iraqi Freedom. The sample size (127) is simply the total number of the soldiers and marines who were available to meet with us during our visits.
We summarized the data we collected from the soldiers and marines, determined percentages of individual responses for each question, and gathered their personal accounts regarding mail delivery problems. To address how mail issues and problems experienced during Operation Iraqi Freedom compare with those experienced during Operations Desert Shield/Storm, we obtained and analyzed lessons learned from the first Persian Gulf War and compared these with any available reports prepared by the various offices and commands we visited regarding the postal problems experienced during Operation Iraqi Freedom. We met with numerous officials and personnel from the U.S. Army Reserve Command, the Military Postal Service Agency, the U.S. Postal Service, U.S. Central Command, the Army’s 3rd Personnel Command, U.S. Army Central Command, Air Force Air Combat Command, U.S. Marine Corps, Joint Military Mail Terminal in Kuwait, Fleet Mail Center in Bahrain, and Joint Military Mail Terminal in Iraq to discuss the similarities and differences of the postal problems still being encountered and what actions had been taken to resolve any previously identified problems. To assess efforts to resolve military postal problems for future contingencies, we collected any available after action reports and plans for addressing military postal problems. We attended the Joint Postal Conference—hosted by the Military Postal Service Agency in October 2003—which addressed postal problems encountered during Operation Iraqi Freedom. During the conference, we spoke with military postal officials who had direct responsibility for various aspects of mail delivery to and from the Iraqi theater, and collected pertinent documentation. We summarized information regarding key postal issues that must be addressed to avoid their repetition in the future. We spoke with officials at the Joint Forces Command who are in charge of collecting lessons learned for Operation Iraqi Freedom. 
We also spoke with the Army Adjutant General in charge of the Military Postal Service Agency to assess the agency’s plans for taking actions to mitigate those problems. We then met with a key official from the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics), the responsible body for military postal policy and oversight, to discuss our findings and to determine what entity is accountable for resolving these issues. We conducted our review from August 2003 through March 2004 in accordance with generally accepted government auditing standards.
Mail is a morale booster for troops fighting overseas and for their families at home. More than 65 million pounds of letters and parcels were delivered to troops serving in Operation Iraqi Freedom in 2003, and problems with prompt and reliable mail delivery surfaced early in the conflict. Congress and the White House forwarded more than 300 inquiries about mail delivery problems to military postal officials. GAO was directed to review mail delivery to troops stationed in the Middle East. In this report, GAO assesses (1) the timeliness of mail delivery to and from troops in Operation Iraqi Freedom, (2) how mail delivery issues and problems during this operation compared with those experienced during Operations Desert Shield/Storm in 1991, and (3) efforts to identify actions to resolve problems in establishing mail operations for future contingencies. The timeliness of mail delivery to troops serving in Operation Iraqi Freedom cannot be accurately assessed because the Department of Defense (DOD) does not have a reliable, accurate system in place to measure timeliness. In general, DOD's transit time and test letter data show that mail delivery fell within the current wartime standard of 12 to 18 days. However, the methodology used to calculate transit times significantly understated actual delivery times. In the absence of reliable data, GAO conducted discussion groups with a non-representative sample of 127 service members who served in-theater. More than half reported they were dissatisfied with mail delivery, underscoring the negative impact that delayed mail can have on troop morale.
Despite differences in operational theaters and efforts by DOD postal planners to incorporate Operations Desert Shield/Storm experiences into planning for Operation Iraqi Freedom, postal operations faced many of the same problems: difficulty with conducting joint-service mail operations; postal personnel who were inadequately trained and initially scarce owing to late deployments; and inadequate postal facilities, equipment, and transportation. The operations plan created for joint-service mail delivery rested on certain assumptions key to its success, but these assumptions either produced unforeseen consequences or never materialized. Also, plans for a Joint Postal Center were not fully put in place. One lesson learned from 1991 was carried out with success during Operation Iraqi Freedom: mail was transported overseas by dedicated contractor airlifts rather than by military aircraft. DOD has not officially tasked any entity to resolve the long-standing postal problems experienced during contingency operations. Moreover, the Military Postal Service Agency does not have the authority to ensure that these problems are addressed jointly. This agency and the military services, however, have taken some steps toward tackling these issues.
HUD housing serves populations that include persons in a position to increase their self-sufficiency and those who need long-term support (for example, the elderly and persons with severe disabilities). To assist those who can improve their self-sufficiency, HUD permits PHAs to allocate space in their public housing to offer training programs and information sessions. It also awards grants to PHAs to encourage them to work with local social service providers in offering education, job training, and other supportive services to residents of public housing and recipients of vouchers. The purpose of these programs is to increase the earned incomes of residents and reduce their reliance on cash assistance and housing assistance. For this report, we examined five HUD programs that fund self-sufficiency efforts in whole or in part. Three programs award grants solely to support self-sufficiency activities: HCV FSS, PH FSS, and ROSS SC. FSS was authorized in 1990 to help families receiving vouchers or living in public housing become self-sufficient through referrals to education, training, and other supportive services and case management. PHAs use grant funds to pay program coordinators who link residents to the supportive services they need to achieve self-sufficiency. Families in either FSS program sign a contract with their PHA requiring that all family members become independent of cash welfare assistance and that the head of the family seek and maintain employment. Both programs feature case management, referrals to supportive services, and an escrow account that accumulates balances, or savings, for the tenant based on increases in tenant contributions toward rent. The ROSS SC program provides funding to hire service coordinators to assess the needs of public housing residents and coordinate available resources in the community to meet those needs. For the FSS programs, the escrow accounts are incentives to increase work effort and earnings.
Specifically, when participants have to pay a higher rent after their earned income increases, the PHA calculates an escrow credit that is deposited each month into an interest-bearing account (see fig. 1). Families that successfully complete their contract for either FSS program receive their accrued escrow funds. In general, in order to complete an FSS contract, the family head must certify that no member of the family has received welfare for at least one year, and the family head must be employed. HUD has two other programs—MTW and HOPE VI Revitalization Grants (HOPE VI)—that allow participating PHAs to determine how they will encourage self-sufficiency. The purposes of HUD’s MTW demonstration program include providing PHAs the flexibility to design and test policies that give incentives to families with children to become economically self-sufficient. For example, PHAs in MTW can alter eligibility and rent policies. Through the HOPE VI program, participating PHAs (HOPE VI agencies) can use HOPE VI grants to demolish, rehabilitate, or replace severely distressed public housing and also provide community and supportive services to help residents achieve self-sufficiency. HOPE VI agencies can spend up to 15 percent of their revitalization grant funds for community and supportive services activities, which HUD defines as any activity designed to promote upward mobility, self-sufficiency, and improved quality of life for residents of the public housing project involved. For the two FSS programs, HUD regulations state that self-sufficiency means that a family is no longer receiving housing assistance or welfare assistance. However, the regulations also indicate that while achieving self-sufficiency is an FSS objective, no longer receiving housing assistance is not a requirement for completing the program and receiving escrow funds. HUD does not define self-sufficiency for the other three programs we reviewed. 
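The escrow mechanism described above can be illustrated with a simplified calculation. This is only a sketch, not HUD's exact regulatory formula (which includes income-tier caps and earned-income tests); the 30 percent rent share and the dollar amounts are illustrative assumptions:

```python
def monthly_escrow_credit(baseline_income, current_income, rent_share=0.30):
    """Simplified FSS escrow credit: the increase in the tenant's monthly
    rent contribution (assumed here to be 30% of adjusted monthly income)
    relative to the family's income when it entered FSS. Illustrative only;
    HUD's actual rule includes caps and other adjustments."""
    increase = rent_share * current_income - rent_share * baseline_income
    return round(max(0.0, increase), 2)

# A family whose adjusted monthly income rises from $1,000 to $1,500
# would accrue $150 per month in escrow under this simplification;
# if income falls, no credit accrues.
credit = monthly_escrow_credit(1000, 1500)
```

Over a 5-year contract, such monthly credits (plus interest) would accumulate into the lump sum disbursed when the family completes its contract of participation.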
Thus, HUD does not have a uniform measure of self-sufficiency across the programs. HUD officials with responsibility for administering all five programs noted that a single definition of self-sufficiency would not be useful, particularly because levels of self-sufficiency could differ for different groups. However, officials noted that the concept is commonly understood to indicate that a family does not rely on government programs that are intended to address poverty. For residents who are elderly or have disabilities, services should help improve living conditions and enable residents to age in place (HUD, HUD Strategic Plan Fiscal Year 2010-2015 (Washington, D.C.: May 2010)). For the HOPE VI program, indicators of progress towards self-sufficiency include the number of residents who obtained a high school or equivalent education, obtained new employment, and completed a job training program. In contrast, MTW agencies determine their own measures of residents’ progress towards self-sufficiency. In a 2012 report on the MTW program, we reported that HUD had not defined key program terms, including the statutory purpose of encouraging employment and self-sufficiency, and we recommended that HUD issue guidance that clarifies such terms. We also noted the limited usefulness of having MTW agencies devise their own metrics, particularly when they are not outcome-oriented. We recommended that HUD improve its guidance to MTW agencies by requiring that they provide performance information that is quantifiable and outcome-oriented. HUD has taken steps to revise its reporting guidance, and the Office of Management and Budget (OMB) approved revised guidance on May 31, 2013. HUD requires all PHAs to collect detailed data from their residents. PHAs must submit these data electronically through the Public and Indian Housing Information Center (PIC) system at least annually for each household that participates in assisted housing programs. 
According to HUD guidance, PIC can be used to create reports, analyze programs, monitor PHAs, and provide information about those that live in HUD-subsidized housing. The data collected include amounts and sources of income; the amount of rent paid; and the presence of household heads and members who are elderly or have disabilities. MTW agencies can submit some tenant-related data into a separate module in PIC called MTW-PIC, which was created in 2007 to better accommodate some of the activities allowed under MTW. Most MTW agencies had transitioned to it by 2008. PHAs that receive either of the FSS grants, including MTW agencies, must report additional data about participating households into a section of the PIC and MTW-PIC systems called the FSS Addendum. The addendum captures data on each family when it enters either of the two FSS programs and should be updated annually. Specifically, the addendum includes fields for PHAs to report whether the head of the household works full-time, part-time, or is unemployed; the highest grade of education for the head of the household; and the sources of assistance received by the family, such as cash or food assistance. The annual totals for grant awards in the two FSS and ROSS SC programs have increased in recent years. As shown in table 2, the amount HUD awarded to PHAs in constant 2013 dollars through the two FSS programs increased from about $64 million in fiscal year 2006 to about $78 million in fiscal year 2011, with the majority of funds being awarded through the HCV FSS program. For the ROSS SC program, the total amount awarded, in constant 2013 dollars, increased from $30 million in fiscal year 2008 to $35 million in fiscal year 2011. Escrow payments to households represent additional program expenditures associated with the two FSS programs. As previously described, PHAs establish escrow accounts for households that participate in FSS. 
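The grant totals above are expressed in constant 2013 dollars, which is a simple price-index rescaling of the nominal award amounts. A minimal sketch; the index values and the nominal amount below are hypothetical placeholders, not the deflator or figures GAO actually used:

```python
def to_constant_dollars(nominal, index_in_year, index_in_base_year):
    """Rescale a nominal amount to base-year (here, 2013) dollars using
    the ratio of price-index values. Index values are placeholders."""
    return nominal * (index_in_base_year / index_in_year)

# If a hypothetical price index stood at 90 in FY2006 and 100 in 2013,
# a $57.6 million nominal FY2006 award would be about $64 million
# in constant 2013 dollars.
adjusted = to_constant_dollars(57.6e6, 90, 100)
```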
PHAs generally disburse the amount in the escrow account (less any amount owed to the PHA by the family) to the head of the family when the contract of participation has been completed. As previously noted, in order to successfully complete an FSS contract, the head of the family must be employed and no members may be receiving welfare. According to HUD data adjusted to 2013 dollars, the agency disbursed nearly $82 million to FSS participants between fiscal years 2006 and 2011 (see table 3). Expenditures for community and supportive services, adjusted to 2013 dollars, represented 7 percent or less of HOPE VI grant awards during fiscal years 2006-2010. HUD data indicate that the 29 PHAs that were awarded HOPE VI revitalization grants between fiscal years 2006 and 2010 had spent about 5 percent of these funds on community and supportive services as of December 2012 (see table 4). Of the 29 grants, 4 were closed and 25 were open. HOPE VI agencies with open grants could spend more of their revitalization grant on supportive services, up to 15 percent. HOPE VI agencies also can leverage other sources of funding for community and supportive services. The amounts that MTW agencies spend on activities intended to increase resident self-sufficiency are not known for the program as a whole. MTW agencies are not required to expend a specific proportion of their HUD funds on activities that encourage work and self-sufficiency. However, they must implement activities that address the program’s statutory purposes (which include encouraging work and self-sufficiency), and annually submit written reports to HUD with information on each activity they undertake and its linkage to the program’s statutory purposes. MTW agencies had been required to annually report to HUD financial data on the sources and uses of their funds. 
A HUD official with responsibility for administering the MTW program stated that the agency had not analyzed this information and that MTW agencies had reported it using varying formats. According to HUD, with the implementation of revised MTW reporting requirements, MTW agencies will be required to report data on the costs of self-sufficiency activities using standardized metrics. Residents’ participation in the five self-sufficiency programs was not comprehensively known because the data were not reliable, aggregated programwide, or collected for all participants. Internal control standards state that transactions should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. This applies to the entire life cycle of a transaction or event. They also state that program managers need operational data to determine whether they are meeting their agencies’ strategic plans and meeting their goals for the effective and efficient use of resources. The total number of families in either of the two FSS programs cannot be reliably assessed based on available PIC data for fiscal years 2006-2011 because of missing program start dates, exit dates, and annual updates. According to HUD guidance, PHAs that receive FSS grants must update information about each participating family at least annually. As a part of these updates, PHAs are supposed to indicate whether the family is active or has exited the program. For those who exited the program, PHAs are supposed to indicate whether they completed the program or stopped participating for other reasons. These data must be reported in the FSS Addendum of PIC or MTW-PIC. HUD’s notices of funding availability (NOFA) consistently have cited PIC as a data source for FSS program participation counts, which HUD uses in part to determine eligibility for funding. 
For some years, the notices also stated that applicants for grant funding could use other documentation for participation counts, such as a separate HUD form or MTW documents. (We excluded MTW households from our analysis of FSS participation because available data were not reliable prior to 2011.) For 11 percent of the families that began the FSS programs in 2006, HUD data do not indicate the families had exited the programs, although other HUD data indicated that the families no longer received housing assistance. Similarly, HUD data also indicate that 11,596 families exited the two programs in fiscal year 2011 (some completing the program and others leaving before completion), but did not indicate a start date for about one quarter of these families. Because both FSS programs are intended to be 5-year programs, it is reasonable to expect that most of the families that began participating in fiscal year 2006 would have exited or completed the program by fiscal year 2011. HUD began posting FSS participation data from the PIC system online beginning with the fiscal year 2009 funding announcement for HCV FSS and the fiscal year 2012 funding announcement for PH FSS. According to these data, the number of families participating in HCV FSS has increased in recent years (see table 5). HUD officials told us that posting participation data online has emphasized to PHAs the importance of ensuring the data they submit into PIC are accurate. If a PHA believed the participation number HUD posted was incorrect and that it would be underfunded or ineligible, that PHA could submit documentation to confirm a higher number. However, as a part of the NOFA process, neither HUD nor the PHA was required to make corrections to the PIC system. Officials from one of the PHAs with whom we met stated that the enrollment data in PIC generally are inconsistent with their internal records. 
The staff stated that they have worked with HUD to correct PIC, but noted that once some errors were fixed, new errors emerged. Staff from another PHA, which is also an MTW agency, stated that their FSS enrollment data do not appear in PIC; thus, they must create ad hoc reports using their own internal systems. According to HUD officials with responsibility for implementing the FSS programs, the FSS records could be incomplete or incorrect for several reasons. For example, if a participating family left public housing or the voucher program, the responsible PHA might not update the FSS portion of PIC to reflect the departure. PHAs are supposed to enter the exit date and to indicate either that the family completed its contract of participation or one of five primary reasons for leaving the program without completing it. According to these officials, no HUD staff have specific responsibility for monitoring the completeness of FSS participants’ records in PIC. HUD officials also stated that there are challenges with the PIC system. They were aware of cases in which PHAs have entered data into PIC, but either the entries were not saved or they overwrote previously entered data. PHA officials with whom we met, as well as HUD staff with responsibility for the PIC system, stated that PHAs cannot readily run reports that show FSS participation data, a feature that would enable them to identify and correct missing or incorrect data. While PHAs can run ad hoc reports from PIC, the functionality of the system is limited, and records older than 18 months cannot be accessed. We previously reported on the weaknesses associated with HUD’s antiquated technologies. Without complete information on families’ participation in either FSS program (including a reliable count of program participants, participants’ length of time in the program, and reasons participants do not complete the programs), HUD lacks accurate information to make grant funding decisions. 
Moreover, by not analyzing the extent to which PHAs have reported required data into PIC (and MTW-PIC), HUD’s ability to effectively oversee the program is limited. Participation data for the ROSS SC program from fiscal years 2008 (the year the current version of this program started) through 2011 also were limited, primarily because of the lack of reporting guidance for the program and the difficulty of aggregating the data. HUD does not require PHAs to report whether a resident received services through this program in its information systems; rather, HUD collects ROSS SC participation data from PHAs through individual Excel-based reporting tools. According to HUD officials with responsibility for the PIC and MTW-PIC systems, these systems were intended to collect resident characteristics and not to enable PHAs to report data on residents’ participation in specific programs, such as ROSS SC. PHAs must use the tool to apply for grant funding, inserting projections for the number of residents or households they intend to serve. Grant recipients then must submit annual updates, reporting on the number of residents or households actually assisted. Internal control standards state that transactions should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. They also state that program managers need operational data to determine whether they have been meeting their agencies’ strategic plans and meeting their goals for the effective and efficient use of resources. As we previously reported, according to OMB, being able to track and measure specific program data can help agencies diagnose problems, identify drivers of future performance, evaluate risk, support collaboration, and inform follow-up actions. Analyses of patterns and anomalies in program information also can help agencies discover ways to achieve more value for the taxpayers’ money. 
According to HUD officials, the data that individual PHAs report using the Excel-based reporting tool cannot be easily or reliably aggregated to provide a count of resident participation across all PHAs. HUD officials who manage the ROSS SC program did not attempt to aggregate data reported through individual reporting tools until 2013, in response to our review. While HUD provided us with annual program participation data based on the tools, we determined that the data were not sufficiently reliable for these purposes. Specifically, we found duplicate records; differences in the time periods for which PHAs were reporting data; and different results for the “number of persons receiving services,” the “number of persons served,” and the sum of counts of persons served by age category. HUD officials told us that there are no line-level instructions for reporting these data. Thus, recipients of ROSS SC grants have not been given any formal guidance on what should be reported into these fields. Because of these limitations, HUD does not use the reporting tools to prepare official counts of residents assisted through the ROSS SC program. Rather, HUD multiplies the number of coordinators funded each year by 50 (the minimum number of residents who must be assisted under the terms of the grant). Based on this formula, HUD can report on the minimum number of residents who likely were assisted. Using this approach, HUD estimated that from 6,450 to 7,800 residents received services through the ROSS SC program annually from fiscal years 2008 through 2011. While this estimate provides information on the minimum number of residents who likely received assistance, it is not based on a count of the number of residents who actually received assistance. HUD officials who manage the ROSS SC program stated that the current process for collecting data from the reporting tools was supposed to be temporary, and a planned web-based replacement was never developed. 
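HUD's estimate described above is a lower bound computed by simple multiplication. A minimal sketch; the coordinator counts shown are back-calculated from HUD's reported range and are assumptions for illustration, not published figures:

```python
MIN_RESIDENTS_PER_COORDINATOR = 50  # minimum each coordinator must assist under the grant

def minimum_residents_served(coordinators_funded):
    """HUD's floor estimate: funded coordinators times the 50-resident
    minimum. This yields the minimum likely served, not an actual count."""
    return coordinators_funded * MIN_RESIDENTS_PER_COORDINATOR

# HUD's reported annual range of 6,450 to 7,800 residents implies roughly
# 129 to 156 funded coordinators per year (back-calculated).
low = minimum_residents_served(129)
high = minimum_residents_served(156)
```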
Additionally, the officials stated that the data PHAs report into the tool were not always comparable because PHAs have varying interpretations of what they should report, and HUD never developed program-specific reporting guidance. Without developing a reliable process for collecting and analyzing data on the number of residents assisted through the ROSS SC program, HUD lacks basic information needed to manage the program. According to HUD participation data for HOPE VI (available on a cumulative basis), at least 73,000 working-age, nondisabled residents have participated in the community and supportive services component of the HOPE VI program since the program began in 1993. For the 280 revitalization grants awarded through the program since its inception, HOPE VI agencies self-reported that about 55,000 original residents (those who lived at the site before revitalization) participated in a program or service designed to help them progress towards self-sufficiency. In addition, HOPE VI agencies self-reported that nearly 18,000 additional individuals who became residents at the revitalized HOPE VI sites also participated in community and supportive services. Programwide data on residents’ participation in MTW activities related to increasing self-sufficiency from fiscal years 2006 through 2011 generally were unavailable. While MTW-PIC was created in 2007 to better fit the needs of MTW agencies, this system was not designed to collect activity-level data (unless a household was participating in one of the two FSS programs). Moreover, HUD officials do not consider the data in MTW-PIC to be reliable prior to 2011 because some MTW agencies were still transitioning to it through the end of 2010. As a result, officials with responsibility for administering the MTW program have not used MTW-PIC as a tool for analyzing residents’ participation in activities related to increasing self-sufficiency. 
In addition, HUD does not analyze the data that MTW agencies provide in their annual MTW reports, including data on residents’ participation in activities related to self-sufficiency, because reporting requirements do not call for the reporting of standardized data, such as the number of residents who found employment. We previously recommended that HUD develop and implement a plan for quantitatively assessing the effectiveness of similar activities, which would include activities related to encouraging self-sufficiency. HUD agreed that quantitatively assessing the effectiveness of similar activities was an important step. HUD has made revisions to its reporting requirements, which were approved by OMB in May 2013. According to our analysis of available HUD data, 338,900 households received rental housing assistance from an MTW agency in fiscal year 2011. Because HUD officials do not believe their PIC or MTW-PIC systems contain reliable data from MTW agencies prior to 2011, we did not attempt to analyze these data. Of the MTW households, 14,314 were reported as participating in the two FSS programs in fiscal year 2011. According to HUD officials, most MTW agencies participate in FSS, but the quality of their FSS data reporting is unknown. HUD requirements for collecting data on indicators of self-sufficiency vary by program. HUD requires PHAs to collect and report into PIC or MTW-PIC certain types of detailed information on every resident of HUD-assisted rental housing. For instance, PHAs collect data on the amount and sources of residents’ income, including whether it is earned or provided through disability payments, Temporary Assistance for Needy Families, or other sources. PHAs are supposed to collect and report this type of information into HUD’s PIC system at least annually. For residents affiliated with the MTW program, this information is reported into MTW-PIC, a separate module in the PIC system. 
In addition to reporting basic demographic and income data on each resident, HUD requires PHAs that implement the five programs to report additional indicators of residents’ progress towards self-sufficiency using information systems, an Excel-based reporting tool, and narrative reports.

FSS programs and ROSS SC. As discussed earlier in this report, HUD requires PHAs that receive FSS grants to enter additional data in the FSS Addendum section of PIC on each family upon entry into an FSS program and update this information annually. For example, PHAs enter whether the head of the household works full-time, part-time, or is unemployed, and the highest grade of education for the head of the household. For the two FSS programs and the ROSS SC program, HUD also requires PHAs to annually enter summary output and outcome data related to their residents’ progress towards self-sufficiency into the previously described Excel-based reporting tool. Data fields in the tool include the number of households that increased their income or moved to nonsubsidized housing and the number of residents who obtained a high school diploma.

HOPE VI. HUD requires participating agencies to submit quarterly reports of summary data on residents’ progress towards self-sufficiency into the HOPE VI Quarterly Reporting System. For example, HOPE VI agencies must report the total number of residents who participated in activities that facilitate self-sufficiency, including the numbers enrolled in counseling programs, job training, and General Education Development classes.

MTW. HUD requires MTW agencies to submit annual reports containing summary information about the impact of the activities they have been implementing that are intended to encourage resident self-sufficiency. The agencies were able to use metrics of their choosing at the time of our review. 
HUD has performed only limited analyses of the data related to self-sufficiency outcomes that FSS grant recipients must report into its information systems, and so has not assessed outcomes for the programs as a whole. In addition, HUD has not analyzed similar data that FSS, ROSS SC, and MTW agencies must report through other mechanisms to assess each program as a whole. Standards for internal control emphasize the need for federal agencies both to collect reliable information with which to manage their programs and to review the integrity of performance measures. Moreover, these standards emphasize the need for program managers to collect operational data to determine whether they have been meeting their goals for the efficient and effective use of resources. Additionally, the GPRA Modernization Act of 2010 (GPRAMA) emphasizes the need for information on the effectiveness of federal programs to improve congressional decision making. Based on the information that PHAs must report in PIC and the FSS Addendum in PIC, HUD has performed limited analyses of the data related to self-sufficiency outcomes for the two FSS programs. As previously discussed, FSS participation data were sometimes incomplete and therefore of questionable reliability. Specifically, HUD’s data lacked start dates and annual updates for some FSS participants. Despite these limitations, HUD has used these data in its Congressional Budget Justifications. For example, in the 2014 Congressional Budget Justification HUD reported that as of March 30, 2012, a total of 57,087 families were enrolled in the two FSS programs. Without annually updated records for each family, HUD cannot reliably determine whether families were still active program participants. 
Additionally, HUD has not analyzed and reported on the experiences of all families that start the FSS programs, including the extent to which they completed the program, the primary reasons they exited the program without completing it, and the extent to which required data are missing. According to HUD, it has not conducted such analysis because of its own concerns about the usefulness of available data. As described previously, internal control standards state that such information can help agencies determine whether they have been meeting operational goals and using resources effectively. According to HUD officials with responsibility for administering the two FSS programs, they were aware that some PHAs did a better job of reporting into PIC than others, which affected the completeness and reliability of the data. We acknowledge that analyzing this information and summarizing overall changes in indicators of self-sufficiency among participants would not yield definitive results on the impact of FSS on resident self-sufficiency. But by not analyzing the data that FSS grant recipients must report in HUD’s information systems more thoroughly, HUD has been missing an opportunity to gain valuable information about the results of FSS programs for certain agencies. That is, for those agencies that have submitted complete data (all annual updates as well as information on whether and how the participant exited the program), HUD could review available information, such as changes in income and employment. Also, HUD has been missing an opportunity to identify PHAs with notably effective or ineffective FSS programs (or data reporting) and learn from their experiences. Ultimately, HUD’s lack of complete data on FSS participants limits the usefulness of analysis reported to Congress. 
In part because of these data weaknesses, HUD also has not assessed, for each program as a whole, the data related to self-sufficiency outcomes that it requires FSS and ROSS SC grant recipients to report through the Excel-based reporting tool. According to HUD officials, HUD field office staff review the outcomes data that individual PHAs report, but HUD had no process in place for assessing the outcomes reported through this tool programwide. Standards for internal control emphasize the need for federal agencies to collect reliable information with which to manage their programs and to review the integrity of performance measures. HUD officials with responsibility for these programs told us that headquarters staff did not use the reporting tools to assess the effectiveness of the FSS or ROSS SC programs because a system had not been developed to do such an assessment and because the data submitted were sometimes incomplete, not comparable, and unreliable. HUD officials also stated that PHAs likely vary in their interpretations of what to report because HUD never developed program-specific guidance. In 2013, in response to our review, HUD had its contractor (which collects the data PHAs report using the tool) aggregate the outcomes data reported for the PH FSS and ROSS SC programs for fiscal years 2008 through 2011. However, HUD staff found that the results did not appear to be accurate or reliable. While HUD’s effort to aggregate the performance data that it requires grant recipients to report was a step in the right direction, the agency lacks a strategy for better ensuring that the outcomes data it collects are reliable and permit comparison across PHAs. Without a plan for helping to ensure that the outcome data FSS and ROSS SC grant recipients report are comparable and reliable, HUD will be unable to fully use the data it requires PHAs to report. 
HUD has not assessed the effectiveness of the MTW program using the information that it requires MTW agencies to submit in their annual performance reports on the impact of their MTW activities, including activities related to increasing resident self-sufficiency. MTW agencies generally have devised their own metrics for activities and reporting performance information, so the usefulness of this information for assessing programwide results is limited. That is, because the data are not consistent across agencies, they cannot be used to assess the performance of similar activities across MTW agencies. Additionally, in some cases the information is not outcome-oriented and thus cannot be effectively used to assess performance. We previously recommended that HUD (1) improve its guidance to MTW agencies on providing information in their performance reports by requiring that such information be quantifiable and outcome-oriented to the extent possible; and (2) develop and implement a plan for quantitatively assessing the program as a whole, including the identification of standard performance data needed. HUD generally agreed with our recommendations, and in May 2013 OMB approved the revised guidance. Additionally, according to HUD, the agency continues to seek funding for a full evaluation that will better analyze the MTW information that is already collected and assess the effectiveness of the program, including self-sufficiency activities. We recommended in July 1998 that HUD develop consistent national, outcome-based measures for community and supportive services at HOPE VI sites. HUD has used its HOPE VI reporting system to collect data from grantees on the major types of community and supportive services they provide and outcomes achieved by some of these services. 
We have previously reported on these issues (see GAO, HOPE VI: Progress and Problems in Revitalizing Distressed Public Housing, GAO/RCED-98-187 (Washington, D.C.: July 20, 1998); and Public Housing: HOPE VI Resident Issues and Changes in Neighborhoods Surrounding Grant Sites, GAO-04-109 (Washington, D.C.: Nov. 21, 2003)). These data indicate that many residents enrolled in services related to education and job training, and that from one-third to half of those who enrolled completed the activity (see table 6). However, these outputs may not be directly attributable to the community and supportive services program and are self-reported by HOPE VI agencies. HUD officials with responsibility for collecting these data noted that it would be excessively time- and resource-intensive to verify the accuracy of these data. HUD staff also noted that if a HOPE VI agency’s community and supportive services data appeared to be inconsistent with past trends, HUD staff would follow up with the agency. Where HUD had data, the data suggest positive changes in income and employment for families that participated in the two FSS programs, but these results are not conclusive. Data on program completion were missing for nearly half of the records we evaluated. More specifically, the FSS Addendum data in PIC lacked exit, completion, or extension data on 6,819 (46 percent) of 14,690 families that started either of the programs in 2006. Of these families, the “family report” section of PIC indicated that 1,671 (25 percent) had left subsidized housing. Thus, after comparing both the FSS Addendum and the main “family report” data, we found that HUD’s systems lacked information on whether 5,148 (35 percent) of the families that started either of the FSS programs in 2006 exited the FSS programs or subsidized housing. Of the subset of families for which exit, completion, or extension data were available in the FSS Addendum of PIC (that is, 54 percent of program participants), about 60 percent exited the program without completing it. 
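The record-gap percentages above follow from straightforward arithmetic on the counts reported from HUD's systems; a quick check using the figures as reported:

```python
started_2006 = 14_690    # families that started an FSS program in FY2006
no_exit_data = 6_819     # FSS Addendum lacked exit, completion, or extension data
left_housing = 1_671     # of those, the "family report" showed a housing exit

unaccounted = no_exit_data - left_housing  # no record in either system

def pct(part, whole):
    """Whole-number percentage, as rounded in the text."""
    return round(100 * part / whole)

# 46% of starters lacked FSS exit data; 25% of those had left subsidized
# housing; 35% of all starters were unaccounted for in either system.
shares = (pct(no_exit_data, started_2006),
          pct(left_housing, no_exit_data),
          pct(unaccounted, started_2006))
```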
In contrast, 25 percent of participants completed the program in 5 years or less (see table 7). Considering only those families for which HUD had complete data, we observed positive changes in income and employment. We observed these changes for the families that started an FSS program in 2006 and completed it in 5 years or less, that is, for the 25 percent of families that HUD's data indicated had completed the programs in 5 years or less. But these findings do not take into account other factors that may have affected the families' progress towards self-sufficiency. Specifically, for the 1,937 families that started either of the two FSS programs in fiscal year 2006 and completed them within 5 years, HUD's data suggest positive changes in income and employment (see table 8). For example, median income increased from about $17,000 per year to about $25,000 per year. Families with total incomes of $35,000 or more when they started the program (ninetieth percentile) experienced income gains of 34 percent, compared with income gains of 106 percent for families that had total incomes of about $3,000 or less (tenth percentile). These families also experienced positive changes in employment. For example, full-time employment among these program graduates increased 76 percent. The proportion of these graduates who were working part-time decreased 30 percent, and the proportion not employed decreased 61 percent. HUD aims to improve self-sufficiency among residents of HUD-assisted rental housing by encouraging coordination between PHAs and agencies offering services that promote work and self-sufficiency. HUD's 2010-2015 strategic plan presents strategies for increasing resident self-sufficiency, which include coordination with federal, state, and local programs to increase access to job training, career services, and work support.
Consistent with this strategy, HUD formed a partnership with the Department of Labor (Labor) to improve residents' access to Labor programs and services. Residents of HUD-assisted rental housing may be eligible for a variety of Labor programs and services, including those funded through the Workforce Investment Act (WIA). These services are delivered locally through American Job Centers, also known as one-stop centers, which provide education and career training, job search tools, and assistance with developing resumes and interview skills. Since 2009, HUD officials have held meetings with Labor's Employment and Training Administration to share information about their respective programs and seek opportunities to collaborate. One product of these meetings has been a joint toolkit for PHAs and local workforce agencies, slated for release in 2013, that is intended to improve HUD-assisted residents' access to employment services offered through the workforce agencies' one-stop centers. The draft toolkit offers PHAs and workforce agencies a baseline understanding of each other's functions, examples and lessons learned from successful partnerships currently in place, information on available online resources, and sample Memorandums of Understanding (MOU), or partnership agreements. HUD guidance also encourages PHAs to coordinate with local entities such as social service agencies and job training providers to increase residents' self-sufficiency. For example, HUD issued a notice to all PHAs in 2011 promoting partnerships with such agencies. The notice describes the benefits of collaboration, provides examples of possible partnerships and strategies for partnership development, and includes model MOUs. The notice states that HUD encourages such partnerships and recognizes that they benefit both PHAs and the households they serve. In addition, HUD's guidance to PHAs implementing the self-sufficiency programs we reviewed emphasizes the value of coordinating with local agencies.
For example, HUD developed a resource website for HOPE VI grantees that provides information on interagency coordination and examples of PHA practices. Also, HUD’s training materials for ROSS SC grantees include resources that PHAs can use to form partnerships, such as a sample partner outreach letter. While this guidance is directed to HOPE VI and ROSS SC grantees, it is publicly available on HUD’s website and can be accessed by all PHAs. HUD’s program requirements for four of the self-sufficiency programs we reviewed call for PHAs to coordinate with local service providers (see table 9). For example, grant funds for the two FSS programs and the ROSS SC program cannot be used to fund direct services. Instead, they must be used to hire coordinators who refer residents to local service providers, making coordination a key component of these programs’ designs. Furthermore, applicants for ROSS SC and HOPE VI grants must demonstrate financial or in-kind support from partner organizations or agencies. Additionally, supportive services funded through HOPE VI grants must be coordinated with other service providers, including state and local programs. Finally, all four programs require participating PHAs to form coordinating committees to help ensure that residents are linked to the services that they need. For example, PHAs that receive either of the two FSS grants must establish a Program Coordinating Committee, which is charged with securing commitments of public or private resources for the operation of self-sufficiency programs. PHAs are encouraged to include local service providers, including welfare and workforce agencies, on the committees. Selected PHAs worked with local service providers to implement HUD’s self-sufficiency programs. 
We interviewed officials from a sample of five PHAs that had implemented one or more of HUD's self-sufficiency programs, as well as officials from workforce and welfare agencies in those locations. All of the PHAs we interviewed connected residents to local agencies that provided a broad range of services, including job training, mental health services, child care, transportation, food assistance, and homeownership counseling. Some of these local agencies received federal funding (for example, workforce agencies that administered WIA programs). PHA and local agency officials with whom we met stated that coordination efforts sometimes were formally established through MOUs, contracts, or regular meetings but added that most coordination efforts were informal. This informal coordination included referrals to each other's services or the sharing of information or updates on new programs and services. Officials from the five PHAs with whom we met identified few barriers to coordinating with local service providers and found current HUD guidance related to such coordination sufficient. Officials said that, despite resource constraints, they generally were able to obtain the services their residents needed from local agencies. PHA officials also stated that local agencies, including federally funded agencies, were receptive to their coordination efforts. However, while officials from two PHAs stated that they had strong relationships with their local workforce agencies, officials from the other three PHAs noted that these agencies' one-stop centers could be intimidating for residents and might not always be able to provide residents with appropriate services. Officials from workforce agencies we interviewed generally agreed, noting that residents might not have the education or work experience needed for some of the training opportunities that the centers offered.
For example, a workforce agency official told us that individuals interested in job training in certain fields, such as trucking or nursing, must take tests to demonstrate required levels of academic readiness to participate. He said that many assisted housing residents were not prepared to pass these tests and might first need remedial adult education. HUD officials with responsibility for administering the self-sufficiency programs we reviewed and Labor officials with responsibilities related to workforce agencies told us that the joint HUD-Labor toolkit described above was developed in part to improve resident access to services offered through the one-stop centers. In general, awareness of federal efforts to improve coordination was mixed among the PHAs, workforce, and welfare agencies with which we met. For example, officials from two of the PHAs were familiar with the HUD notice on promoting partnerships, but officials from the other three PHAs were not. Additionally, officials from two of the PHAs and all five workforce agencies were not aware of a 2009 joint HUD-Labor letter on collaboration. Still, officials from these agencies generally did not express a need for further federal guidance or assistance in facilitating relationships between PHAs and local agencies. HUD’s housing assistance programs serve millions of low-income residents. Statutory and programmatic requirements for HUD and PHAs also direct the organizations to undertake activities that would help these households increase their economic self-sufficiency. In particular, HUD has five programs that, in whole or in part, fund activities intended to help families become self-sufficient. However, HUD faces two major impediments to effectively operating these programs and achieving their goals. First, it does not have reliable data on participation in self- sufficiency activities across PHAs. 
Second, even though it has complete data for some PHAs, it generally does not use these data to review the progress participants may have made in, for example, finding a job or completing more education. In relation to participation data for the programs we reviewed, HUD has missed opportunities to help ensure that FSS participation data are complete. For example, in the two FSS programs significant gaps exist in participant entry and completion dates or reasons for leaving. These gaps are detrimental for several reasons, including their effects on grant funding and congressional reporting. Specifically, HUD uses resident participation data as a factor in making grant funding decisions and has reported this information in Congressional Budget Justifications. Moreover, without such data HUD cannot identify PHAs that have low or high completion rates. Federal internal control standards state that transactions should be promptly recorded to maintain their relevance and value to management, and HUD's own reporting guidance also directs grant recipients to record program start dates, exit dates, reasons for exiting the program prior to completion, and completion dates. In recent years, HUD has recognized the importance of having reliable participation data, emphasizing it to grantees in funding notices. But HUD could further work with agencies to correct participation data during the grant award process, or analyze the data outside of this process, to help ensure that all required data are complete. By doing so, HUD would improve the accuracy of these data, improve its ability to assess FSS grantee activities and thus make better-informed decisions about funding them, and provide Congress a more complete view of program performance. Similarly, ROSS SC is designed to provide grant recipients with funds to hire staff to help residents progress towards economic independence and self-sufficiency.
Since 2008, HUD has required grant recipients to report participation data. However, HUD has not provided grant recipients with program-specific reporting guidance; thus, the reported data vary and cannot be easily or reliably used for assessing programwide participation. Federal internal control standards state that program managers need operational data to determine whether they are meeting their agencies’ strategic plans and meeting their goals for the effective and efficient use of resources. By developing program-specific reporting guidance, HUD could help ensure the collection of accurate participation data, and establish data sets that can be used to assess participation for the ROSS SC program as a whole. In relation to outcome data for the programs we reviewed, HUD has not optimized its use of the information it requires PHAs to collect. We acknowledge that determining the outcomes of self-sufficiency activities is difficult and requires rigorous analyses. It is also difficult to isolate the impact of such activities from other factors that may influence participant outcomes. But HUD could do more to put itself in a position to look across a program to review participant accomplishments. While HUD has collected some indicators for the FSS and ROSS SC programs (such as information on hours worked and receipt of welfare), it lacks a strategy for using the data it collects, whether through PIC or its Excel-based reporting tool. And, as with participation data, PHAs have not consistently reported such information, a condition exacerbated by the lack of program-specific reporting guidance. As stated above, internal control standards underline the importance not only of collecting but also using information to achieve programmatic goals—helping families increase self-sufficiency. Additionally, GPRAMA emphasizes the need for information on the effectiveness of federal programs to help improve congressional decision making. 
A strategy for using these data could inform overall management review, congressional oversight, and planning for these programs. For instance, using such data could help HUD identify from which PHAs to draw lessons to help improve HUD management of the grant programs as well as PHA management of self-sufficiency-related activities. To better inform Congress and improve what is known about residents' participation in key grant programs designed to facilitate resident self-sufficiency, and their progress towards self-sufficiency, the Secretary of the Department of Housing and Urban Development should develop and implement

a process to better ensure that data on FSS participants are complete; such a process should include steps for identifying missing data, identifying the reasons for missing data, and taking steps to help ensure data are complete;

a process to better ensure that PHAs awarded ROSS SC grants annually report required participation and outcome data that are comparable among grant recipients; this process should include the issuance of program-specific reporting guidance;

a strategy for regularly analyzing FSS participation and outcome data; such a strategy could include identification of PHAs from which lessons could be learned and PHAs that may need assistance improving completion rates or outcomes; and

a strategy for regularly analyzing ROSS SC participation and outcome data; such a strategy could include identification of PHAs from which lessons could be learned and PHAs that may need assistance improving participation rates or outcomes.

We provided a draft of this report to HUD, Labor, Treasury, and HHS. HUD provided written comments, which are reprinted in appendix II. Labor and HHS provided technical comments, which we incorporated as appropriate, and Treasury did not provide comments. HUD agreed with three of our recommendations, and pointed to actions it intends to take to implement them.
While agreeing with these recommendations, HUD noted some concerns about our recommendation that it develop and implement a strategy for regularly analyzing FSS participation and outcome data, and disagreed with our similar recommendation for the ROSS SC program. Regarding the FSS recommendation, HUD stated that the data captured in PIC for this program are not designed for rigorous statistical analysis. However, according to HUD guidance, PIC can be used to create reports, analyze programs, monitor PHAs, and provide information about residents in HUD-subsidized housing. In addition, if participating PHAs annually updated data, as required, in the FSS Addendum, PIC would include data on sources of assistance received by the family; whether the head of the household worked full-time, part-time, or was unemployed; and the highest grade of education for the head of the household. By implementing our first recommendation (that HUD develop and implement a process to better ensure that data on FSS participants are complete), the completeness and therefore reliability of outcome data should improve. Consequently, the usefulness of analyses of such data would also improve. HUD disagreed that it should develop a strategy for regularly analyzing ROSS SC participation and outcome data. In doing so, the agency noted the small size of the program, the difficulty of analyzing outcomes, and that the data it collects are administrative in nature and not intended to serve as the basis for analysis. However, HUD's training materials on the Excel-based tool that it uses to collect participation and outcomes data for the ROSS SC program state that the tool is intended to be used to manage, monitor, and evaluate program services.
These training materials and HUD staff with whom we met also indicated that the tool, when populated by participating PHAs, contains data on outputs and outcomes. And, as noted above, HUD agreed that its reporting guidance to ROSS SC grantees should be improved to help make data more meaningful. HUD’s disagreement with our recommendation to analyze the information collected does not accord with its requirement for PHAs to submit operational data or with its assessment that data quality ought to be improved. Consistent with internal controls for the federal government, which apply to programs of all sizes, regular analysis of ROSS SC operational data would help HUD determine whether it was meeting goals for the effective and efficient use of resources. It can be difficult to isolate and definitively assess program outcomes. But program data can help identify patterns and anomalies in program operations, which can help agencies discover ways to achieve more value for the taxpayer’s money. Consequently, we continue to recommend that HUD develop and implement a strategy for regularly analyzing ROSS SC participation and program data. In its technical comments, HUD also raised concerns about the accuracy of dollar amounts reported and the characterization of outputs related to the community and supportive services component of HOPE VI. First, HUD questioned the accuracy of the amounts awarded to PHAs through the FSS and ROSS SC programs and the amounts HOPE VI agencies spent on community and supportive services. Because trends in nominal spending may reflect changes in both price and quantity, we chose to present inflation-adjusted values by removing the general effects of inflation using a price index. Specifically, we adjusted these figures to fiscal year 2013 dollars using the fiscal year chain-weighted Gross Domestic Product price index. 
Second, for the HOPE VI program, HUD stated that without a costly and time-intensive experimental research design, it would not be possible to know whether the outputs residents experienced were directly attributable to the program. We acknowledged the difficulties of isolating program outcomes in this report and have not made any recommendations related to HOPE VI. Finally, during the course of our review, OMB approved revised reporting requirements for the MTW program, which are intended to establish standard metrics for activities related to self-sufficiency, among other things. We revised the report to recognize these changes. HUD provided additional technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Housing and Urban Development and other interested committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to (1) discuss what is known about the costs of and residents' participation in Department of Housing and Urban Development (HUD) grant programs that encourage work and self-sufficiency, (2) determine what is known about the effect on residents of HUD grant programs to promote self-sufficiency, and (3) describe steps HUD has taken to coordinate with other federal agencies and increase residents' access to non-HUD programs that encourage work and self-sufficiency.
To discuss what is known about the costs of and residents' participation in HUD programs that encourage self-sufficiency, we reviewed documentation of HUD's programs and determined whether and how they defined and addressed resident self-sufficiency. Because HUD did not have an agencywide definition of self-sufficiency and included 20 programs in its list of programs that contribute to its subgoal of increasing economic security and self-sufficiency, we established criteria to narrow the scope of programs to include in our review. First, we limited our review to grant programs that were intended to encourage work and self-sufficiency among residents of HUD-assisted rental housing. Second, we included grant programs that did not have self-sufficiency as their primary focus but rather as a secondary program goal. We sought to include programs that were actively awarding grants, or for which grants were still open. Based on these criteria, we identified the following programs:

Public Housing Family Self-Sufficiency (PH FSS)
Housing Choice Voucher Family Self-Sufficiency (HCV FSS)
Resident Opportunity and Self-Sufficiency Service Coordinators (ROSS SC)
Moving to Work (MTW)
HOPE VI

HUD officials agreed that we had identified the key grant programs that encourage work and self-sufficiency. To describe what is known about the programs' costs, we met with program staff and obtained documentation of the total grant amounts awarded for specific periods, as follows: For the Family Self-Sufficiency (FSS) programs, we obtained grant award amounts for fiscal years 2006 through 2011 and analyzed data from HUD's Public and Indian Housing Information Center (PIC) system on escrow account disbursements from fiscal years 2006 through 2011. We adjusted all dollar amounts to fiscal year 2013 dollars using the chain-weighted Gross Domestic Product price index.
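The inflation adjustment described above follows standard price-index rescaling: a nominal amount is multiplied by the ratio of the base-year index to the source-year index. The following is a minimal sketch of that calculation; the index values are placeholders for illustration only, not the actual published chain-weighted GDP price-index series.

```python
# Rescale nominal dollar amounts to fiscal year 2013 dollars using a
# price index. Index values below are illustrative placeholders, not
# the actual chain-weighted GDP price-index figures.
gdp_price_index = {2006: 90.0, 2011: 98.5, 2013: 100.0}

def to_fy2013_dollars(amount, fiscal_year, base_year=2013):
    """Multiply a nominal amount by the base-year/source-year index ratio."""
    return amount * gdp_price_index[base_year] / gdp_price_index[fiscal_year]

# A hypothetical $1 million grant awarded in fiscal year 2006,
# expressed in fiscal year 2013 dollars under these placeholder indexes
print(round(to_fy2013_dollars(1_000_000, 2006)))
```

Because the index is chained to 2013 = 100 in this sketch, an amount already stated in fiscal year 2013 dollars is unchanged by the conversion.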
For ROSS SC, we obtained grant award amounts for fiscal years 2008 through 2011, because 2008 was the first year of the current version of the program. For the MTW program, we met with program staff and staff from HUD’s Grants Management Office to determine whether HUD collected information on MTW agencies’ expenditures for activities related to self-sufficiency. For the HOPE VI program, we reviewed and summarized the fiscal year 2006-2010 Notice of Funding Availability (NOFA) requirements related to community and supportive services expenditures and the amount of HOPE VI revitalization grant funds that HOPE VI agencies expended on community and supportive services in that period. To analyze residents’ participation in each of the five programs, we met with HUD staff to determine how participation data were collected and summarized any readily available information. For the two FSS programs, we analyzed participation data from PIC for fiscal years 2006 through 2011. Specifically, we analyzed data that public housing agencies (PHA) entered in the FSS Addendum, a PIC subcomponent. As a part of this analysis, we identified the extent to which program start dates, end dates, and annual updates for participating households were entered. We interviewed HUD officials with responsibility for administering both FSS programs and the PIC system about the completeness of the data and HUD’s use of the information. We evaluated the information available on families’ participation in the two FSS programs in relation to internal control standards for the federal government as well as HUD’s own guidance. Based on interviews with HUD officials and our review of the completeness of data fields in the FSS Addendum component of PIC, we determined that the available data were not sufficiently reliable to provide an accurate count of residents’ participation in FSS for fiscal years 2006 through 2011. 
We summarized data that HUD posted on its website as a part of the NOFA process for fiscal years 2009 through 2012. For the ROSS SC program, we obtained and reviewed aggregated data on program participation that a HUD contractor created in 2013. We evaluated the information available on residents' participation in ROSS SC in relation to internal control standards for the federal government. We determined that the aggregated participation data were not sufficiently reliable to provide an accurate count of residents' participation in ROSS SC for fiscal years 2008 through 2011. For the HOPE VI program, we summarized data from the HOPE VI Quarterly Reporting System on the total number of nondisabled residents between the ages of 18 and 64 who had received services through the community and supportive services component of HOPE VI from the program's inception through the end of 2012. For the MTW program, we interviewed program administrators to determine whether and how they used information from participating agencies' Annual MTW Reports or MTW-PIC to determine the number of MTW households participating in MTW and activities related to self-sufficiency. We analyzed the FSS Addendums associated with MTW-PIC in an effort to determine the number of MTW families that had participated in either of the two FSS programs. We determined, based on interviews with HUD officials and our analysis of FSS Addendum data for MTW agencies, that these data were not sufficiently reliable prior to 2011. We summarized available data on MTW families' participation in the two FSS programs in 2011. To determine what is known about the effect on residents of HUD's grant programs that encourage work and self-sufficiency, we examined whether and how HUD collected information that could indicate progress toward self-sufficiency, such as information on income and employment, for each program.
We reviewed available data dictionaries and guidance HUD provided to PHAs required to report this information. We determined that for the five programs we reviewed, HUD used PIC, MTW-PIC, Excel-based reporting tools, written reports, and other reporting systems to collect this information. We interviewed HUD staff about their use of the self-sufficiency indicators they require PHAs to report, the reliability of the data collected, and program-specific guidance. We evaluated HUD's processes for aggregating self-sufficiency-related outcomes data in relation to our standards for internal control. For the two FSS programs, we analyzed PIC data on the 14,690 families that HUD's data indicated started the programs in 2006. Based on PIC data for fiscal years 2006 through 2012, we identified the number of families that completed the two FSS programs, exited the programs without completing them, and received an extension to continue the program past 2011, and for which HUD's system lacked either program exit, completion, or extension information. For the subset of families that HUD's data indicated had completed the two FSS programs in 5 years or less (1,937 households), we analyzed changes in median and mean income, income for those at the tenth and ninetieth percentiles, and employment experiences. We adjusted all dollar amounts to fiscal year 2013 dollars using the chain-weighted Gross Domestic Product price index. This sample represented about 25 percent of the families that started in 2006 and for which exit, completion, or extension data were available. In addition, this analysis does not control for other factors that may have affected participants' progress towards self-sufficiency. We excluded any households that were receiving rental assistance from an MTW agency because HUD officials indicated that their FSS participation data were not as reliable, particularly before 2011.
Through interviews and a literature search, we identified several studies of the two FSS programs, HOPE VI, and MTW. We reviewed these studies to identify information on the programs' impact on resident self-sufficiency. We determined that these reports were methodologically sound and reliable for our purposes. We did not identify any studies of the ROSS SC program. To describe the steps HUD has taken to coordinate with other federal agencies and increase residents' access to non-HUD programs, we reviewed HUD regulations, policies, and guidance related to coordination. We also interviewed officials from HUD, the Department of Health and Human Services, and the Department of Labor, and reviewed materials related to interagency coordination provided by these officials. To better understand how PHAs connect residents with non-HUD programs, we interviewed staff from PHAs administering each of the self-sufficiency programs included in our review. To select PHAs, we compiled lists of PHAs that received grant funding (or permission to participate in MTW) between 2004 and 2011. We then randomly selected a PHA to interview for each program and adjusted our initial sample to ensure that the five selected PHAs varied in terms of size and region. Four of the selected PHAs implement more than one of the programs in our review. The selected PHAs were:

Boulder County Housing Authority (Boulder, Colorado)
Jersey City Housing Authority (Jersey City, New Jersey)
Kingsport Housing and Redevelopment Authority (Kingsport, Tennessee)
Louisville Metro Housing Authority (Louisville, Kentucky)
Washington County Housing and Redevelopment Authority (Woodbury, Minnesota)

For each of these locations, we also interviewed officials from the local agencies that administered Temporary Assistance for Needy Families and the Workforce Investment Act programs. We conducted this performance audit from August 2012 through July 2013 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on the audit objectives. In addition to the contact named above, Paul Schmidt (Assistant Director), Stephen Brown, Emily Chalmers, Geoff King, John McGrail, Marc Molino, Lisa Moore, Michael Pahr, Barbara Roesmann, Beverly Ross, and Andrew Stavisky made key contributions to this report.
HUD reported in 2011 that nearly 8.5 million lower-income families paid more than half their monthly income for rent, lived in substandard housing, or both. Because more families need assistance than existing federal programs can serve, spaces could become available for other needy families if assisted families increased their incomes and no longer required housing assistance. HUD offers several competitive grants that PHAs can use to hire staff who link residents to services or implement programs that encourage self-sufficiency. GAO was asked to examine the effectiveness of HUD's efforts to promote self-sufficiency among residents. Among its objectives, this report describes (1) costs and resident participation in HUD grant programs for PHAs that encourage work and self-sufficiency and (2) available information on the programs' effects on residents. GAO reviewed HUD's goals for encouraging self-sufficiency, program descriptions, and regulations; analyzed grant award data for fiscal years 2006-2011 and available outcome information; and interviewed HUD and PHA officials. The Department of Housing and Urban Development (HUD) funds five key grant programs that encourage resident self-sufficiency. In fiscal year 2011, HUD awarded $113 million to the Housing Choice Voucher Family Self-Sufficiency (FSS), Public Housing FSS, and Resident Opportunity and Self-Sufficiency Service Coordinators (ROSS SC) programs. Public housing agencies (PHA) with HOPE VI grants or designated as Moving to Work (MTW) agencies spent a portion of their funds on activities that encourage self-sufficiency, but the amounts MTW agencies spent are not known for the program as a whole. Additionally, data on resident participation in the five programs were limited. The number of families that participated in the FSS programs and ROSS SC cannot be reliably assessed due to missing start dates, end dates, and annual updates, and a lack of reporting guidance.
HOPE VI data on residents' participation do not include information on the elderly or persons with disabilities. Programwide MTW data on participation generally were unavailable. Internal control standards for the federal government state that program managers need operational data to determine whether they are meeting goals for accountability (effective and efficient use of resources). Without complete participation data, HUD lacks key information to effectively manage and evaluate its programs, and Congress lacks data needed to oversee the programs. HUD lacks a strategy for using data it requires of PHAs to expand what is known about outcomes in four of the programs. HUD has performed limited analysis of the data related to self-sufficiency outcomes for both types of FSS grants reported into its information systems. HUD has not analyzed similar data reported for ROSS SC and MTW activities. However, for HOPE VI, HUD collects consistent, outcome-based measures for participation in self-sufficiency activities and uses the data to track residents' progress towards self-sufficiency. Internal control standards underline the importance not only of collecting but also of using information to achieve programmatic goals. Also, the GPRA Modernization Act of 2010 (GPRAMA) emphasizes the need for information on the effectiveness of federal programs to improve congressional decision making. A strategy for using these data could inform overall management review, congressional oversight, and planning for these programs. Using such data could help HUD identify from which PHAs to draw lessons to help improve HUD management of the programs as well as PHA management of self-sufficiency-related activities. 
GAO's analysis of available data on residents who participated in the FSS programs suggests positive changes for those who completed the programs, but the results are not conclusive because data indicating whether a family exited FSS or subsidized housing were missing for 35 percent of families that started an FSS program in 2006. For three of its self-sufficiency programs, HUD should develop processes and program-specific reporting guidance to better ensure required data on participation and outcomes are complete. HUD agreed with three recommendations but disagreed that it should analyze data for the ROSS SC program. GAO believes that analysis of program data is critical for assessing outcomes.
In April 2002, following a yearlong study, the Commercial Activities Panel reported its findings on competitive sourcing in the federal government. The report lays out 10 sourcing principles and several recommendations, which provide a road map for improving sourcing decisions across the federal government. Overall, the new Circular is generally consistent with these principles and recommendations. The Commercial Activities Panel held 11 meetings, including three public hearings in Washington, D.C.; Indianapolis, Indiana; and San Antonio, Texas. At these hearings, the Panel heard repeatedly about the importance of competition and its central role in fostering economy, efficiency, and continuous performance improvement. Panel members heard first-hand about the current process—primarily the cost comparison process conducted under OMB Circular A-76—as well as alternatives to that process. Panel staff conducted extensive additional research, review, and analysis to supplement and evaluate the public comments. Recognizing that its mission was complex and controversial, the Panel agreed that a supermajority of two-thirds of the Panel members would have to vote for any finding or recommendation in order for it to be adopted. Importantly, the Panel unanimously agreed upon a set of 10 principles it believed should guide all administrative and legislative actions in competitive sourcing. The Panel itself used these principles to assess the government’s existing sourcing system and to develop additional recommendations. A supermajority of the Panel agreed on a package of additional recommendations. Chief among these was a recommendation that public-private competitions be conducted using the framework of the Federal Acquisition Regulation (FAR). 
Although a minority of the Panel did not support the package of additional recommendations, some of these Panel members indicated that they supported one or more elements of the package, such as the recommendation to encourage high-performing organizations (HPO) throughout the government. Importantly, there was a good faith effort to maximize agreement and minimize differences between Panel members. In fact, changes were made to the Panel’s report and recommendations even when it was clear that some Panel members seeking changes were highly unlikely to vote for the supplemental package of recommendations. As a result, on the basis of Panel meetings and my personal discussions with Panel members at the end of our deliberative process, I believe the major differences between Panel members were few in number and philosophical in nature. Specifically, disagreement centered primarily on (1) the recommendation related to the role of cost in the new FAR-type process and (2) the number of times the Congress should be required to act on the new FAR-type process, including whether the Congress should authorize a pilot program to test that process for a specific time period. As I noted previously, the new Circular A-76 is generally consistent with the Commercial Activities Panel’s sourcing principles and recommendations and, as such, provides an improved foundation for competitive sourcing decisions in the federal government. In particular, the new Circular permits: greater reliance on procedures contained in the FAR, which should result in a more transparent, simpler, and consistently applied competitive process, and source selection decisions based on trade-offs between technical factors and cost. The new Circular also suggests the potential use of alternatives to the competitive sourcing process, such as public-private and public-public partnerships and high-performing organizations. It does not, however, specifically address how and when these alternatives might be used. 
If effectively implemented, the new Circular should result in increased savings, improved performance, and greater accountability, regardless of the service provider selected. However, this competitive sourcing initiative is a major change in the way government agencies operate, and successful implementation of the Circular’s provisions will require that adequate support be made available to federal agencies and employees, especially if the time frames called for in the new Circular are to be achieved. Implementing the new Circular A-76 will likely be challenging for many agencies. Our prior work on acquisition, human capital, and information technology management—in particular, our work on the Department of Defense’s (DOD) efforts to implement competitive sourcing—provides a strong knowledge base from which to anticipate challenges as agencies implement this initiative. Foremost among the challenges that agencies face is setting and meeting appropriate goals that are integrated with other priorities. Quotas and arbitrary goals are inappropriate. Sourcing goals and targets should contribute to mission requirements and improved performance and be based on considered research and sound analysis of past activities. Agencies will need to consider how competitive sourcing relates to the strategic management of human capital, improved financial performance, expanded reliance on electronic government, and budget and performance integration, consistent with the President’s Management Agenda. At the request of Senator Byrd and this subcommittee, we recently initiated work to look at how agencies are implementing their competitive sourcing programs. Our work is focused on goal setting and implementation strategies at several large agencies. DOD has been at the forefront of federal agencies in using the A-76 process and, since the mid-to-late 1990s, we have tracked DOD’s progress in implementing its A-76 program. 
The challenges we have identified hold important lessons that civilian agencies should consider as they implement their own competitive sourcing initiatives. Notably: competitions took longer than initially projected, costs and resources required for the competitions were underestimated, selecting and grouping functions to compete were problematic, and determining and maintaining reliable estimates of savings were difficult. DOD’s experience also indicates that agencies will have difficulties in meeting the time frames set out in the new Circular for completing the standard competition process. Those time frames are intended to respond to complaints from all sides about the length of time taken to conduct A-76 cost comparisons—complaints that the Panel repeatedly heard in the course of its review. The new Circular states that standard competitions shall not exceed 12 months from public announcement (start date) to performance decision (end date), with certain preliminary planning steps to be completed before a public announcement. Under certain conditions, there may be extensions of no more than 6 months. We welcome efforts to reduce the time required to complete these studies. Even so, our studies of DOD’s competitive sourcing have found that competitions can take much longer than the time frames outlined in the new Circular. Specifically, DOD’s most recent data indicate that competitions have taken, on average, 25 months. It is not clear, however, how much of this time was needed for any planning that may now be outside the revised Circular’s time frame. In commenting on OMB’s November 2002 draft proposal, we recommended that the time frame be extended to perhaps 15 to 18 months overall, and that OMB ensure that agencies provide sufficient resources to comply with Circular A-76. In any case, we believe that additional financial and technical support and incentives will be needed for agencies as they attempt to meet these ambitious time frames. 
Finally, federal agencies and OMB will be challenged to effectively share lessons learned and establish sufficient guidance to implement certain A-76 requirements. For example, calculating savings that accrue from A-76 competitions, as required by the new Circular, will be difficult or may be done inconsistently across agencies without additional guidance, which will contribute to uncertainties over savings. The prior version of Circular A-76 provided for a streamlined cost comparison process for activities with 65 or fewer full-time equivalent (FTE) employees. Although the revised Circular also provides for a streamlined process at comparable FTE levels, the revised streamlined process lacks a number of key features designed to ensure that agencies’ sourcing decisions are sound. First, the prior version of the Circular contained an express prohibition on dividing functions so as to come under the 65-FTE limit for using a streamlined process. The revised Circular contains no such prohibition. We are concerned that in the absence of an express prohibition, agencies could arbitrarily split activities, entities, or functions to circumvent the 65-FTE ceiling applicable to the streamlined process. Second, the 10 percent conversion differential under the prior Circular has been removed for streamlined cost comparisons. The Panel viewed this differential as a reasonable way to account for the disruption and risk entailed in converting between the public and private sectors. Third, the streamlined process requires an agency to certify that its performance decision is cost-effective. It is not clear from the revised Circular, however, whether the term “cost-effective” means the low-cost provider or whether other factors may be taken into account (such as the disruption and risk factors previously accounted for through the 10 percent conversion differential). 
Finally, the revised Circular has created an accountability gap by prohibiting all challenges to streamlined cost comparisons. Under the prior Circular, both the public and the private sectors had the right to file appeals to ad hoc agency appeal boards. That right extended to all cost comparisons, no matter how small or large (and to decisions to waive the A-76 cost comparison process). The new Circular abolishes the ad hoc appeal board process and instead relies on the FAR-based agency-level protest process for challenges to standard competitions, which are conducted under a FAR-based process. While we recognize that streamlined cost comparisons are intended to be inexpensive, expeditious processes for relatively small functions, we are nonetheless concerned that the absence of an appeal process may result in less transparency and accountability. Another accountability issue relates to the right of in-house competitors to challenge sourcing decisions in favor of the private sector—an issue that the Commercial Activities Panel addressed in its report. While both the public and the private sectors could file appeals to the ad hoc agency appeal boards under the prior Circular, only the private sector had the right, if dissatisfied with the ruling of the agency appeal board, to file a bid protest at GAO or in court. Under the previous version of the Circular, both GAO and the Court of Appeals for the Federal Circuit held that federal employees and their unions were not “interested parties” with the standing to challenge the results of A-76 cost comparisons. The Panel heard many complaints from federal employees and their representatives about this inequality in protest rights. The Panel recommended that, in the context of improving the federal government’s process for making sourcing decisions, a way be found to level the playing field by allowing in-house entities to file a protest at GAO, as private-sector competitors have been allowed to do. 
The Panel noted, though, that if a decision were made to permit the public-sector competitor to protest A-76 procurements, the question of who would have representational capacity to file such a protest would need to be carefully considered. An important legal question is whether the shift from the cost comparisons under the prior Circular to the FAR-like standard competitions under the new one means that the in-house most-efficient organization (MEO) should now be found eligible to file a bid protest at GAO. If the MEO is allowed to protest, there is a second question: Who will speak for the MEO and protest in its name? To ensure that our legal analysis of these questions benefits from input from everyone with a stake in this important area, GAO posted a notice in the Federal Register on June 13, 2003, seeking public comment on these and several related questions. Responses were due July 16, and we are currently reviewing the more than 50 responses that we received from private individuals, Members of Congress, federal agencies, unions, and other organizations. We intend to reach a conclusion on these important legal questions in the coming weeks. For many agencies, effective implementation of the new Circular will depend on their ability to understand that their workforce is their most important organizational asset. Recognizing this, the Panel adopted a principle stipulating that sourcing and related policies be consistent with human capital practices that are designed to attract, motivate, retain, and reward a high-performing workforce. Conducting competitions as fairly, effectively, and efficiently as possible requires sufficient agency capacity—that is, a skilled workforce and adequate infrastructure and funding. Agencies will need to build and maintain capacity to manage competitions, to prepare the in-house MEO, and to oversee the work—regardless of whether the private sector or MEO is selected. 
Building this capacity is important, particularly for agencies that have not been heavily invested in competitive sourcing previously. Agencies must manage this effort while addressing high-risk areas, such as human capital and contract management. In this regard, GAO has listed contract management at DOD, the National Aeronautics and Space Administration, the Department of Housing and Urban Development, and the Department of Energy as an area of high risk. With a likely increase in the number of public-private competitions and the requirement to hold accountable whichever sector wins, agencies will need to ensure that they have an acquisition workforce sufficient in numbers and abilities to manage the cost, quality, and performance of the service provider. In our prior work—notably in studying the lessons that state and local governments learned in conducting competitions and in private-sector outsourcing of information technology services—we found that certain strategies and practices can help ensure the success of workforce transitions when deciding who should provide the services they perform. In general, these strategies recognized that the workforce defines an organization’s character, affects its capacity to perform, and represents its knowledge base. When an agency’s leadership is committed to effective human capital management, they view people as assets whose value can be enhanced through investments. 
Agencies can aid their workforce in transitioning to a competitive sourcing environment if they: ensure employee involvement in the transition process, for example, by clearly communicating to employees what is going to happen and when; provide skills training for competing against the private sector; create a safety net for displaced employees, such as early retirement, severance pay, or a buyout, to bolster their support for the changes and to aid in the transition to a competitive environment; facilitate the transition of staff to the private sector or reimbursable provider when that is their choice, and assist employees who do not want to transfer to find other federal jobs; and develop employee retention programs and offer bonuses to keep people where appropriate. Recognizing the workforce as an asset also requires agency officials to view competitive sourcing—whether it results in outsourcing, insourcing, or cosourcing—as a tool to help ensure we have the right people providing services in an effective and efficient manner. The Panel recommended that employees should receive technical and financial assistance, as appropriate, to structure the MEO, to conduct cost comparisons, and to create HPOs. However, it is unclear whether agencies will have adequate financial and technical resources to implement effective competitive sourcing programs or make needed improvements. The administration has proposed the creation of a governmentwide fund for performance-based compensation. However, most federal agencies lack modern, effective, credible, and validated performance management systems to effectively implement performance-based compensation approaches. 
Importantly, a clear need exists to provide assistance both to government employees to create MEOs that can compete effectively and to agencies to promote HPOs throughout the federal government, especially in connection with functions, activities, and entities that will never be subject to competitive sourcing. Assistance is also needed in helping to create the systems and structures needed to support the effective and equitable implementation of more performance-based compensation approaches. As a result, we believe consideration should be given to establishing a governmentwide fund that would be available to agencies, on the basis of a business case, to provide technical and financial assistance to federal employees to develop MEOs and for creating HPOs, including the creation of modern, effective, and credible performance management systems. While the new Circular provides an improved foundation for competitive sourcing decisions, implementing this initiative will undoubtedly be a significant challenge for many federal agencies. The success of the competitive sourcing program will ultimately be measured by the results achieved in terms of providing value to the taxpayer, not the size of the in-house or contractor workforce or the number of positions competed to meet arbitrary quotas. Successful implementation will require adequate technical and financial resources, as well as sustained commitment by senior leadership to establish fact-based goals, make effective decisions, achieve continuous improvement based on lessons learned, and provide ongoing communication to ensure that federal workers know and believe that they will be viewed and treated as valuable assets. Mr. Chairman, this concludes my statement. I will be happy to answer any questions you or other Members of the subcommittee may have. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In May 2003, the Office of Management and Budget (OMB) released a revised Circular A-76, which represents a comprehensive set of changes to the rules governing competitive sourcing--one of five governmentwide items in the President's Management Agenda. Determining whether to obtain services in-house or through commercial contracts is an important economic and strategic decision for agencies, and the use of Circular A-76 is expected to grow throughout the federal government. In the past, however, the A-76 process has been difficult to implement, and the impact on the morale of the federal workforce has been profound. Concerns in the public and private sectors were also raised about the timeliness and fairness of the process for public-private competitions. It was against this backdrop that the Congress enacted legislation mandating a study of the A-76 process, which was carried out by the Commercial Activities Panel, chaired by the Comptroller General of the United States. This testimony focuses on how the new Circular addresses the Panel's recommendations reported in April 2002, the challenges agencies may face in implementing the new Circular A-76, and the need for effective workforce practices to help ensure the successful implementation of competitive sourcing in the federal government. The revised Circular A-76 is generally consistent with the Commercial Activities Panel's principles and recommendations, and should provide an improved foundation for competitive sourcing decisions in the federal government. In particular, the new Circular permits greater reliance on procedures in the Federal Acquisition Regulation--which should result in a more transparent and consistently applied competitive process--as well as source selection decisions based on trade-offs between technical factors and cost. The new Circular also suggests the potential use of alternatives to the competitive sourcing process, such as public-private and public-public partnerships. 
However, implementing the new Circular will likely be challenging for many agencies. Foremost among the challenges that agencies face is setting and meeting appropriate goals integrated with other priorities, as opposed to arbitrary quotas. Additionally, there are potential issues with the streamlined cost comparison process and protest rights. The revised streamlined process lacks a number of key features designed to ensure that agency sourcing decisions are sound, including the absence of an appeal process. Finally, the right of in-house competitors to file a bid protest at GAO challenging the sourcing decisions in favor of the private sector remains an open question. For many agencies, effective implementation will depend on their ability to understand that their workforce is their most important organizational asset. Agencies will need to aid their workforce in transitioning to a competitive sourcing environment. For example, agencies will need a skilled workforce and adequate infrastructure and funding to manage competitions; to prepare the in-house offer; and to oversee the cost, quality, and performance of whichever service provider is selected.
Enlisted servicemembers can be separated from the military when they are found to be unsuitable for continued military service. If a servicemember is diagnosed with a non-disability mental condition that interferes with the servicemember’s ability to function in the military, a commanding officer may initiate the separation process. Separated servicemembers may appeal their separation to a discharge review board within 15 years after separation from the military. Further, separated servicemembers may appeal the discharge review board’s decision by applying to a board for the correction of military records. Once enlisted servicemembers have been separated from military service, they receive a certificate of release from the military—Department of Defense Form 214 (DD Form 214)—that includes dates of service, last duty assignment, pay grade and rank, awards received, and a characterization of their service—such as honorable or general under honorable conditions. Copies of the form are provided to various entities, such as the applicable military service, VA, and the servicemember. Servicemembers receive two copies of the DD Form 214—a copy that includes the characterization of their service and the reason for the separation and one that does not. The reason for the separation is noted by a separation code, as well as a narrative that explains the reason for the separation. According to DOD policy, separation codes are to be used by the military services so that data on the cause of separations can be collected and trends in separations analyzed, which may, in turn, influence changes in DOD separation policy. DOD established six separation codes that the military services may use for non-disability mental conditions on the DD Form 214, but the military services have discretion as to which codes they choose to use. Five of these separation codes pertain only to non-disability mental conditions. 
They are (1) acute adjustment disorder, (2) disruptive behavior disorder, (3) impulse control disorder, (4) personality disorder, and (5) other non-disability mental disorder. The sixth code available for use by the military services, “condition, not a disability,” is a broader separation code that includes both non-disability physical and mental conditions. DOD’s separation policy, dated January 27, 2014, contains eight separation requirements that the military services must follow when separating enlisted servicemembers for non-disability mental conditions. Of the eight separation requirements, five apply to all enlisted servicemembers and three more apply only to enlisted servicemembers who served in an imminent danger pay (IDP) area, such as Iraq or Afghanistan.
1. The servicemember must be notified in writing that a non-disability mental condition is the basis of the proposed separation,
2. the servicemember must be formally counseled concerning deficiencies and afforded an opportunity to overcome those deficiencies,
3. evidence must demonstrate that the servicemember is unable to function effectively because of a non-disability mental condition,
4. the servicemember must receive a non-disability mental condition diagnosis by an authorized mental health provider, and
5. the servicemember must be counseled in writing that the diagnosis of a non-disability mental condition does not qualify as a disability.
For servicemembers who served in an IDP area, the non-disability mental condition diagnosis must
6. be corroborated by a peer- or higher-level mental health professional,
7. be endorsed by the military service’s surgeon general, and
8. include an assessment to determine whether the servicemember has symptoms of PTSD or other mental illness co-morbidity.
Over time, DOD has expanded its separation requirements. 
In 2011 and 2014, DOD revised its policy by extending its separation requirements to apply to servicemembers being separated for any non-disability mental condition. See table 1 for an explanation of the policy changes since 2008. DOD and three of the four military services—Army, Navy, and Marine Corps—cannot identify the number of enlisted servicemembers separated for non-disability mental conditions because, for most separations, they do not use available codes to specifically designate the reason why servicemembers were separated. Instead, they use the broad separation code, “condition, not a disability,” that mixes non-disability mental conditions with non-disability physical conditions, making it difficult to distinguish one from the other without a time-consuming and resource-intensive record review. In contrast, the Air Force is able to track the specific reasons for its servicemember separations because it uses the full array of separation codes available. DOD policy requires a separation code to be used by the military services so that DOD can track and analyze the reason for separations and evaluate DOD’s separation policy to determine if changes are needed. However, according to DOD officials, the military services can choose to use any of the six available codes that DOD provides the military services to record non-disability mental conditions, including the “condition, not a disability” code. Moreover, under the internal control standard for control activities, all transactions—such as separations of enlisted servicemembers—need to be clearly and accurately documented so they can be examined when needed; for example, to monitor trends in the reasons for servicemembers’ separations. 
The three military services had varying reasons for using the broad separation code, “condition, not a disability.” Navy and Marine Corps officials stated that they have historically used this code for most separations, but Navy officials could not explain why they use this broad code instead of using one of the separation codes specific to non-disability mental conditions. Marine Corps officials cited concerns with potential stigma the servicemember may face if a more specific code is used. Army officials had a similar concern, stating that they use the broad separation code for most non-disability mental condition separations to protect enlisted servicemembers after they leave the service. Army and Marine Corps officials told us they were concerned that employers may request the servicemember’s copy of the DD Form 214 that has the separation code on it, and having a code specific to a mental condition might stigmatize the servicemember. Army officials stated that this issue has been discussed in media articles for several years. In contrast, both Air Force and DOD officials told us that they do not have evidence that including the separation code on the DD Form 214 has caused problems for servicemembers, as suggested by the Army. An Air Force official said that, unlike the Army, the Air Force does not have information that using these codes has caused problems for separated Air Force servicemembers. Likewise, DOD officials told us that they have not heard that this is a problem. According to DOD officials, characterization of service, such as honorable or dishonorable, is usually the piece of information that employers want from the form. By including the characterization of service on both of the servicemember’s copies of the DD Form 214, DOD believes that the servicemember could provide an employer with the copy that does not contain the reason for the separation and the employer will be satisfied because the characterization of service is indicated. 
Because the three military services are using the broad separation code “condition, not a disability” for most separations, the resulting data cannot be used to identify the number of servicemembers separated for non-disability mental conditions. There is no other systematic way to track these separations; that is, without a tedious and time-consuming manual review. In one instance, for example, in response to a request from DOD for information on the number of separations for non-disability mental conditions, the Army undertook a 6-month manual review of over 2,000 servicemember files—all because it does not delineate separations for non-disability mental versus non-disability physical conditions in any data system. If DOD and the military services cannot systematically identify or periodically evaluate the number of enlisted servicemembers separated for non-disability mental conditions, they cannot assess how well the separation policy and process are working or respond specifically to key stakeholders, including the Congress, about trends in or concerns about separations for non-disability mental conditions. From fiscal years 2008 through 2012, when the services were filing compliance reports, most services did not report full compliance with DOD separation requirements for separations for personality disorder. We also found that the reports contained incomplete and inconsistent data, and DOD conducted limited review of these reports. Further, based on a review of separation policies, we found that none of the services’ policies address all DOD requirements for non-disability mental condition separations. Additionally, DOD and the military services do not have oversight processes in place to ensure compliance with DOD requirements in this regard. 
From fiscal years 2008 through 2012, DOD required the military services to monitor and report to DOD on their compliance with DOD requirements for separating servicemembers for personality disorders; however, while the services generally reported improved compliance over the 5 years of reporting, we found in the 2012 compliance reports that three of the services had not yet achieved full compliance with all of DOD’s 2008 separation requirements. For each of the 20 compliance reports the services submitted for fiscal years 2008 through 2012, the military services were required to review a sample of at least 10 percent of their personality disorder separations for the fiscal year and assess the service’s compliance with DOD’s 2008 separation requirements. Compliance, according to DOD, was achieved if the sample reviewed met a 90 percent compliance threshold for each requirement. According to DOD officials, the annual compliance reports were discontinued in 2013 because the military services’ 2012 compliance reports indicated that the services were compliant with all of DOD’s 2008 requirements. However, it is unclear how DOD came to this conclusion when our review of the 2012 compliance reports found that the Air Force, Marine Corps, and Navy did not report compliance with all DOD requirements. Specifically, the Marine Corps reported compliance below 90 percent with one requirement and the Air Force was below this compliance rate for two requirements. Further, the Navy did not report on its compliance with three of the eight separation requirements, so DOD could not have known the Navy’s level of compliance in those areas. Among the 20 compliance reports submitted during fiscal years 2008 through 2012, the services reported the most difficulty meeting the requirement to notify all servicemembers that a personality disorder diagnosis does not constitute a disability. 
For example, the Marine Corps reported 78 percent or less compliance with this requirement in 3 of its 5 reports. After finding itself noncompliant with this DOD requirement, the Air Force acknowledged that its policy incorrectly stated that servicemember notification was required only for servicemembers who served in an IDP area. However, even after the Air Force updated its policy in September 2010 and made a correction in this regard, the Air Force’s compliance with this requirement was 81 percent in fiscal year 2011 and 89 percent in fiscal year 2012. Because three of the four services were noncompliant with at least one requirement in their 2012 reports, DOD’s discontinuation of these reports was premature. The federal internal control standard for monitoring states that there should be reasonable assurance that ongoing monitoring occurs in the course of normal operations. Further, it requires reasonable assurance that deficiencies are identified and corrected or otherwise resolved. Our review of the services’ compliance reports found examples of incomplete and inconsistent information in many of the 20 compliance reports submitted by the military services to DOD between fiscal years 2008 and 2012. The Marine Corps reported 90 percent compliance in its fiscal year 2008 report and 100 percent compliance in its fiscal year 2011 report. Compliance information for reservists and National Guard members was missing. Nineteen of the 20 compliance reports did not assess compliance with separation requirements for reservists and National Guard members who were separated while not on active duty. Some military service officials stated that they did not provide information on these reservists and National Guard members because DOD did not specifically ask for them to be included in the reports. 
However, DOD instructed the Secretaries of the Army, the Air Force, and the Navy and the Commandant of the Marine Corps to include a random sampling of all personality disorder separations for each military department, in which reservists and National Guard members are included as part of their respective departments. Compliance information for servicemembers who served in an IDP area was missing. Eight of the 20 compliance reports did not indicate how many of the servicemembers in their review sample, if any, had served in an IDP area. As a result, it is not clear to what extent these reports assessed compliance with the three separation requirements that apply only to servicemembers who served in an IDP area. Further, in its fiscal year 2011 report, the Navy reported that its review sample did not include any servicemembers who served in an IDP area. Therefore, the Navy could not have assessed compliance with the three separation requirements that apply to these servicemembers, even though it reported 100 percent compliance with them. While DOD did not explicitly state that servicemembers who served in an IDP area should be included in the sample reviewed by the military services, three of the eight separation requirements apply only to such servicemembers, so the need to include them in the review sample in order for DOD to gauge compliance should have been clear. The Navy’s fiscal year 2010 compliance report was also incomplete: its findings were based on a 4 percent sample of the total number of servicemembers separated for a personality disorder in that fiscal year—not the 10 percent required by DOD. 
In its compliance report, the Navy stated that additional separations would be reviewed to fulfill DOD’s sample size requirement, but according to DOD and Navy officials, the results of this review were never reported to DOD. When asked about these additional reviews, Navy officials could not say whether the reviews were ever conducted. In addition to incomplete information in the compliance reports, we found that two of the four services—the Army and the Navy—reported inconsistent numbers in several of their compliance reports. Specifically, DOD required the services to include in each annual compliance report the total number of personality disorder separations. However, the Army’s and the Navy’s reports of these numbers were not consistent across the 5 years, as demonstrated in table 2. According to DOD officials, DOD could not find documentation of follow-up with the services regarding these inconsistencies. Based on interviews with officials from DOD and the military services, we found that DOD conducted almost no follow-up with the military services regarding the compliance reports, even though the services reported noncompliance with separation requirements across the 5 years of reports. DOD officials stated that they received the annual compliance reports from the military services and assumed that the information provided in these reports was accurate, given that they were signed by each of the three military service secretaries and a representative of the Commandant of the Marine Corps. By not reviewing the information provided, DOD did not know that the reports contained inconsistent and incomplete information, and the agency made the decision to end the compliance reports based on what we found to be faulty assurances of compliance. 
To discontinue compliance reporting and not institute any other type of oversight is inconsistent with the standards for internal control in the federal government, which state that there should generally be assurance that ongoing monitoring occurs in the course of normal operations. In addition to the problems we identified with the compliance reports focusing on personality disorder separations, we also found that the Army, the Navy, the Marine Corps, and components of the Air Force—namely, the Reserves and National Guard—have been separating servicemembers for non-disability mental conditions according to policies that are not consistent with all DOD requirements. Each of the services has policies for separating servicemembers for non-disability mental conditions; however, we found these policies to be inconsistent with DOD policy because they have not been updated to include all of the changes in DOD requirements over time. Specifically, we found the following. The Army active duty separation policy is not consistent with one of DOD’s separation requirements because it has not been expanded to apply to servicemembers separated for all non-disability mental conditions. In addition, the Army Reserves and National Guard separation policy was updated in March 2014 yet does not contain all of DOD’s separation requirements, such as the requirement that servicemembers be counseled in writing that their diagnosis does not qualify as a disability. Army Regulation 635-200, Personnel Separations: Active Duty Enlisted Administrative Separations (Washington, D.C.: Sept. 6, 2011), and Army OTSG/MEDCOM Policy Memo 14-049, Administrative Separation of Soldiers for Personality Disorder (PD) under Chapters 5-13 and 5-17, or Other Designated Physical or Mental Conditions under Chapter 5-17 (Fort Sam Houston, Tex.: June 23, 2014). 
The Navy’s separation policy has not been updated to be fully consistent with DOD policy. Navy officials stated they were unaware that DOD had revised its separation policy in 2011, and again in 2014, until we discussed this with them in May 2014 during the course of our review. The Marine Corps’ separation policy has likewise not been updated to be fully consistent with DOD policy, since the Navy is responsible for revising and implementing such policies for the Marine Corps. The Air Force Reserves and National Guard separation policy was created in 2005 and has not been updated as DOD has revised its separation requirements for non-disability mental conditions. For example, the policy does not contain the separation requirements applicable to servicemembers who served in an IDP area. Because the services have been separating servicemembers based on outdated DOD policy, some servicemembers may not have been afforded the protections intended by DOD’s updated separation requirements. For example, servicemembers who served in an IDP area may not have had their diagnosis endorsed by the military services’ Office of the Surgeon General as required by DOD and may have been separated without the benefit of a confirmation of their diagnosis by this senior medical entity. During the course of our review, we also found that the Air Force National Guard does not have a process to separate National Guard members for non-disability mental conditions. In 2013, the Air Force National Guard reported that it had not been separating Guard members for non-disability mental conditions because it did not have a process to obtain a mental health assessment or diagnosis for National Guard members. National Guard members believed to have non-disability mental conditions were either returned to duty or separated for another reason, such as misconduct or unsatisfactory performance of duty. 
According to DOD policy, when separation for unsatisfactory performance or misconduct is warranted, separating a servicemember for a non-disability mental condition is not appropriate. Under federal internal control standards, control activities should be established to help ensure that performance is correctly assessed and transactions are accurately recorded. Accordingly, there should be processes in place so that, if a servicemember is identified as potentially having a non-disability mental condition that affects the member’s ability to perform military duties, that servicemember can be separated for such a condition in accordance with DOD requirements, if appropriate. The Air Force developed a corrective action plan to address the National Guard’s inability to separate National Guard members for non-disability mental conditions; however, this plan was never implemented. According to officials, the biggest challenge in implementing the plan was that the Air Force National Guard does not have authorized mental health providers who can diagnose non-disability mental conditions. Without a mental condition diagnosis by an authorized mental health provider, a servicemember cannot be separated for a non-disability mental condition, according to Air Force policy. According to officials, the Air Force National Guard would need to be tasked and provided related funding to have these providers available. The officials could not explain why no further action was taken to correct this identified problem. Further, Air Force officials stated that they could not identify the number of servicemembers potentially affected by this problem without conducting a comprehensive file review. Air Force National Guard members who are thought to have a non-disability mental condition but are separated for another reason, such as misconduct, risk not receiving the protections of DOD’s separation requirements. 
Beyond DOD’s limited review of the military services’ compliance reports for personality disorder separations, which were discontinued after fiscal year 2012, DOD and military service officials stated they do not conduct any oversight of non-disability mental condition separations. In September 2011, DOD revised its separation policy by expanding the three requirements that apply to servicemembers who served in an IDP area to include servicemembers separated for any non-disability mental condition, not just for personality disorders. In January 2014, DOD again revised this policy and expanded the separation requirements so that all eight requirements apply to servicemembers separated for any non-disability mental condition. However, DOD officials stated that they have not required the military services to conduct any review of their compliance with DOD’s separation requirements for all non-disability mental conditions to determine whether they are being followed. As noted previously, the internal control standard for monitoring states that controls should be designed to assure that ongoing monitoring occurs in the course of normal operations. According to DOD officials, the military services are responsible for conducting oversight of their separation processes, not DOD. However, while some of the military services have review steps within their process for separating servicemembers for non-disability mental conditions, none of the services have an entity-wide oversight process, such as an annual review, to oversee separations and provide reasonable assurance that the review steps they have in place are effective at ensuring DOD separation requirements are met. 
Specifically, we found the following review processes: The Air Force separation process includes reviews of separation packages—which are prepared by a commanding officer—by the Air Force central personnel office and a Judge Advocate General (JAG) Corps attorney prior to separation. Officials from these offices told us that the purpose of the reviews is to ensure that the appropriate documentation is present to support the separation. The Army’s JAG Corps also reviews all separation packages prior to separation to ensure documentation is present and is consistent with law and policy, according to Army officials. While this is the current practice, Army officials also told us that they have plans in place to start conducting oversight of separations for non-disability mental conditions. Specifically, Army officials stated that separations for non-disability mental conditions will be included as part of the Army Organizational Inspection Program, which will review a sample of active duty servicemember separations at local commands to see if documentation is present that indicates that DOD separation requirements have been met. These inspections will not include reserve and National Guard member separations. Army officials stated they plan to conduct the first inspection in January 2015. The Marine Corps Judge Advocates review the separation packages for personality disorder separations—but not for other non-disability mental condition separations—to ensure documentation is present and is consistent with law and policy, according to Marine Corps officials. The Navy does not conduct any review of the separation package prepared by a commanding officer prior to the servicemember’s separation. 
The absence of a routine and comprehensive entity-wide oversight mechanism—such as an annual compliance review—prevents the military services from knowing whether they are complying with all DOD requirements and appropriately separating servicemembers with non-disability mental conditions. The inability to easily identify the number of enlisted servicemembers separated for non-disability mental conditions hinders oversight by DOD and the military services to ensure that servicemembers are being separated appropriately. For example, without such data, DOD cannot determine how many separated servicemembers have served in IDP areas—who may be at greater risk of having a service-related mental condition, such as PTSD—or determine whether these servicemembers have been afforded the protections intended by the separation requirements, such as ensuring they are assessed for service-related mental conditions prior to being separated for a non-disability mental condition. Further, the lack of such data prevents DOD from conducting analyses—such as trends in non-disability mental condition separations over time—that are important for determining whether the separation policy is working and, if not, how it should be changed. Absent an effective process for monitoring and reporting compliance, DOD and the military services cannot be assured that the military services are complying with DOD requirements for separating servicemembers with non-disability mental conditions. The fact that some military services were not aware of or had not yet aligned their policies or processes with updated DOD requirements for separating servicemembers with non-disability mental conditions is of particular concern. Because the services have been separating servicemembers based on outdated DOD policy, the military services may not have been affording all servicemembers the protections that DOD intended through its recent updates of separation requirements for non-disability mental conditions. 
As a result, some servicemembers may have been separated for non-disability mental conditions inappropriately. It is also of concern that, although the Air Force reported its inability to separate National Guard members for non-disability mental conditions in 2013, it has made no progress in correcting this issue. Meanwhile, these members risk not receiving the protections intended by DOD’s separation requirements. To improve identification of enlisted servicemembers separated for non-disability mental conditions, and to provide reasonable assurance that enlisted servicemembers, including Air Force National Guard members, are separated for non-disability mental conditions as appropriate and in accordance with DOD requirements, we recommend that the Secretary of Defense take the following six actions: Direct the Under Secretary of Defense for Personnel and Readiness and the Secretaries of the Army and the Navy and the Commandant of the Marine Corps to use the separation codes specific to a non-disability mental condition or develop another uniform method to track servicemembers who have been separated for specific non-disability mental conditions so that this information can be easily retrieved. Direct the Under Secretary of Defense for Personnel and Readiness and the Secretary of the Air Force to take steps to ensure there is an appropriately staffed process to identify and administratively separate enlisted National Guard members who are unable to function effectively in the National Guard because of a non-disability mental condition. Direct the Secretaries of the Army, the Air Force, and the Navy and the Commandant of the Marine Corps to update their services’ administrative separation policies to be consistent with DOD regulations for those servicemembers separated for all non-disability mental conditions. 
Direct the Secretaries of the Air Force and the Navy and the Commandant of the Marine Corps to implement processes to oversee separations for non-disability mental conditions, such as reinstituting the requirement of annual compliance reporting on a sample of administrative separations, using current DOD policy requirements as review criteria for servicemembers of all military services and their Reserve components. Direct the Secretary of the Army to ensure that the Army’s planned oversight of separations for non-disability mental conditions is implemented and incorporates reservists and National Guard members separated for such conditions, or that the Army implement another process to oversee such administrative separations using current DOD policy requirements as review criteria for all servicemembers, including reservists and National Guard members. Direct the Under Secretary of Defense for Personnel and Readiness to review any processes used by the military services to oversee such administrative separations to ensure compliance with DOD requirements. We provided DOD and VA a draft of this report for advance review and comment. DOD provided written comments, which we have reprinted in appendix II. DOD generally concurred with our six recommendations. Regarding our recommendation that DOD direct the Secretaries and the Commandant to use the existing separation codes specific to a non-disability mental condition or develop another uniform method to track such servicemember separations, DOD provided several reasons why it would not use the existing separation codes specific to non-disability mental conditions, including the possible stigmatization of the servicemember, but agreed with the need to develop a method to uniformly track such separations. In its comments, DOD did not outline how it would develop a method to track these servicemember separations or when it would implement this or our other recommendations. 
DOD and VA also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, the Air Force, and the Navy; the Commandant of the Marine Corps; the Under Secretary of Defense for Personnel and Readiness; the Secretary of Veterans Affairs; and other interested parties. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Like any veteran separated from military service under other than dishonorable conditions, veterans separated for non-disability mental conditions are eligible to receive Department of Veterans Affairs (VA) health care benefits, if they meet certain service requirements. Generally, servicemembers must have served 24 continuous months, or the full period for which they were called or ordered to active duty, in order to be eligible to receive VA health care benefits. Once VA determines the eligibility of veterans who have applied for VA health care, it assigns them to an enrollment priority group. Generally, VA assigns veterans separated for non-disability mental conditions to priority groups 5 through 8. VA assigns the veterans as follows: If the veteran has served in an imminent danger pay (IDP) area: the veteran is afforded what is known as enhanced enrollment status and is generally assigned to priority group 6. 
The veteran is not subject to copayments for visits or medications for conditions potentially related to their service in an IDP area, such as physical injuries incurred while in service. The veteran remains enrolled as priority group 6 for 5 years after discharge from the military. If during this 5-year, post-discharge period the veteran gains an eligibility status that affords enrollment in a higher priority group, such as the finding of a compensable service-related medical condition, the veteran is immediately moved to the higher priority group. Once the 5-year term has ended, the veteran remains enrolled in the VA health care system, but his or her priority for enrollment is reassessed and the veteran may be shifted to priority group 5, 7, or 8 depending on household income. If the veteran has not served in an IDP area: the veteran is assigned to priority group 5, 7, or 8 based on income level. Veterans enrolled in these priority groups must agree to pay applicable copayments. Veterans remain enrolled in the VA health care system but are reassessed annually and placed in a priority group based on the veteran’s household income. (See fig. 1.) Once assigned a priority group and enrolled in VA’s health care system, the veteran is eligible to receive VA’s medical benefits package, which covers most of VA’s medical services, such as preventive care and outpatient and inpatient diagnostic and treatment services, and includes services for mental health conditions. In addition to the contact named above, Marcia A. Mann (Assistant Director), Zhi Boon, Deirdre Brown, Cathleen J. Hamann, Lisa Opdycke, Vikki Porter, and Laurie F. Thurber were major contributors to this report. Jacquelyn Hamilton provided legal support.
Non-disability mental conditions, such as personality disorders, can render a servicemember unsuitable for military service and can lead to an administrative separation. GAO was mandated to report on non-disability mental condition separations. This report examines the extent to which (1) DOD and the military services are able to identify the number of enlisted servicemembers separated for non-disability mental conditions, and (2) the military services are complying with DOD requirements when separating enlisted servicemembers for non-disability mental conditions, including personality disorders, and how DOD and the military services oversee such separations. GAO analyzed DOD and the military services' separation policies, policies related to tracking separations, reports the military services submitted to DOD regarding compliance with separation requirements, and interviewed DOD and military service officials. The Department of Defense (DOD) and three of the four military services—Army, Navy, and Marine Corps—cannot identify the number of enlisted servicemembers separated for non-disability mental conditions—mental conditions that are not considered service-related disabilities. For most non-disability mental condition separations, these services use the broad separation code, “condition, not a disability,” which mixes non-disability mental conditions with non-disability physical conditions, such as obesity, making it difficult to distinguish one type of condition from the other. In contrast, the Air Force is able to identify such servicemembers because it uses all five of the separation codes specific to non-disability mental conditions. DOD policy requires the military services to use a separation code so that DOD can track and analyze separations. Moreover, federal standards for internal control state that all transactions need to be clearly and accurately documented and readily available for examination when needed. 
The three services had varying reasons as to why they use the broad separation code. For example, Army officials believed that stating in servicemembers' discharge papers that they were discharged for non-disability mental conditions might stigmatize them with future employers. However, DOD stated that there are ways to protect servicemembers in this regard by providing them with discharge papers that are more general and do not disclose specific reasons for discharge. By not systematically identifying or periodically evaluating the number of separations for non-disability mental conditions, DOD and the services cannot assess how well the separation policy and process are working or inform key stakeholders, including the Congress, about separation frequency, trends, and other data. The military services lack separation policies that address all of DOD's eight requirements for separating servicemembers with non-disability mental conditions; both DOD and the services also lack oversight of such separations. From fiscal years 2008 through 2012, DOD required the services to report on their compliance with DOD requirements for personality disorder separations, one of the non-disability mental conditions. In their fiscal year 2012 reports, most of the services did not demonstrate compliance with all eight requirements, and many of the 20 reports contained incomplete and inconsistent information. For example, 19 reports were missing information on reserve members. DOD discontinued these reports and did not institute any other oversight, which is inconsistent with the internal control standard for monitoring. GAO also found, based on a review of the services' separation policies, that the services have not updated their policies to meet all DOD requirements for non-disability mental condition separations. For example, Navy officials stated that they were unaware that DOD separation policies had changed since 2008 until GAO's review. 
DOD officials stated that the military services are responsible for conducting oversight of their separation processes; however, GAO found that the military services do not have processes to oversee non-disability mental condition separations. Without up-to-date and consistent policies and oversight processes, DOD and the military services cannot ensure that servicemembers separated for non-disability mental conditions have been afforded the protections intended by DOD's separation requirements and have been appropriately separated for such conditions. GAO recommends that DOD and the military services develop a method to identify the number of servicemembers separated for non-disability mental conditions and take a number of actions so that their policies and processes ensure that servicemembers are appropriately separated for non-disability mental conditions in accordance with DOD's separation requirements. DOD generally concurred with GAO's recommendations but did not provide information on how or when it plans to implement them.
In 1986, IRS began to modernize its technology for processing tax returns, enforcing the tax laws, and assisting taxpayers. IRS was several years into its system modernization efforts before it began to study the implications for its organization and work processes. In response to suggestions from us and others, IRS decided that it should take this opportunity to redesign its organization and processes for administering the tax laws. Hence, TSM has become part of IRS’ business vision for both technological and organizational change. IRS’ business vision includes the following basic organizational components: (1) submission processing centers to receive paper returns, correspondence, and other tax documents; (2) customer service centers to interact with taxpayers mainly by telephone; (3) district offices to use face-to-face contacts to assist taxpayers and enforce the tax laws; and (4) computer centers to maintain taxpayer accounts, process electronically filed returns, and receive electronic fund transfers. IRS’ customer service vision is a plan for changing the way IRS interacts with taxpayers. IRS does this in a number of ways, such as answering inquiries, clarifying and correcting tax returns, and collecting unpaid taxes. The agency has decided that these interactions are too fragmentary in its current organization and that taxpayers who contact IRS are too often told to call or write to other offices. It thus developed a plan to consolidate, in 23 customer service centers, work that has been done in at least 70 organizational units in 44 locations, including much work currently done by correspondence. As shown in figure 1, this consolidation of functions involves a major restructuring of IRS operations, including a reduction in the number of staff doing customer service work in 1994 from about 29,000 to an estimated 22,240 in 2001. 
Customer service centers would absorb the functions of toll-free taxpayer assistance sites, which answer calls about tax law and procedures, taxpayer accounts, and notices that taxpayers receive from IRS. In addition, customer service centers would attempt to convert to the telephone some work now done by correspondence in the collection, adjustment, taxpayer relations, and underreporter branches of service centers. Customer service centers would also absorb the workload of the current automated collection call sites, which contact taxpayers to secure payments and answer calls from taxpayers who are the subjects of collection actions. Finally, customer service centers would handle requests for tax forms in lieu of the current forms distribution centers. We undertook this review because of the magnitude of the changes IRS was planning for its customer service activities. Our objectives were to describe IRS’ goals for its customer service vision and its plans for achieving them, determine the current status of implementation, and identify major challenges facing IRS in moving toward its vision. To gather information on these objectives, we reviewed numerous IRS studies, plans, IRS’ National Office customer service site visit reports, and training materials. 
The key documents used as baselines for following the progress of customer service were IRS’ September 1993 Business Plan; April 1994 Business Master Plan; IRS Future Concept of Operations, Volume V, Customer Service Center, August 1994; November 1994 Customer Service Implementation Information; and February 1995 draft Integrated Transition Plan for Customer Service. We also interviewed IRS’ National Office officials responsible for customer service and related technology projects; reviewed IRS’ customer service workload model that was used for forecasting staff needs and the model used for selecting customer service sites, and discussed the latter model with officials of IRS’ Office of Cost Analysis; interviewed the Assistant Director of Negotiations at the National Treasury Employees Union (NTEU), which represents IRS’ bargaining unit employees; visited three of the seven customer service centers that had begun operations as of June 30, 1995 (Nashville, TN; Fresno, CA; and Cincinnati, OH); and reviewed related reports done by us, IRS’ Internal Audit, and the National Research Council. We requested comments on a draft of this report from the Commissioner of Internal Revenue or her designee. On August 21, 1995, we met with IRS officials to obtain their comments on the draft. IRS representatives at that meeting included the Assistant Commissioner for Taxpayer Services; the Customer Service Site Executive; the Director, Taxpayer Service Design and Review; and the Director, Collection Customer Services. Their comments are summarized on pages 24 and 25 and incorporated elsewhere in the report where appropriate. We did our work from January 1994 to June 1995 in accordance with generally accepted government auditing standards. IRS’ customer service goals are to greatly improve both its service to taxpayers and its efficiency in using resources. IRS also expects these improvements to contribute to a higher level of compliance with the tax laws. 
IRS intends to achieve these goals by consolidating several taxpayer interaction functions in customer service centers, giving customer service representatives broad training and responsibility, and using better technology. IRS’ customer service vision, if achieved, would represent a substantial change from current capabilities and performance. One goal of the customer service vision is to resolve 95 percent of taxpayer inquiries after one contact, which IRS refers to as initial contact resolution. In its 1993 Business Plan, which describes IRS’ business vision, IRS stated that its interactions with taxpayers were fragmented among too many organizational units. One of IRS’ reasons for consolidating taxpayer interaction functions is its perception that taxpayers are often referred from one office to another or told to write a letter to resolve their inquiries. The difficulty of achieving this goal and of measuring its achievement is illustrated by one of IRS’ first experiences with initial contact resolution in 1991. By giving telephone assistors additional computer terminals with access to taxpayer accounts and more authority to resolve issues, IRS tried to reduce the contacts taxpayers had to make to resolve questions about their accounts. However, we reported that IRS was not reliably measuring its performance in this area. For fiscal year 1993, IRS reported achieving 96-percent initial contact resolution for these types of calls, but we found that about 80 percent of a small sample of these callers had to make additional contacts to resolve their account concerns. IRS acknowledged that its methodology for measuring initial contact resolution was flawed and agreed to develop a more accurate way to measure initial contact resolution for all types of calls. 
IRS intends to reach its initial contact resolution goal by bringing functions together in customer service centers, giving its representatives broader responsibility and authority, and giving them better technological tools with easy access to the needed data. Another IRS goal is to improve telephone accessibility. Currently, many taxpayers find it difficult to reach IRS by telephone. In fiscal year 1994, 73 percent of all call attempts received busy signals. IRS’ Tele-Tax, an automated system in which taxpayers can listen to selected topic tapes and get refund status information, did better, recording busy signals only 13 percent of the time. IRS’ business vision for customer service centers calls for greatly reducing busy signals. Callers would be satisfied by an automated response program, reach a representative immediately, be put on hold, or leave a message to be called back. IRS’ plans call for having automated response programs resolve up to 45 percent of inquiries without intervention by a representative. IRS has an automated system that should allow it to route calls around the nation to balance peak and slack times in different areas. However, considering the shortcomings of current telephone accessibility to IRS, achievement of full accessibility is an ambitious goal. In addition to providing improved service to taxpayers, IRS plans to achieve efficiencies of personnel and facilities use by combining functions, providing better data and technology to customer service representatives, and moving most of the correspondence work to the telephone. The 23 customer service centers are expected to assume the workload currently done at 32 taxpayer services toll-free sites, 23 automated collection call sites, 3 toll-free forms distribution sites, and several branches in the 10 IRS service centers, such as adjustments and collection. 
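The arithmetic behind these staffing and automation goals can be checked with a back-of-the-envelope calculation. The figures below are from this report; the assumption that total inquiry volume stays constant is ours, made only for illustration.

```python
# Back-of-the-envelope check of IRS' automation and staffing targets.
# Figures are from this report; the constant-volume assumption is ours.

AUTOMATED_SHARE_GOAL = 0.45   # inquiries IRS plans to resolve by automated programs
CURRENT_STAFF = 29_000        # customer service staff, 1994
PLANNED_STAFF = 22_240        # estimated customer service staff, 2001

# Share of inquiries that would still require a live representative.
live_share = 1 - AUTOMATED_SHARE_GOAL

# If total inquiry volume were unchanged, per-representative workload under
# the plan relative to today (live inquiries per staff member).
relative_workload = (live_share / PLANNED_STAFF) / (1.0 / CURRENT_STAFF)

print(f"Inquiries still needing a representative: {live_share:.0%}")
print(f"Per-representative workload vs. 1994: {relative_workload:.2f}x")
```

Under these assumptions, meeting the 45-percent automation goal would more than offset the planned staff reduction; if automation falls short, each remaining representative would face a heavier load than today.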
In many locations, existing operations are to be consolidated to form the customer service centers, but operations would be closed at 21 locations that currently house 14 taxpayer services sites, 9 collection sites, and 2 forms distribution sites. (See app. I.) Existing IRS operations have drawn their workloads from geographically defined service areas; until 1995, employees had limited access to taxpayer data outside their service areas. When dealing with a taxpayer from another area, they had to request data from the IRS service center responsible for that area and recontact the taxpayer. In early 1995, IRS implemented a networking procedure among the 10 service center computers so that customer service representatives could have on-line access to some taxpayer account data nationwide, rather than only within their service areas. IRS plans to give its customer service representatives additional on-line access to taxpayer data nationwide. IRS also has an automated system that routes calls nationwide when one area is overloaded, and plans for the system to route specific calls and assign compliance cases on a nationwide basis to those customer service representatives who have the appropriate level of expertise to resolve the taxpayer’s concern. Its goal is to even the workload as much as possible and use personnel efficiently. IRS has projected that taxpayer interaction work now done by about 29,000 people will be done by about 22,240 at customer service centers. Under a redeployment understanding with the NTEU, IRS intends to gradually reassign employees to customer service centers from current positions that are being eliminated. IRS expects the planned reduction to occur through attrition. IRS employees interacting with taxpayers have often lacked easy access to information needed to resolve cases. 
This has not been just a geographic limitation, but also a result of the fragmentation of taxpayer data into stand-alone systems, each of which stores different taxpayer data. Also, the current systems are not very user-friendly because they require extensive knowledge of procedures and command codes. IRS intends to provide its customer service representatives with on-line access to a national database encompassing all the taxpayer’s account data and reference information needed to resolve most taxpayer accounts. In addition, new case processing software should help representatives work through cases more efficiently. Using a taxpayer’s social security number to obtain case history information, the software should automatically assemble the relevant information on screen, provide questions and prompts for the representative, and perform calculations for updating the account. IRS is moving toward this goal by first developing an Integrated Case Processing (ICP) project that links together existing information systems and helps representatives use them. In fiscal year 1994, IRS service centers received over 21 million pieces of correspondence from taxpayers. IRS has made progress in recent years in handling its correspondence. However, we reported in 1994 that IRS still had problems with timeliness, inadequate responses, and repeat correspondence on the same subject. One important assumption in the customer service vision is that most correspondence work from service centers can be converted to telephone and that this conversion will increase the productivity of IRS’ staff. As of June 30, 1995, the Fresno, CA, Customer Service Center had substantial experience in converting correspondence, and the Cincinnati, OH, Customer Service Center had just begun. IRS has calculated that the Fresno prototype Customer Service Center, working by telephone, has been at least twice as productive as the service center adjustments branch, working by correspondence. 
However, the director of the study noted that the range of inquiries handled by the customer service center may not have been as complicated as at the adjustments branch. In addition, the study showed that the cost to process inquiries was lower using the customer service approach; however, telephone charges were not factored into the cost. Thus, while converting work from written correspondence to the telephone appears on the surface to be more efficient and less costly, the extent of savings from this approach is still unknown. Success in converting correspondence to the telephone may require that IRS have authority to resolve more issues based on oral evidence from taxpayers rather than signed statements. Presently, some oral evidence is acceptable for such issues as penalty abatements up to a certain dollar amount. An IRS official told us that IRS might need legislation to remove some of the existing requirements for written evidence to meet its workload conversion goal. Achieving this goal would also depend on the willingness of taxpayers to resolve issues by telephone, which in turn may depend on the accessibility of the telephone service. If IRS does not succeed in converting a large percentage of correspondence workload to the telephone, it will need to retain more people than it is currently projecting to handle this workload. IRS has stated that achievement of its customer service vision will contribute to improved compliance with tax laws. IRS believes that compliance will improve as a result of (1) improved service to taxpayers and (2) earlier access to taxpayer data that will allow IRS to follow up on unreported or underreported income and other problems with returns. Earlier access would come from the improved technology expected for processing returns and checking the returns against information from employers and other reporting sources at the time the return is filed. 
IRS estimates that through increased voluntary compliance, to which customer service will contribute, and increased enforcement efforts, overall compliance will increase from 86.4 to 87.2 percent by 1997. IRS estimates that the increase would generate about $6.7 billion more tax revenue in 1997 than in 1994. IRS has described its customer service vision in various planning documents associated with its modernization effort, such as its Future Concept of Operations. It has also selected sites for its customer service centers, experimented with prototype centers, projected staffing needs, developed a schedule for start-up operations, and formulated a plan for progressively expanding the workload of new centers. As of June 30, 1995, the actual implementation of the customer service vision was still in an early stage. In December 1993, IRS announced the locations of future customer service centers. These included all 10 current service centers and 13 of the 35 locations where collection and/or taxpayer services were conducted by telephone. The 23 locations chosen and the 21 locations where telephone operations are to be discontinued are listed in appendix I. In January 1994, IRS initiated the Fresno Service Center as the prototype site for experimenting with customer service operations at a service center, and the Nashville district as the prototype for creating a customer service center out of existing taxpayer services (TPS) and automated collection service (ACS) operations. According to IRS’ implementation plan, these two types of centers would follow different paths of development, but would eventually come to perform most of the same functions. In late 1994, IRS began customer service operations on a small scale at the Cincinnati, OH, and Brookhaven, NY, Service Centers, and in early 1995, at the Kansas City, MO; Andover, MA; and Philadelphia, PA, Service Centers. IRS’ transition to the customer service vision is proceeding more slowly than originally foreseen. 
The 1993 Business Plan estimated that 10,000 of the eventual 22,240 customer service employees would be on site by the end of fiscal year 1996, with centers fully operational in 2001. While IRS still plans to begin operations at all 23 locations by the end of 1996, the scope of their work will be limited, and current plans call for about 5,000 staff in place. IRS now expects full transition to occur some time after 2001. Several indicators of IRS’ progress in reorganizing for customer service are summarized as follows: IRS intends to set up 23 customer service centers, with staffs varying from 186 to 2,609. Each center requires site preparation, acquisition of furniture and equipment, installation of software and telecommunications equipment, and training of staff. As of June 30, 1995, 7 centers had begun operations, 3 of which had fewer than 80 staff to perform very limited workloads. IRS intends to discontinue telephone operations at 21 locations, which include 14 taxpayer services telephone sites, 9 collection call sites, and 2 forms distribution sites. As of June 30, 1995, it had closed 6 locations. Figure 2 summarizes the reorganization of IRS telephone operations foreseen in the customer service plan. IRS intends to redeploy 22,240 staff to customer service centers. As shown in figure 3, about 925 of these staff were in place on June 30, 1995. IRS has not yet estimated the volume of correspondence it expects to convert to telephone work. Only Fresno had attempted any conversion at the time of our review, and IRS had conducted a study to assess the effect of Fresno’s initial efforts. The study concluded that these efforts may have reduced the service center’s correspondence receipts related to selected notices by about 91,000, from 600,000 to 509,000. According to an IRS study, the Fresno adjustments branch recorded a 15-percent decline in incoming correspondence after IRS began including the customer service center’s telephone number on outgoing notices. 
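The correspondence figures from the Fresno study are internally consistent, as a quick calculation shows. The receipt counts below are taken from the study as reported above; the percentage is derived from them.

```python
# Fresno study figures: correspondence receipts related to selected notices.
receipts_before = 600_000
receipts_after = 509_000

reduction = receipts_before - receipts_after
decline_pct = reduction / receipts_before * 100

print(f"Reduction: {reduction:,} pieces ({decline_pct:.1f}% decline)")
# -> Reduction: 91,000 pieces (15.2% decline)
```

The derived 15.2-percent figure is close to the 15-percent decline in incoming correspondence reported for the Fresno adjustments branch, although the two measurements covered somewhat different workloads.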
Fresno planned to expand its conversion efforts in 1995, and initial efforts were also planned at some new customer service centers. As a transitional approach to case processing, IRS is developing the ICP project. ICP is being developed in four increments. The first increment has been tested at Nashville, and the development and testing of the next two increments are scheduled through 1997. Scheduling for increment four has not yet been determined. The system is to be modified for use with a planned new comprehensive database of taxpayer information. IRS has been hampered to some extent in implementing its customer service plans by a lack of clarity in management responsibilities. This is reflected both at the senior management level for achieving the customer service vision and at lower management levels for specific TSM projects crucial to the vision. The lack of clarity at both management levels can be traced to IRS’ current organizational structure, which does not fully conform to the plans for the customer service vision, but uncertainty about how to implement IRS’ new core business systems management approach has also contributed to the confusion. IRS officials have a plan that, if implemented in a timely and thorough manner, should clearly identify responsibilities at the senior management level. An integral part of IRS’ core business system approach is its requirement for designation of “owners” of each core business system and the underlying subsystems and processes. IRS has defined an owner as an individual assigned to be responsible and accountable for all the activities associated with a core business system, subsystem, or process. This responsibility often includes establishing business requirements, setting quality measures, and overseeing the development of new products and services intended to enhance the performance of the core business system in meeting taxpayers’ needs. 
IRS has designated owners for the six core business systems and for most of the subsystems and processes that comprise the majority of IRS’ activities. Two of these core business systems, Managing Accounts and Ensuring Compliance, include the subsystems and processes that are to be transferred to customer service. Thus, the subsystems and processes that are to make up customer service are divided between two current IRS core business systems. (As the boundaries of the two functional organizations involved, Taxpayer Services and Compliance, are identical to the core business system boundaries, customer service responsibilities are split between the functional organizations also.) Because IRS’ current organizational structure does not match the structure planned for customer service, no owner for the emerging customer service organization has yet been designated. The result is that responsibility for carrying out the many projects and tasks necessary to work toward the goals of the customer service vision is divided between two IRS organizations, which are headed by two different senior managers called chiefs, who are also responsible for directing current returns processing, taxpayer services, examination, and collection activities. The divided ownership of the components of the customer service vision also raises the question of how the new customer service organization that will emerge as more sites are rolled out will be managed. Traditionally, IRS offices that carried out the service activities being combined at customer service sites have been part of a district and/or regional office, with the National Office providing policy guidance and oversight. However, significant changes have recently been made in both the number and responsibilities of IRS’ District and Regional offices. And, in some instances, components of IRS’ National Office have been structured in such a way that they have greater control over field operations than has traditionally been the case. 
IRS’ top leaders will have to decide which office will have responsibility for the emerging customer service organization and who is to lead it. IRS officials are aware of the potential problems associated with the divided ownership of the components planned for customer service and plan to address them soon. The Customer Service Site Executive told us that a group of executives had been selected to study the issue and make recommendations on how to deal with it. The issue we identified at management levels below the chiefs involves (1) assigning process owners responsibility for seeing that specific products and services needed to further the customer service goals are successfully carried out and (2) making sure that assigned process owners effectively carry out their responsibilities. Assigning process owners at the operational level has been difficult because there was confusion about who they should be and what their specific roles and responsibilities should be, especially in activities involving more than one core business system. IRS did not assign a process owner for its Voice Balance Due (VBD) interactive telephone system until late in its development, thus risking the need for changes late in the development process. The VBD system cuts across the Managing Accounts and Ensuring Compliance core business systems. IRS officials told us that in cases where core businesses overlap there has been confusion about who the owner should be. In the case of the VBD system, Managing Accounts personnel told us they owned it, but because the system includes collection activities, the Customer Service Site Executive’s Office believed the system belonged to Ensuring Compliance. However, the Director of Ensuring Compliance said that his office was not the owner of the project, although it had been involved. 
IRS officials in the Telephone Routing Interactive System (TRIS) Project Office, the developer of the new telephone systems, told us that they needed to have process owner input for the VBD quality measures so that they could plan and collect the management data needed to measure the system’s performance. TRIS officials also told us they did the best they could to determine the management data needed for the pilot test. Near the end of the pilot test, in June 1995, IRS assigned the Assistant Commissioner for Taxpayer Services—a process owner—responsibility for all the interactive systems, including the VBD. This was over 2 years after the TRIS Project Office began the design and development of the VBD. As a process owner, the Assistant Commissioner for Taxpayer Services is responsible for coordinating the requirements and overseeing the development for all interactive telephone systems, including those that involve more than one core business system. Although IRS officials told us the VBD system was working well, they agreed that owners should be clearly designated early in the design of systems to make sure they will meet the needs of the customer service sites and the taxpayers who will use them. In the case of two other interactive telephone systems, the Refund and Location systems, officials in the Managing Accounts Core Business System assumed the role of owner, but did not provide the TRIS Project Office with timely input in developing quality measures. Quality measures are important because IRS plans to use the measures to determine the success of the new systems and whether to implement them in the field. To avoid delays in development, the TRIS Project Office established quality measures and began testing the systems based on those measures. About 3 weeks into the 30-day test period, the owners required that additional measures be tested, and the pilot period was extended for at least 30 days. 
Managing Accounts officials who worked on the quality measures said that they wanted to be involved with TRIS earlier, but that their ongoing workload prevented them from doing so. IRS’ top management has not made clear the criteria for selecting owners and what their roles and responsibilities are to be. This is particularly important in situations where TSM projects such as the interactive telephone systems involve more than one core business system. In reviewing IRS’ early progress toward its customer service vision, we identified four important challenges that IRS must cope with and which we intend to monitor: (1) how to manage the transition to a substantially different organization while meeting ongoing workload demands, (2) how to define the responsibilities of customer service representatives to achieve a successful balance of generalization and specialization, (3) how to realize the expected benefits of new technology, and (4) how to measure success and balance multiple workload goals in the new centers. One of the challenges IRS faces will be managing the transition to the customer service vision while continuing to meet the ongoing workload demands of answering taxpayer inquiries, managing taxpayer accounts, and collecting unpaid taxes. Each group of employees who are redeployed as customer service representatives requires classroom and on-the-job training before the group can assume its share of the workload. Classroom training time for recent start-up operations has varied from 5 to 9 weeks, depending on the prior experience of employees. An IRS official responsible for customer service did not know at the time of our review how much training would be needed for the full scope of a customer service representative’s responsibilities. IRS intends to gradually close telephone operations at 21 locations through 1999, in the meantime keeping them adequately staffed to handle their share of the ongoing workload. 
In the spring of 1995, concerned about premature attrition at the sites scheduled for closing, IRS responded by hiring temporary employees who do not have redeployment rights. The customer service representative position has potentially very broad responsibilities, involving such things as (1) answering some tax law questions, (2) soliciting information from taxpayers and adjusting accounts, (3) helping taxpayers arrange payment plans, (4) contacting taxpayers to obtain payments, and (5) answering calls from taxpayers who have been subject to collection actions. Experience at the two prototype centers demonstrated the pitfalls of expecting too much too soon of the representatives. Nashville, which has both taxpayer services and automated collection call sites, tried to achieve maximum versatility of the customer service representatives in its prototype by creating a position with customer service expertise in collection work, taxpayer account work, and tax law matters. This involved cross training or “blending” some of the employees from the two call sites. After experiencing some accuracy and productivity problems, Nashville adopted less ambitious goals for blending functions, returning to a division of responsibility among its customer service representatives. Fresno’s prototype, based at a service center, focused primarily on responding to some of the service center’s written inquiries by telephone. It also tried to broaden its workload by accepting some tax law inquiries from neighboring taxpayer services sites. However, although the customer service representatives received some training in tax law, they nevertheless experienced difficulties with the tax law questions, and Fresno has since discontinued this workload. While IRS intends that the case processing software it is developing will provide guidance to customer service representatives, its expectations of them will nevertheless be substantial. 
In training and assigning work to them, IRS will have to find a balance between generalization and specialization. Expecting each representative to handle any issue that comes along could result in many mistakes. On the other hand, too much division of responsibility could undermine the flexibility needed for efficient operations and the goal of providing one-stop service to taxpayers. IRS’ task is complicated by the redeployment understanding it reached with NTEU. One of the considerations in filling customer service positions is that preference must be given according to the seniority of employees whose jobs will be adversely affected by reorganization. While some of these employees work at the current taxpayer services and collection call sites, more than half are in service center positions that do not involve telephone interaction with taxpayers. Much of the workload, as well as the environment of working by telephone, will be new to these employees. IRS found that many of the employees who exercised redeployment rights for the initial customer service start-up operations lacked relevant experience and needed considerable training. The potential effectiveness of customer service centers will depend considerably on the successful application of information technology. IRS’ vision for customer service relies directly on two TSM projects, TRIS and ICP, and indirectly on several others, such as Electronic Filing and the Document Processing System. IRS is installing new telephone equipment that greets callers with recorded messages. Taxpayers are able to choose from menus to route themselves to the appropriate source of assistance. This equipment allows management to monitor call volumes on specific subjects and adjust staffing to reduce waiting time. In addition, IRS is developing interactive systems that should allow taxpayers to accomplish various actions without speaking to a customer service representative. 
This equipment has been in use at some taxpayer services sites in recent years. IRS experience has shown that a large majority of callers do use the menus to route their calls, thus reducing the need for a group of representatives known as “screeners” who would identify the nature of a taxpayer’s inquiry and transfer it to the appropriate area for resolution. However, IRS studies have also shown that the success of such programs is very sensitive to menu design. In 1994, for example, some features of the menus increased the amount of time callers spent listening to menu options and also increased the percentage of callers who opted for a live receptionist instead of routing themselves. IRS is attempting to refine its menus to minimize these problems. IRS anticipates using interactive programs for about 30 different purposes. As of June 30, 1995, two programs had been in use for some time nationwide: interactive menus for self-routing of calls and recorded tax law information/refund status (Tele-Tax). Testing of three additional programs was completed in June 1995 at the Nashville prototype, and the programs are now being evaluated. They include one to be used by taxpayers who receive notices of taxes owed. IRS plans to offer taxpayers who select this option a chance to arrange for an extension of their payment due date or an installment payment plan. IRS expects that these interactive programs will be able to handle many routine taxpayer inquiries and transactions without assistance from a customer service representative. This would give representatives more time to handle more complicated inquiries and work on compliance issues, such as unreported or underreported income, improperly completed returns, and unpaid taxes. IRS has projected that 45 percent of calls can be satisfied by interactive programs. 
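The self-routing idea can be illustrated with a minimal sketch. The topic names and call mix below are hypothetical — the report does not describe IRS’ actual menu structure — and the mix is deliberately chosen so that 45 percent of the simulated calls resolve automatically, matching IRS’ projection.

```python
# A minimal, hypothetical sketch of interactive-menu call routing.
# Topic names and the call mix are invented for illustration only.

# Topics an interactive program can fully resolve without a representative.
AUTOMATED_TOPICS = {"refund_status", "tax_law_recording", "payment_extension"}

def route_call(topic: str) -> str:
    """Route a self-selected menu topic to automation or a live representative."""
    return "automated" if topic in AUTOMATED_TOPICS else "representative"

# A simulated batch of 100 calls, mixed so that 45 resolve automatically.
calls = (["refund_status"] * 30 + ["tax_law_recording"] * 10 +
         ["payment_extension"] * 5 + ["account_adjustment"] * 40 +
         ["collection_notice"] * 15)

automated = sum(route_call(c) == "automated" for c in calls)
print(f"Resolved by automation: {automated}/{len(calls)} ({automated / len(calls):.0%})")
# -> Resolved by automation: 45/100 (45%)
```

As the report notes, the real outcome depends heavily on menu design and on taxpayers’ willingness to route themselves rather than opting for a live receptionist.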
However, if programs are not well designed, or taxpayers are simply reluctant to use them, representatives would need to field many more calls than anticipated, and productivity gains could be reduced. IRS eventually wants its customer service representatives to interact with a consolidated database. In the meantime, IRS is developing ICP to overcome the fragmentation of taxpayer data in its current systems. ICP is being developed in four increments that will progressively expand the capability of representatives to interact with the databases for taxpayer accounts, collection, examinations, and underreported income. ICP is to provide a series of questions and prompts to help representatives work through various kinds of taxpayer issues, as well as make queries and updates to account data with assurance that the data will be reconciled among the different databases. Under ICP, IRS also intends to develop a workload management system that would allocate representatives’ time to various functions, create electronic case folders, and assign cases to representatives who have been certified to handle the issues involved. As of June 30, 1995, the first increment of ICP had been piloted at Nashville, allowing customer service representatives to access existing databases from a single terminal. Through 1998, IRS plans to develop, test, and install the remaining increments, as well as train representatives in their use. ICP includes plans to transfer IRS data from its existing separate databases to its eventual consolidated database. Customer service productivity would be reduced if ICP fails to deliver the data access and analytical capabilities envisioned for customer service representatives. IRS’ projections of customer service workload are dependent on certain TSM projects not directly used by customer service centers, such as electronic filing of returns and electronic scanning of paper returns. 
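The case-assignment idea in ICP’s planned workload management system — electronic case folders routed only to representatives certified for the issues involved — can be sketched as follows. The representative names, issues, and least-loaded tie-breaking rule are our illustrative assumptions, not details from IRS’ plans.

```python
# A hypothetical sketch of certified case assignment under ICP's planned
# workload management system. Names, issues, and the least-loaded rule
# are illustrative assumptions, not IRS design details.

from collections import defaultdict

# Which issues each representative has been certified to handle.
certifications = {
    "rep_a": {"adjustment", "tax_law"},
    "rep_b": {"collection"},
    "rep_c": {"adjustment", "collection"},
}

open_cases = defaultdict(int)  # representative -> assigned case count

def assign_case(issue: str) -> str:
    """Assign a case folder to the least-loaded certified representative."""
    eligible = [rep for rep, certs in certifications.items() if issue in certs]
    if not eligible:
        raise ValueError(f"no representative certified for {issue!r}")
    chosen = min(eligible, key=lambda rep: open_cases[rep])
    open_cases[chosen] += 1
    return chosen

for issue in ("adjustment", "collection", "adjustment", "collection"):
    print(issue, "->", assign_case(issue))
```

The sketch captures the two constraints the report describes — certification for the issue and workload balancing — while leaving aside the data-reconciliation and electronic-folder mechanics that ICP is also meant to provide.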
Electronically filed returns contain fewer errors than paper returns and thus should generate fewer problems to be resolved by customer service representatives. Similarly, electronic scanning through the Document Processing System is expected to reduce data input errors in processing paper returns. IRS expects these two projects to replace most of the manual processing of tax returns and greatly reduce errors. As of June 30, 1995, results of these projects were somewhat disappointing. The number of electronically filed returns received was running 17.8 percent below that of the same period in 1994 (11.1 million compared to 13.5 million in 1994). Attempts to use scanning equipment to process the simplest tax returns were also not succeeding as planned. The final outcome of these projects will not be known for several years. To the extent that the projects fail to achieve projected results, the follow-up workload generated for customer service centers will be greater than anticipated. This would mean that estimated savings in staffing and costs may not accrue to the degree that IRS had envisioned. In response to our attempt to learn how shortcomings with certain TSM projects could affect customer service, IRS officials told us that a statement of work has been written to contract out for a comprehensive analysis of the costs and savings associated with customer service. IRS has not determined all of the indicators by which success in customer service centers will be measured, although it has committed itself to goals of 95-percent initial contact resolution and near-100 percent accessibility. The customer service centers are to combine work that is currently done in several different organizational components of IRS. IRS has recognized that simply transplanting the workload priorities and performance measures of these components to the new centers would be inappropriate. It intends to develop work plans and performance measures relevant to the new organizations. 
Without appropriate performance indicators in place, IRS has only physical indicators, such as sites established, persons redeployed, and technological programs installed to measure progress toward the customer service vision. While useful, these indicators do not measure the value of the reorganization and may not alert IRS to developing problems. Developing meaningful indicators will not be easy because IRS is attempting to achieve qualitative as well as quantitative change. Managing the workload of customer service centers will also be a significant challenge. We believe that the new centers will likely feel the tension of competing demands to answer inquiries, adjust accounts, follow up on examinations, and collect unpaid taxes. If taxpayers are dissatisfied with accessibility and waiting times, pressure will build to answer more calls at the expense of compliance and collection activities. IRS will no longer have organizational walls to protect these different functions and will have to balance them within a single organizational structure. IRS is counting on technology to cope with such problems. Under ICP, IRS intends to develop an automated Workload Management System (WMS) that will provide the capabilities to create and manage cases, track and control correspondence, and maintain employees’ skills inventories. ICP is in an early stage of development, however, and the requirements for WMS are still being defined. IRS has undertaken an ambitious plan to reorganize and downsize its operations that provide service to taxpayers, a plan it has just begun to implement. IRS officials have recently acknowledged that the transition will last beyond its planned goal of 2001. It is too soon to conclude whether IRS will eventually accomplish its goals, but the gap between current operations and its customer service vision is very great. 
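The case-assignment capability envisioned for WMS, matching each case to a representative certified to handle the issue involved, can be sketched roughly as follows. The skill inventories and issue names are invented for illustration, not actual IRS categories.

```python
# Hypothetical sketch of certification-based case assignment, loosely modeled
# on the workload management capabilities described for WMS. The skills
# inventory and issue names are invented examples.
REP_SKILLS = {
    "rep_a": {"account_adjustment", "refund_inquiry"},
    "rep_b": {"installment_agreement", "underreported_income"},
    "rep_c": {"refund_inquiry", "installment_agreement"},
}

def assign_case(issue, open_cases):
    """Assign a case to the certified representative with the lightest load."""
    certified = [rep for rep, skills in REP_SKILLS.items() if issue in skills]
    if not certified:
        return None  # no certified representative: case waits in a holding queue
    rep = min(certified, key=lambda r: open_cases.get(r, 0))
    open_cases[rep] = open_cases.get(rep, 0) + 1
    return rep
```

Balancing by open-case load is one way such a system could weigh competing demands on representatives' time; a production system would also track correspondence and case folders.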
A lack of clarity in management responsibilities has, to some extent, hampered IRS in the early stages of implementing its customer service vision. This lack of clarity exists both at the senior management level and at the lower management levels for specific TSM projects crucial to the vision. The reasons for this are in part related to IRS’ current organizational structure, but are also due to problems in adjusting to a new management structure based on the core business system approach. At the senior management level, IRS has not designated an owner to be responsible and accountable for all the work units that will be part of customer service and the many projects that must be successfully completed to make the customer service vision a reality. IRS officials plan to address the need for an owner for customer service at the senior management level. At lower management levels, a process owner was assigned late to oversee an interactive telephone system that involved both the Managing Accounts and Ensuring Compliance core business systems. Also, in two other instances owners for such systems had assumed ownership roles, but had not adequately carried out their responsibilities. The problems we identified have not had a serious adverse effect to date because IRS’ implementation of the customer service vision is still in the early stages. However, failure to deal with them as soon as possible could delay the development and implementation of new products and services needed for customer service. Finally, to achieve its customer service goals, IRS will have to overcome a number of challenges involving managing the transition to a new organization while meeting ongoing workload demands, defining customer service representatives’ responsibilities, realizing the benefits of new information technology, and developing effective ways to measure performance in the new centers. 
We recommend that the Commissioner of Internal Revenue clarify the criteria for assigning process owners responsibility for TSM projects when they involve more than one core business system; define process owners’ roles and responsibilities for TSM projects involving more than one core business system; and emphasize to those designated as process owners the need for them to provide the business requirements necessary to develop, test, and implement new customer service products and services. We obtained oral comments on a draft of this report from senior IRS officials in a meeting on August 21, 1995. IRS officials present included the Assistant Commissioner for Taxpayer Service; the Customer Service Site Executive; the Director, Taxpayer Service Design and Review; and the Director, Collection Customer Services. These comments were supplemented by a memorandum clarifying remarks made during our discussion. IRS officials said that the report was an accurate assessment of the progress to date for achieving IRS’ customer service vision. IRS officials agreed with our recommendations, but suggested wording changes in the recommendations to clarify their intent. We have made the wording changes the IRS officials suggested. In their comments on a draft of this report, IRS officials pointed out that a process owner—the Assistant Commissioner for Taxpayer Services—had been assigned ownership responsibility for all of the interactive telephone systems in June 1995. They said that the Assistant Commissioner’s responsibilities included coordinating requirements and overseeing the development of interactive telephone systems that involved more than one core business system. Assigning a senior official ownership responsibility for all of the interactive telephone systems should help to avoid the kinds of problems we identified with those systems. 
The IRS officials said that, to address our concerns on a broader basis, the Modernization Executive—the official who has overall responsibility for IRS’ TSM efforts—has been charged with directing, prioritizing, and coordinating any projects dealing with modernization, including those that support Customer Service. The IRS officials also said core business system owners have responsibility for identifying process owners for processes within their core business system. However, they emphasized that the Modernization Executive is responsible for making sure that any change efforts that involve more than one core business system, including TSM projects, are not adversely affected by the overlap. We believe that assigning the Modernization Executive responsibility for ensuring that TSM projects involving more than one core business system are not adversely affected by the overlap should, if properly carried out, accomplish the intent of our recommendations. We are sending copies of this report to other interested congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. Copies will also be made available to others upon request. Major contributors to this report are listed in appendix II. If you or your staff have any questions concerning the report, please call me on (202) 512-9110. Table I.1: Locations Chosen for Customer Service Centers
GAO reviewed the Internal Revenue Service's (IRS) progress in realizing its plan for improving its customer service, focusing on: (1) IRS customer service goals and its plans to meet these goals; (2) the difficulty IRS has in meeting these goals; (3) current management concerns; and (4) important challenges IRS faces. GAO found that: (1) IRS customer service goals are to provide better service to taxpayers, utilize its resources more efficiently, and improve taxpayers' compliance with tax laws; (2) IRS expects to improve its efficiency by having fewer work locations and automated workload management, giving customer service representatives better computer resources and access to taxpayer accounts, improving taxpayers' accessibility to telephone service, and allowing taxpayers to resolve their inquiries after a single telephone contact; (3) IRS has made progress toward its vision by initiating limited operations in new customer service centers; (4) current IRS management concerns include the lack of ownership for customer service, the absence of owner involvement during project development, and inadequate quality control to measure interactive telephone systems performance; (5) these management concerns have not had serious adverse effects on IRS goals because implementation of the customer service vision is still in the beginning stages; and (6) IRS will have to determine how to manage the transition to a different organization while maintaining ongoing workloads and developing and using new information technology in order to attain its customer service goals.
The Bureau’s mission is to collect and provide comprehensive data about the nation’s people and economy. Its core activities include conducting decennial, economic, and government censuses; conducting demographic and economic surveys; managing international demographic and socioeconomic databases; providing technical advisory services to foreign governments; and performing other activities such as producing official population estimates and projections. The Bureau is part of the Department of Commerce and is in the department’s Economics and Statistics Administration, led by the Under Secretary for Economic Affairs. The Bureau is headed by a Director and is organized into directorates corresponding to key programmatic and administrative functions, as depicted in figure 1. Two of these directorates are responsible for the projects that support the Internet response option for the 2020 census: (1) Associate Director for 2020 Census and (2) Associate Director for Information Technology and Chief Information Officer. Bureau officials have established a goal of conducting a high-quality 2020 census at a lower cost than the 2010 census. In order to achieve cost savings and quality targets for the 2020 Decennial Census, the Bureau must make fundamental changes to the design, implementation, and management of the decennial census. Accordingly, the Bureau identified four key design areas that are intended to enable the Bureau to achieve its goal: (1) reengineering address canvassing to eliminate a nationwide field address canvassing effort in 2019, (2) utilizing existing administrative records to reduce non-response follow-up workload, (3) reengineering field operations to more efficiently and effectively manage the 2020 census fieldwork, and (4) optimizing self-response to generate the largest possible self-response rate, thus eliminating the need to follow up with those households. 
The optimizing self-response design area includes examining contact strategies and self-response modes to identify which are best for each demographic, geographic, and language-based group. This includes giving households the option of responding to the census through an Internet-based survey—referred to as the Internet response option. Other self-response modes include mailing paper questionnaires to households and asking them to complete and mail back the forms, as well as providing a phone number for households to call and provide their responses via telephone. In fiscal year 2012, the Bureau began research and testing of alternatives in the design areas. Key research and testing is expected to continue until the end of fiscal year 2015 (the end of September 2015), which is when the Bureau plans to decide how it will design the operations for the 2020 Decennial Census—referred to as the preliminary design decision—and will produce an updated total life-cycle cost estimate. The Bureau completed a major field test in 2014, and as of October 2014, was in the process of finalizing the design of an Optimizing Self-Response test that is planned for the spring of 2015, and was beginning to design a National Content Test, which is planned for the fall of 2015. These tests are intended, in part, to test components of the Internet response option. Following the preliminary design decision, the Bureau plans to conduct additional research and testing and further refine its design decisions through 2018. By September 2018, the Bureau plans to have fully implemented the 2020 design so that it can begin operational readiness testing. Figure 2 provides the timeline for planned 2020 Decennial Census research and testing. The Bureau is planning two additional field tests in 2015 that are not related to the Internet response option—Address Validation Test and 2015 Census Test. 
The 2020 census is intended to be the first time that the Bureau implements an Internet response option on a wide scale for the decennial census. During the 2000 Decennial Census, the Bureau tested the use of an Internet response option. The Internet option had few respondents (approximately 63,000 households representing about 169,000 persons), in part because the Bureau did not advertise this response option. However, the test demonstrated that the Internet response option worked operationally. The Bureau considered building on the 2000 experience for the 2010 Decennial Census by including the Internet response option in the scope of the Decennial Response Integration System contract that was awarded in October 2005. This contract included requirements to provide functionality for the public to respond to the 2010 census via paper, telephone, and the Internet. In July 2006, the Bureau decided not to include the Internet response option in the design for the 2010 census and eliminated it from the scope of the contract. This was because testing indicated that the Internet response option did not increase the overall response rate enough to justify the costs of building and securing this option, which the Bureau had underestimated. The Bureau also had continued concerns about the ability to sufficiently secure respondents’ data. Additionally, in January 2013, the Bureau implemented an Internet response option for its American Community Survey (ACS), which is another household survey conducted by the 2020 Census Directorate on a smaller scale than the decennial census. The ACS continuously collects data on social, demographic, economic, and housing characteristics that help determine how federal funds are allocated to states and localities and provide information to communities to aid in planning investments and services. 
The Bureau collects ACS data on a monthly sample of households and aggregates the results into 1-, 3-, and 5-year estimates, depending on the population size of the area. Bureau officials stated that they intend to build on the IT infrastructure and lessons learned from the ACS in order to implement the Internet response option for the 2020 census. The Bureau determined that the Internet response option offers several benefits for the 2020 census, including the added convenience for households in an increasingly Internet-enabled population to respond to the survey; better quality data, which could reduce the amount of follow-up that is needed for surveys with incomplete or inconsistent data; and less printing, postage, and processing of paper questionnaires. Bureau officials have also stated that the Internet response option would provide opportunities to administer the survey in multiple languages more easily than with paper questionnaires. The Bureau’s efforts to deliver an Internet response option for the 2020 census include several key components: Internet response application: Design and develop an online survey instrument that allows respondents to enter and submit their information to the Bureau. To enhance Internet response participation, the Bureau is researching an option for respondents to submit their information without having the traditional Bureau-issued ID number (referred to as “non-ID processing”). To achieve this, the Bureau will need to develop the capability to validate respondent-provided addresses either automatically against its master address file in real-time or in batches offline. IT infrastructure: Develop and acquire the IT infrastructure (e.g., servers, hardware, and network capacity) needed to support the data processing, storage, and transactions from Internet responses. 
The Bureau has stated that it plans to use cloud computing solutions, which is a means for establishing on-demand access to shared and scalable pools of computing resources, to help support the large volume of data processing, storage, and transactions needed for Internet responses. Communication and outreach: Planning for how the Bureau can make use of partnership efforts, advertising, and outreach methods like social media to maximize the use of an Internet response option and motivate households to self-respond via Internet. Additionally, the Bureau is exploring other potential design features related to Internet response during the research and testing phase, which may or may not be included in the final design of the 2020 census. For example, the Bureau is testing whether there is value in asking households to pre-register via a separate online portal prior to the census with a preference on how they would like to receive reminders (e.g., e-mail or text message) from the Bureau on completing the survey. The Bureau is also testing whether there is value in using e-mail addresses purchased from commercial sources to contact respondents and ask them to complete the survey. According to the Bureau, it will also need to conduct research to help address privacy and information security concerns the public may have regarding its use of the Internet to contact respondents and collect their personal information, as well as to determine which non-English languages will be offered with the Internet response option. The Bureau currently has 16 projects, planned or under way, that are related to the 2020 census Internet response option. These projects are being managed by two different directorates—the 2020 Census Directorate is responsible for many of the research and testing projects, while the IT Directorate is responsible for developing and acquiring the IT systems and infrastructure needed to support the 2020 census. 
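The non-ID address validation described above, checking a respondent-provided address against the master address file either one at a time or in offline batches, might look roughly like the sketch below. The normalization scheme and sample addresses are illustrative assumptions only; real master address file matching is far more involved.

```python
# Hypothetical sketch of validating respondent-provided addresses against a
# master address file (MAF), either one at a time ("real time") or in batch.
# The normalization and sample addresses are illustrative assumptions only.
def normalize(address):
    """Crude canonical form: uppercase, drop punctuation, collapse whitespace."""
    cleaned = "".join(c for c in address.upper() if c.isalnum() or c.isspace())
    return " ".join(cleaned.split())

MASTER_ADDRESS_FILE = {
    normalize("123 Main St, Anytown, MD 20884"),
    normalize("456 Oak Ave, Anytown, MD 20884"),
}

def validate_realtime(address):
    """Check a single respondent-provided address as it is submitted."""
    return normalize(address) in MASTER_ADDRESS_FILE

def validate_batch(addresses):
    """Validate many collected addresses offline in one pass."""
    return {a: validate_realtime(a) for a in addresses}
```

The real-time path supports immediate feedback to a respondent, while the batch path defers matching until after collection; both reduce to the same lookup once addresses are normalized.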
The IT Directorate’s projects fall under an enterprise-wide program initiated in fiscal year 2015 called the Census Enterprise Data Collection and Processing (CEDCAP) program. CEDCAP is intended to integrate disparate, program-specific survey data collection and processing systems that the Bureau uses to conduct its many surveys. Table 1 summarizes the Bureau’s projects related to the Internet response option. While the Bureau plans to offer multiple self-response modes in the 2020 census, including the paper and telephone options that have been offered in prior censuses, the effect of an Internet response option on population groups that are already typically undercounted or missed during the census is unclear. The Bureau has identified segments of the population that are more difficult to enumerate based on prior censuses, such as minorities, renters, children, low-income households, and low-education households. To help identify hard-to-count populations, the Bureau segmented the population into eight unique groups based on census demographic, socioeconomic, housing, and mail response data. Each group contained housing units with similar characteristics such as housing vacancy, tenure, marital status, education, poverty, and unemployment level. The groups that targeted the hard-to-count population included single unattached mobile renter, economically disadvantaged homeowner, economically disadvantaged renter, ethnic enclave homeowner, and ethnic enclave renter. To help reduce the undercount for the 2010 census, the Bureau embarked on a number of outreach and enumeration activities aimed at getting the hard-to-count populations to participate in the census. 
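The segmentation idea above, grouping housing units by similarity of characteristics, can be illustrated with a nearest-centroid sketch. The two features and the centroid values below are invented for illustration; they are not the Bureau's actual segmentation model, which used many more variables.

```python
# Hypothetical nearest-centroid sketch of assigning a housing-unit cluster to
# a hard-to-count segment. The features (renter share, poverty rate) and the
# centroid values are invented, not the Bureau's model.
import math

SEGMENT_CENTROIDS = {
    "economically_disadvantaged_renter": (0.85, 0.40),
    "single_unattached_mobile": (0.70, 0.20),
    "ethnic_enclave_homeowner": (0.25, 0.15),
}

def assign_segment(renter_share, poverty_rate):
    """Assign to the segment whose centroid is nearest in feature space."""
    def distance(centroid):
        cr, cp = centroid
        return math.hypot(renter_share - cr, poverty_rate - cp)
    return min(SEGMENT_CENTROIDS, key=lambda s: distance(SEGMENT_CENTROIDS[s]))
```

In practice such centroids would come from a clustering of census and mail-response data rather than being fixed by hand.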
For example, the Bureau used actual participation data from the 2000 census as well as market and attitudinal research that identified different mindsets people have about the census, such as those who are less likely to participate because they doubt the census provides tangible benefits or are concerned that the census is an invasion of privacy and that the information collected will be misused. The Bureau used this information to tailor its paid media efforts, such as buying additional media in areas with low participation rates. As we have previously reported, with a population that is growing larger, more diverse, and more reluctant to participate, a complete and accurate census has become an increasingly daunting task. While the Bureau invested more resources in reaching out to and enumerating the hard-to-count population groups in the 2010 census, it achieved the same overall participation rate as in the 2000 census. This trend is likely to continue as the nation’s population gets larger, more diverse, and more difficult to count. According to data from the Pew Research Center, Internet use has increased significantly since 2000, from 50 percent to 87 percent of adults in the United States (see fig. 3). Also according to data from the Pew Research Center, as of 2014, Internet use varied among different demographic groups, such as race/ethnicity, age group, education level, and household income (see table 2). Another population group to consider when introducing the Internet response option is adults age 65 or older. Although they have historically been a high self-response group using the paper-based method, Internet use among this group was only 57 percent, compared to 97 percent among adults under age 30. 
In addition, ownership of mobile computing devices such as smartphones and tablets has increased significantly in the past several years, as has the use of cell phones to access the Internet, send and receive e-mails, download software applications, and send and receive text messages (see figs. 4 and 5). Our prior work has identified the importance of having sound management processes in place to help the Bureau as it manages the multimillion dollar investments needed for its decennial census. For the last decennial, we issued multiple reports and testimonies from 2005 through 2010 on weaknesses in the Bureau’s acquisition, management, and testing of key 2010 census IT systems. Since the 2010 census, we have issued additional reports and testimonies on weaknesses in the Bureau’s efforts to institutionalize IT and program management controls for the 2020 census. Relevant reports include the following: In June 2008, we reported that the 2010 census life-cycle cost estimate was not reliable and the Bureau had insufficient policies and procedures and inadequately trained staff for high-quality cost estimation. We also stated that because the life-cycle cost estimate was not reliable, annual budget requests based on that estimate were not fully informed. We recommended that the Bureau, among other things, thoroughly document and update the estimate and, for future estimates, establish policies and procedures for cost estimation. The Bureau has partially implemented these recommendations. In January 2012, we reported that the Bureau was taking steps to strengthen its life-cycle cost estimates but had not yet established guidance for developing the 2020 life-cycle cost estimate. We also reported that the Bureau had not identified decision points at which executives would review progress and decide whether the Bureau is prepared to move from one project phase to another. 
We recommended that the Bureau, among other things, identify decision points and finalize guidance for the 2020 life-cycle cost estimate. The Bureau has not yet implemented these recommendations. In May 2012, we reported that the Bureau was taking steps consistent with leading practices for long-term project planning for the 2020 census, such as creating a high-level schedule of program management activities. However, the Bureau’s schedule did not include milestones or deadlines for key decisions needed to support transition between planning phases, which could result in later planning activity not being based on evidence from early research and testing. We also reported that the Bureau was taking steps in strategic workforce planning, but it had not yet identified the goals that should guide workforce planning or how to monitor, report, and evaluate its progress toward achieving them. Accordingly, we made several recommendations aimed at addressing these issues. The Bureau has taken steps to implement these recommendations, but has not fully implemented them. In September 2012, we reported that the Bureau had drafted a new investment management plan, system development methodology, and requirements development and management processes to improve its ability to manage IT investments and systems development. However, additional work was needed to ensure that these processes were effective and successfully implemented across the Bureau, such as finalizing plans for implementing its new investment management and systems development processes across the Bureau. We also reported that the Bureau had not fully put in place key practices for effective IT workforce planning, including conducting an IT skills assessment and gap analysis and establishing a process for directorates to coordinate on IT workforce planning. To address these weaknesses, we made a number of recommendations to the Bureau. 
The Bureau has taken steps to address the recommendations, such as finalizing its investment management process, conducting an enterprise-wide IT competency assessment and gap analysis, and developing action plans to address the identified gaps. In November 2012, we evaluated the Bureau’s efforts to improve the cost-effectiveness of enumeration in the 2020 census, paying particular attention to three key efforts, one of which included leveraging the Internet to increase self-response. We reported weaknesses in developing mitigation or contingency plans for several project risks, including those related to tight time frames and accurate cost information; weaknesses in developing cost estimates for research and testing projects; incomplete project plans; and incomplete performance metric documentation. We made a number of recommendations to address these weaknesses, and the Bureau has partially implemented them. In January 2013, we reported on the Bureau’s implementation of information security controls to protect the confidentiality, integrity, and availability of the information and systems that support its mission. We concluded that the Bureau had a number of weaknesses in controls intended to limit access to its systems and information, as well as those related to managing system configurations and unplanned events. We attributed these weaknesses to the fact that the Bureau had not fully implemented a comprehensive information security program, and made 13 public recommendations and over 100 other recommendations that were for limited distribution to address these deficiencies. The Bureau has partially implemented the recommendations. In September 2013, we testified on progress the Bureau had made in its efforts to contain enumeration costs, including its efforts to strengthen IT management and security practices. 
We noted that the Bureau was exploring technology options for 2020 census operations that collectively represent a dramatic leap from 2010, including the “bring your own device” model for field data collection and the Internet response option. We stressed the importance of the Bureau strengthening its ability to manage its IT investments as well as its practices for securing the information it collects and disseminates. In November 2013, we reported that the Bureau was not producing reliable schedules for two efforts related to the 2020 census: (1) building a master address file and (2) 2020 census research and testing. We reported, for example, that the Bureau did not include all activities and required resources in its schedules, or logically link a number of the activities in a sequence. We recommended that the Bureau take actions to improve the reliability of its schedules, including ensuring that all relevant activities are included in the schedules, complete scheduling logic is in place, and a quantitative risk assessment is conducted. We also recommended that the Bureau undertake a robust workforce planning effort to identify and address gaps in scheduling skills for staff that work on schedules. The Bureau has taken steps to implement these recommendations, but has not fully implemented them. In April 2014, we reported on the Bureau’s IT-related efforts for the 2020 census. We found that several of the IT-related projects lacked schedules and plans, and that it was uncertain whether the work would be completed in time to inform the operational design decision for the 2020 census, planned in September 2015. We also reported that the Bureau had not prioritized its projects to determine which were the most important to complete before the decision. 
We recommended that the Bureau prioritize the research and testing that it needed to complete in order to support the operational design decision, and ensure that project plans and schedules were developed consistent with the new prioritized approach. The Bureau has made significant progress toward addressing the recommendations. For example, it developed a document to guide the Bureau’s path to making the preliminary design decision in September 2015, and Bureau officials identified projects that could be deferred to after the preliminary design decision. The Bureau has taken preliminary steps to identify demographic groups likely to use the Internet response option in the 2020 census and how they compare to historically hard-to-count populations by examining existing ACS studies in this area and applying the lessons learned to the decennial census. For example, the Bureau issued a report in May 2014 on the effects of adding an Internet response option to the ACS among different segments of the population. The study found that the total self-response rate was statistically significantly higher after introducing the Internet response option and the Internet response rate was about 55 percent. The study also found that, while none of the groups had dramatically low rates of Internet participation, the addition of an Internet response option had a positive effect on certain groups (e.g., advantaged homeowner, single unattached mobile) and a negative effect on others (e.g., ethnic enclave homeowner, economically disadvantaged homeowner). The study suggested that, by pushing households to respond by Internet (i.e., mailing an Internet response invitation and only later mailing a paper questionnaire if the household did not first respond by Internet), the Bureau may be discouraging some households from self-responding at all and that this may be happening in certain hard-to-count groups. 
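A finding that a self-response rate is "statistically significantly higher" typically rests on a standard comparison of two proportions. The sketch below illustrates the calculation with invented household counts, not the actual ACS figures.

```python
# Sketch of a two-proportion z-test, the kind of comparison behind a
# "statistically significantly higher" self-response finding. The household
# counts below are invented for illustration, not actual ACS results.
import math

def two_proportion_z(successes1, n1, successes2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 12,200 of 20,000 households respond when an Internet option
# is offered versus 11,600 of 20,000 without one (61 vs. 58 percent).
z = two_proportion_z(12200, 20000, 11600, 20000)
significant = abs(z) > 1.96  # two-sided test at the 5 percent level
```

With samples this large, even a 3-percentage-point difference yields a z statistic well beyond the 1.96 cutoff, which is why modest response-rate shifts in large surveys can still be statistically significant.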
The 2020 Census Directorate officials told us that they are incorporating the results of the ACS studies into future decennial census field tests. For example, officials said that they plan to include both the Internet response and paper questionnaire option in the first mailing for hard-to-count population groups in the 2015 National Content Test. In addition to using the ACS studies, the Bureau plans to conduct future tests and research specific to the 2020 Decennial Census, which is expected to produce more information in this area. For example, the Bureau is planning two tests in 2015 that are expected to provide data on Internet response rates among different demographic groups, including historically hard-to-count populations. Specifically, the April 2015 Optimizing Self-Response Test site location was selected based on, among other things, lower than average 2010 Decennial Census response rates and ACS Internet response rates, Internet penetration at least as high as the national average, and the ability to segment by hard-to-count populations. Additionally, 2020 Census Directorate officials stated that the Bureau plans to begin coverage improvement research with the September 2015 National Content Test, which is expected to provide a nationally representative demographic sampling and national Internet self-response rates. Bureau officials stated that additional research in this area in 2016 and beyond is yet to be determined. GAO’s Cost Estimating and Assessment Guide identifies a number of best practices that are the basis of effective program cost estimating and should result in reliable and valid cost estimates that management can use for making informed decisions.
Specifically, a reliable cost estimate should be comprehensive (costs are neither omitted nor double counted); well-documented (the estimate is thoroughly documented, including source data and significance, clearly detailed calculations and results, and explanations for choosing a particular method or reference); accurate (the estimate is unbiased, not overly conservative or overly optimistic, and based on an assessment of most likely costs); and credible (the estimate discusses any limitations of the analysis from uncertainty or biases surrounding data or assumptions). A rough-order-of-magnitude estimate is a less-rigorous cost estimate developed from limited data and in a short time. The Bureau developed a preliminary rough-order-of-magnitude life-cycle cost estimate for the 2020 census in 2011 and revised selected components of the estimate in 2014, with a resulting total of approximately $12.7 billion. This included, among other things, about $73 million for the Internet response option. However, the estimate for the Internet response option did not meet the characteristics of a reliable estimate. Specifically: The Internet response option cost estimate was not comprehensive. This is because the Internet response option cost estimate included costs from 2010 to 2020 and provided a subset of assumptions for researching, testing, and deploying an Internet response option. While the estimate was structured around these high-level cost elements, these elements were not defined, and therefore it is not clear whether all costs associated with the Internet response option were included. Bureau officials stated that the estimate was not developed based on a work breakdown structure with defined elements because the 2020 census program was not mature enough to have such a structure at the time the initial estimate was developed. They stated that the estimate will be updated to reflect the program’s work breakdown structure once the preliminary design decision is made in September 2015.
However, a work breakdown structure should have been initially set up when the program was established and successively updated with more detail over time as more information became known about the program. The Internet response estimate was not well-documented. We found that the documentation included selected source data, assumptions, and calculations, but it did not provide a clear and traceable view of the estimated costs and all the assumptions that were included in the Internet response option cost estimate. Additionally, the Internet response option cost estimate was developed in a separate file and rolled up into a single line item labeled “Internet” within the 2020 census life-cycle cost estimate, without including source information to show where the estimate originally came from or details of what the estimate was based on. This was problematic because Bureau officials were initially unable to locate the relevant files and were unable to provide us with an explanation for how the Internet response option estimates were developed. After several weeks, officials finally located the separate file that contained the high-level Internet response cost elements. The estimate was not accurate. Specifically, while accurate estimates are to be based on an assessment of most likely costs and historical data for similar programs, the Internet response option cost estimate was based on subject matter expert opinion and analogous data from the ACS, which is similar in function to the decennial census but not in scale. The scale of the decennial census is significantly larger than ACS, and will require IT systems and infrastructure to be sized accordingly. However, it is not clear in the documentation how the Bureau applied the ACS cost data in estimating 2020 census costs, nor could Bureau officials explain this. 
For example, 2020 census IT infrastructure costs were estimated to be higher than ACS IT infrastructure costs, but Bureau officials could not explain what assumptions and methodology were used to account for the much larger scale of IT infrastructure needed for the 2020 census compared to the ACS. Without insight into how historical data were used to estimate future costs, we cannot determine whether the cost models are accurate. Additionally, in 2014, the Bureau revised selected components of the 2020 census life-cycle cost estimate to inform the fiscal year 2015 budget request, but did not revise the Internet response option cost estimate as part of this update even though there had been significant changes to the program since 2011. For example, projects related to the Internet response option have evolved, and thus not all projects are represented in the 2011 $73 million estimate, such as non-ID processing. Also, the Bureau determined in 2013 that its existing IT infrastructure was not sufficient to support the scale-up needed for Internet response data processing and storage in the 2020 census, and is planning to address this as part of the enterprisewide IT data collection and processing program, known as CEDCAP. According to Bureau officials, they are in the process of developing estimated costs for relevant CEDCAP projects, and these will be incorporated into the Internet response option cost estimate. Additionally, the Internet response option cost estimate was not updated with more current data from the analogous ACS program, as a result of its implementation of an Internet response option in January 2013. The estimate was not credible. Sensitivity and risk and uncertainty analyses were conducted on the 2020 census life-cycle cost estimate, but they were not completed properly. For example, a risk and uncertainty analysis was only applied to fiscal years 2018 to 2020 rather than on the total cost estimate. 
According to Bureau officials, the Bureau focused on fiscal years 2018 to 2020 because 80 percent of all costs for the decennial census are expected to be incurred during this period. However, accounting for all risks throughout the life of the program is necessary to adequately capture the uncertainty associated with a program’s estimate. The Bureau also did not perform risk and uncertainty analyses on the Internet response option cost estimate to reflect the uncertainty introduced by the significant program changes discussed previously, such as the uncertainty associated with the initial estimate for IT infrastructure costs ($11.4 million). The 2020 Census Directorate officials have recognized that the estimate for the Internet response option had weaknesses and stated that the preliminary 2020 census rough-order-of-magnitude life-cycle cost estimate was not developed to be an official cost estimate, but rather a “top-down” approach to estimating costs and potential savings focused around the major design categories. Nevertheless, the Bureau considered the estimate to be a budget-quality estimate and used it to inform the fiscal year 2015 budget request. Bureau officials stated that the preliminary 2020 census life-cycle cost estimate (including the Internet response option estimate) will be updated once the preliminary design decision is made in September 2015. Additionally, Bureau officials identified several ongoing efforts that the Bureau has planned to improve its institutional cost estimating practices. 
Specifically, Bureau officials stated that the Bureau established a new centralized cost estimating office in August 2013 that is expected to, among other things, issue guidance and policies for developing reliable cost estimates and establish a standardized work breakdown structure for censuses and surveys by the third quarter of fiscal year 2015, create a cost estimating certificate program by the fourth quarter of fiscal year 2015, and conduct independent cost estimates, as needed. Bureau officials also stated that they have certified four staff in cost estimation techniques, moved the 2020 census estimate to a more robust cost estimating tool, and planned actions to address competency gaps in cost estimating. While these are important steps to institutionalizing good cost estimating practices, until the Bureau updates the Internet response option cost estimate to ensure that it conforms with best practices, the estimate will continue to be unreliable. In addition to the $12.7 billion 2020 census rough-order-of-magnitude life- cycle cost estimate, the Bureau estimated potential cost savings for the major design options, such as optimizing self-response, which includes the Internet response option. The Bureau estimated that implementing all design options could result in potential savings of up to $5 billion. Potential cost savings were estimated by adjusting inputs for different 2020 census design scenarios and calculating the difference between those estimated 2020 census costs and the cost of repeating the 2010 census design. For example, to estimate potential cost savings for the optimizing self-response design option, the Bureau reduced the amount of printing, postage, and infrastructure needed for processing paper questionnaires based on the percent of responses expected via Internet. The Bureau estimated that the optimizing self-response option could result in potential savings of about $550 million to $1 billion. 
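The savings methodology described above — adjusting cost inputs for a design scenario and subtracting the result from the cost of repeating the 2010 design — can be sketched in a few lines. All figures, cost elements, and the proportional-reduction assumption below are hypothetical illustrations, not Bureau data or the Bureau's actual model:

```python
# Illustrative sketch of the savings methodology described above: adjust
# cost inputs for a design scenario, then subtract from the cost of
# repeating the 2010 design. All figures are hypothetical, not Bureau data.

# Hypothetical paper-dependent cost elements (in $ millions) for
# repeating the 2010 census design.
BASELINE_2010_DESIGN = {
    "printing": 100.0,
    "postage": 300.0,
    "paper_processing_infrastructure": 250.0,
}

def scenario_cost(baseline, internet_share):
    """Reduce each paper-dependent element in proportion to the share of
    responses expected via the Internet (a simplifying assumption)."""
    return {item: cost * (1.0 - internet_share) for item, cost in baseline.items()}

def estimated_savings(baseline, internet_share):
    """Savings = baseline (repeat-2010) cost minus adjusted scenario cost."""
    return sum(baseline.values()) - sum(scenario_cost(baseline, internet_share).values())

# With a hypothetical 40 percent Internet response share:
print(round(estimated_savings(BASELINE_2010_DESIGN, 0.40), 1))  # 260.0
```

The sketch also makes plain why the savings figure inherits any weakness in the underlying cost inputs: an unreliable baseline or scenario estimate flows directly into the computed difference.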
However, the unreliability of estimated costs for the Internet response option discussed above casts doubt on the reliability of estimated cost savings associated with the optimizing self-response design option. While efforts are under way to deliver an Internet response option for the 2020 census, significant challenges remain. We identified four major challenges the Bureau faces in implementing an Internet response option for the 2020 census:
- project schedules associated with the Internet response option are not fully integrated;
- key questions may not be answered in time for the preliminary design decision;
- high-level time frames for cloud computing decisions and implementation for the 2020 census have not been determined; and
- gaps in IT skill sets continue to exist.
According to 2020 Census Directorate and IT Directorate officials, the Bureau has recently begun developing methodologies for answering key research questions, using a contractor to assist in assessing cloud computing technologies for the 2020 census, and implementing actions to close critical competency gaps. However, without established methodologies and time frames for completing these efforts, the Bureau will be limited in its ability to make informed design decisions, which could affect implementation of the 2020 census. According to best practices identified in GAO’s Schedule Assessment Guide, a detailed schedule should be horizontally traceable, meaning that it should link products and outcomes associated with other sequenced activities. These links are commonly referred to as “hand-offs” and serve to verify that activities are arranged in the right order for achieving aggregated products or outcomes. Such mapping or alignment of levels enables different groups to work to the same master schedule. As previously mentioned, the Bureau has several different projects related to the Internet response option, planned or currently under way (see table 1 for the complete list of projects).
Certain projects, such as IT Infrastructure 2020 Decennial Scale-Up and E-correspondence, were initiated in October 2014 and thus did not yet have detailed schedules. For selected ongoing projects, such as the Optimizing Self-Response, Non-ID Processing, and 2014 Census Test projects, the Bureau had integrated the schedules into the 2020 census program’s integrated master schedule. However, for other projects, which have been under way for 5 months, the Bureau has not yet integrated their schedules with the 2020 census program’s integrated master schedule, so the dependencies among the different projects (e.g., hand-offs between teams) are not linked together. For example:
- The Bureau established a schedule for developing the Centurion Internet response option application for the 2015 tests, which included activities starting as early as July 2014. However, as of November 2014, this schedule was not yet linked to the overarching 2020 census program schedule. According to Bureau officials, the Centurion schedule was recently developed and is eventually to be integrated with the 2020 census program schedule. However, Bureau officials did not know when this would be complete—despite ongoing activities with the Centurion project and the 2020 census program.
- Activities for the 2015 Optimizing Self-Response Test and 2015 National Content Test were not included in the 2020 census program schedule. According to 2020 Census Directorate officials, detailed schedules for the 2015 tests were still under development and will be incorporated into the 2020 census program schedule once they are complete. However, as of November 2014, Bureau officials did not have a time frame for when this would be done, and preparations for the 2015 Optimizing Self-Response Test are well under way, with the test planned to begin in less than 5 months.
As stated earlier, we have previously recommended that the Bureau take actions to improve the reliability of its schedules, including steps to ensure that all relevant activities and associated resources are included in the schedules and complete scheduling logic is in place. Bureau officials have recognized the need to fully integrate project schedules with the 2020 census program’s integrated master schedule, but stated that they are currently working to identify required resources for activities in the 2020 census program integrated master schedule. They stated that, in the meantime, the project teams are conducting work according to their project schedules, which are maintained separately. However, until the Bureau fully implements our recommendation to integrate dependent activities in the program schedule, the Bureau is unable to generate an accurate “critical path”—the sequence of steps needed to achieve the end goal that, if they slip, could negatively affect the overall project completion date. This limits the Bureau’s ability to use the 2020 census program schedule to estimate reliable dates, determine the impact of any schedule slippages on major milestones, and identify opportunities for increased efficiency. In September 2014, the Bureau released a planning document titled The Path to the 2020 Census Design Decision, which identified inputs, such as research questions, design components, and testing, needed to inform the preliminary design decision planned for September 2015. This included the following key research questions related to the Internet response option that were determined to be critical inputs into the preliminary design decision:
- What are the best methods for communicating the importance of responding to the 2020 census, including methods for promoting the use of Internet response?
- What percentage of the population has access to the Internet?
- What is the estimated Internet self-response rate?
- What IT infrastructure for security and scalability is necessary to support the Internet as the primary mechanism for self-response?
- Is there value in asking households to pre-register online for the 2020 census?
- Is it necessary to provide households with an identification code to respond via the Internet?
While the Bureau has documented in relevant project plans its methods for answering the questions related to communication strategies, online pre-registration, and processing household responses without identification codes, it has not yet established how it will determine answers to the questions related to self-response rate and IT infrastructure for the preliminary design decision. As stated in 2020 census program management guidance, detailed project plans and specific research questions, as well as study plans describing the management and technical approach, should be established for research and testing projects. In November 2014, Bureau officials told us that they had begun to establish a new project team intended to be responsible for estimating the Internet self-response rate. However, Bureau officials did not have a time frame for when a project plan or study plan would be developed that would document the methodology for how the Internet self-response rate will be estimated to inform the preliminary design decision. Additionally, Bureau officials told us that they had established a new project—the 2020 Census Architecture and IT Roadmap—in June 2014 that is intended to help determine the IT infrastructure needed to support the Internet response option. This project is to deliver initial 2020 census architecture and IT roadmap documents by the preliminary design decision. However, the Bureau had not yet established a project plan or study plan, and had not developed the methodology—as part of this project or another—for determining scalability and security infrastructure needs for the Internet response by September 2015.
As previously stated, the Bureau is committed to producing the preliminary design decision and developing the life-cycle cost estimate for the 2020 census by September 2015. With about 8 months remaining until the preliminary design decision is to be made, and major tests already designed or completed (i.e., the 2014 Census Test and the April 2015 Optimizing Self-Response Test), the Bureau has limited time to determine how these critical questions will be answered. Accordingly, until the Bureau establishes and implements clear plans for answering the Internet response rate and IT infrastructure questions, the Bureau will have limited information for beginning its development and implementation of systems and infrastructure. Also, as previously discussed, it is uncertain how complete or reliable the Bureau’s Internet response option and IT infrastructure cost estimates will be to inform key design decisions by this time. The OMB Federal Cloud Computing Strategy recognizes the importance of organizations planning effectively when selecting services to move to a cloud environment. The strategy recommends that organizations create roadmaps for cloud deployment and migration in order to prioritize services that minimize risks to the organization. NIST also recognizes that moving to a cloud environment is a business decision, where the organization’s business case should consider relevant factors such as transition and life-cycle costs and security and privacy requirements. As previously mentioned, the Bureau’s existing IT infrastructure is not sufficient to process and store the large volume of anticipated electronic Internet survey responses for the 2020 census. Accordingly, Bureau officials stated that the Bureau plans to use a cloud environment to provide the needed capability. The Bureau has taken steps to research the use of the cloud to deliver the level of scalability needed to support the Internet response option for the 2020 census.
Specifically, the Bureau has identified and prioritized 2020 census capabilities to potentially move into the cloud environment, such as Internet data collection. Additionally, the Bureau plans to conduct volume testing of the Centurion application in a cloud environment beginning in November 2014, in order to collect additional data to determine if it can meet the expected capacity in 2020. IT Directorate officials also said that they are working closely with federal agencies such as NIST and the Federal Aviation Administration to determine requirements for moving services to a cloud environment. Although the Bureau expects cloud decisions to lay the groundwork for accommodating the volume of users expected for the 2017 Economic Census and 2020 Decennial Census, the Bureau has not established time frames to determine when key cloud computing decisions need to be made and actions need to be taken for the 2020 census, such as selecting, testing, and deploying a cloud environment that meets its needs for scalability, budget, security, and privacy protection of personally identifiable information. IT Directorate officials stated that by the third quarter of fiscal year 2015 the Bureau will develop a schedule for the IT Infrastructure 2020 Decennial Scale-Up project, which will include high-level time frames for a subset of relevant activities, such as conducting an analysis of alternatives to determine whether a cloud solution is the best alternative for addressing the scale-up. Additionally, IT Directorate officials told us that the Bureau plans to develop a strategy in 2016 to outline the overall approach for acquiring cloud solutions for the 2020 census, among other things. The Bureau is planning to begin systems readiness testing in October 2018, which is when the Bureau has determined that the systems and processes for the 2020 census must be developed and ready for end-to-end system testing (approximately 3.7 years away).
As we have previously reported, the federal government procurement process can require a significant amount of time, and agencies can face challenges in implementing cloud computing, such as meeting federal security requirements, obtaining guidance, acquiring knowledge and expertise to implement cloud services, certifying and accrediting vendors, and ensuring data portability and interoperability. Without established high-level time frames, Bureau officials will not know whether there is enough time to effectively implement a cloud environment. The Bureau has not yet established such time frames for the 2020 cloud implementation approach due to a lack of internal cloud computing expertise. In an effort to offset this lack of internal expertise, the Bureau issued a task order in September 2014 to a contractor for assistance in assessing, analyzing, and recommending cloud computing technologies for the 2020 census. While this assistance may be helpful, until the Bureau, at a minimum, documents time frames for selecting, testing, and deploying its cloud environment, it will not know whether there is enough time to successfully implement a cloud solution that meets the 2020 census needs for scalability, budget, security, and privacy protection of personally identifiable information. As our prior work and leading guidance recognize, having the right knowledge and skills is critical to the success of a program. In response to prior GAO recommendations on developing strategic workforce planning capabilities, the Bureau completed an enterprise-wide competency assessment in 2013 and identified several mission-critical gaps in technical competencies within the IT and 2020 census workforce that would be needed to support the Internet response option. In August 2014, the Bureau completed action plans and targets aimed at addressing the IT and 2020 census workforce competency gaps.
The gaps related to the Internet response option and planned actions to address them are summarized in table 3. Moving forward, the Bureau plans to monitor quarterly status reports on implementing these actions and closing competency gaps, beginning around December 2014. Fully implementing actions to close these competency gaps will be critical to ensuring the Bureau has the skills it needs to effectively develop and implement the Internet response option. The introduction of an Internet response option for the 2020 census has the potential to offer numerous benefits, such as added convenience for households in an increasingly Internet-enabled population, better-quality data that can reduce follow-up work, and reduced costs associated with processing paper questionnaires. Identifying ways to increase participation via the Internet will help increase the benefits of this response mechanism. While the Bureau’s initial estimate of approximately $73 million for the Internet response option was included in the fiscal year 2015 budget request, this estimate lacks reliability, which, in turn, calls into question the reliability of the potential cost savings estimate of about $550 million to $1 billion for the optimizing self-response design category (which includes the Internet response option). Although the Bureau has several important cost estimating improvements under way, until the Bureau updates the Internet response option cost estimate to ensure that it meets best practices, the estimate will continue to be unreliable. Additionally, the Bureau’s ability to effectively manage the scheduling, task, and capability challenges it faces in planning for an Internet response option will be critical to the success of the 2020 census.
Specifically, without fully implementing our prior recommendations on integrating dependent activities into the schedule, the Bureau continues to be unable to estimate reliable dates and make informed decisions based on an accurate critical path. Further, with about 8 months remaining before key design decisions need to be made, the Bureau has not established the methodologies for answering two Internet response option research questions that were deemed critical for the preliminary design decision—estimating the Internet self-response rate and determining the IT infrastructure for security and scalability needed. Accordingly, until clear plans for answering these questions are established and implemented, the Bureau will have limited information for beginning its development and implementation of systems and infrastructure. Additionally, even though the Bureau acknowledges the need to acquire a cloud solution to compensate for the fact that its existing IT infrastructure is not sufficient to support a wide-scale Internet response option for the 2020 census, it has not defined the high-level time frames for when key cloud computing decisions need to be made and implemented. As a result, with systems readiness testing for the complete 2020 census design planned to begin in October 2018, it is uncertain how the Bureau can ensure that there is sufficient time to accomplish this objective. Finally, the Bureau faces continuing gaps in mission-critical technical competencies, including cloud computing, security integration and engineering, and requirements development. Fully implementing the Bureau’s planned actions to close these gaps will be critical to ensuring that it has the skills needed to effectively deliver the 2020 census Internet response option. 
To ensure that the Bureau is better positioned to deliver an Internet response option for the 2020 Decennial Census, we are recommending that the Secretary of Commerce direct the Under Secretary for Economic Affairs to direct the Director of the Census Bureau to take the following three actions:
- ensure that the estimated costs associated with the Internet response option are updated to reflect significant changes in the program and to fully meet the characteristics of a reliable cost estimate;
- ensure that the methodologies for answering the Internet response rate and IT infrastructure research questions are determined and documented in existing or future project plans in time to inform key design decisions; and
- develop high-level time frames for selecting, testing, and deploying a cloud environment to guide the Bureau’s approach to enabling scalability for the 2020 census.
We received written comments on a draft of this report from the Department of Commerce, which are reprinted in appendix II. The department neither agreed nor disagreed with our recommendations, but provided comments that are discussed in detail below. In its comments, the department stated that our conclusion regarding the unreliability of estimated costs and savings for the Internet response option seemed to be based on the fact that the Bureau did not build a bottom-up estimate using documented parameters from previous censuses or research. The department also stated that it needed to conduct critical tests and research to inform key cost parameters and that it planned to revise the estimated costs and savings as research and testing efforts are completed. The department added that it believed its top-down estimates were sufficient for the purpose of identifying major cost drivers for the decennial census.
However, as stated in the report, our assessment took into account the Bureau’s top-down approach and that it had developed a less-rigorous, “rough-order-of-magnitude” estimate; we therefore performed a high-level analysis of the cost estimate and methodology. We also described what is expected of a rough-order-of-magnitude cost estimate and recognized that such an estimate is developed from limited data and in a short time. For example, regarding the lack of a work breakdown structure, our report notes that this should be initially set up when a program is established and updated with more detail over time as more information becomes known about the program. Nevertheless, even when taking these factors into consideration, the Bureau’s rough-order-of-magnitude estimate for the Internet response option was not reliable when assessed against best practices that are applicable to such estimates. For example, as we state in our report, the Internet response option estimate was not accurate, in part, because when the Bureau revised selected components of its estimate in 2014, it did not revise the Internet response option portion, despite significant changes to the program since the original 2011 estimate. These changes included the realization that additional IT infrastructure, beyond the Bureau’s existing infrastructure, would be needed in order to support the scale-up needed for Internet response data processing and storage. Given the weaknesses identified in our report, we continue to maintain that the Bureau cannot be assured that its top-down estimate was sufficient for the purpose of identifying major cost drivers for the decennial census. The department also recognized the need to improve its capabilities and skill sets in cost estimation and identified several planned and ongoing actions, such as standardizing a work breakdown structure, establishing enterprise guidance and policies for cost reporting, and continuing to hire and train certified cost estimators. 
As stated in our report, while these are important steps to institutionalize good estimating practices, until the Bureau updates the Internet response option cost estimate to ensure that it conforms to best practices, the estimate will continue to be unreliable. Accordingly, we maintain that actions to address our first recommendation are still needed. Regarding our conclusions on key challenges associated with delivering an Internet response option, the department stated that it believed it had developed project plans, methodologies, and time frames for making decisions related to IT infrastructure needs, including the use of cloud computing to support 2020 census requirements. The department further stated that the wording or descriptions of these activities in its documentation may use internal jargon that may make them difficult to find for those who do not regularly work on these activities, and that it would work to mitigate this issue. We disagree that a misunderstanding of the plans, methodologies, and time frames exists. During our review, we took appropriate measures to ensure that we had collected and analyzed the most current, complete, and accurate information available, including meetings with knowledgeable officials. Furthermore, at the conclusion of our review, we met with key officials from the 2020 Census and IT Directorates and confirmed the facts of our analysis, including the fact that project plans and methodologies for making decisions related to the IT infrastructure needs for the 2020 census did not exist.
Furthermore, at the conclusion of our review, we met with key officials from the 2020 Census and IT Directorates and confirmed the facts of our analysis, including the fact that project plans and methodologies for making decisions related to the IT infrastructure needs for the 2020 census did not exist. The department also commented that the documentation produced for the 2020 census design decision would include key dates for making the decisions (e.g., the need for and likely extent of cloud computing use), and revised estimates of key cost parameters (e.g., Internet response rates). The department stated that it had established a plan and an implementation team to produce this documentation, referred to as the 2020 Census Concept of Operations. While we recognize that the design decision would help inform some of the time frames related to cloud computing, our concern is that high-level time frames for when key decisions need to be made do not yet exist. Therefore, the Bureau does not have assurance that there will be enough time to overcome potential challenges in acquiring and effectively implementing cloud computing services, such as meeting federal security requirements, certifying and accrediting vendors, and ensuring data portability and interoperability. Further, while we support the Bureau’s efforts to ensure that the outcome of the design decision is appropriately documented, our report highlights the need for documenting the research methodologies that will be used to make those design decisions. As stated in our report, with about 8 months remaining until the design decision and with major tests already designed or completed, the Bureau has limited time to determine how critical research questions will be answered. Consequently, we maintain that actions are still needed to respond to our recommendations to develop key methodologies and high-level time frames for cloud computing decisions. 
The department provided two additional comments related to the background section of our report: First, the department noted concern that discussions of differential Internet availability among historically hard-to-count populations imply that this could lead to reduced coverage of these populations in the 2020 census and stated that the Bureau believed that this was unlikely to result in less coverage of these populations. The department explained that knowledge of the differentials would help the Bureau to effectively deploy outreach, partnership, social media, and other advertising investments in 2020. It further stated that the Internet would not be the only response option offered in 2020 because individuals without access to, or desire to use, the Internet will still be able to respond on paper, by phone, or in person with a census interviewer. We agree that the differential Internet availability among historically hard-to-count populations should not imply that the introduction of an Internet response option would lead to reduced coverage. To clarify that point, we modified the report to state that the Internet response option’s effect on historically hard-to-count populations is unclear. We intentionally did not draw conclusions about what the potential impact would be. The discussion of varying Internet availability is included to illustrate why the impact of an Internet response option on historically hard-to-count populations is a relevant topic to examine, and the report also notes that the Bureau plans to offer multiple self-response modes for the 2020 census, including paper and telephone. This information is introduced to provide context to the discussion of the Bureau’s completed steps and further plans to examine the impact of introducing an Internet response option on hard-to-count populations. 
Second, the department stated that it was uncertain whether the background section discussing prior related GAO reports took into account that some of the Bureau’s planned actions were not yet scheduled to be completed and expressed concern that this discussion implied that the Bureau had plans to complete all actions by this point in time. However, the section in question is intended only to summarize relevant prior GAO reports and recommendations and provide a brief status update on progress made to address these recommendations. As it is background information, we did not refer to or imply time frames for when the recommendations should be implemented. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Commerce, the Under Secretary for Economic Affairs, the Director of the U.S. Census Bureau, and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4456 or chac@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our objectives were to (1) describe the Census Bureau’s (Bureau) efforts to identify demographic groups likely to use Internet response and how they compare to historically hard-to-count populations, (2) assess the reliability of estimated costs and savings for the Internet response option, and (3) determine key challenges associated with delivering an Internet response option for the 2020 census. 
To describe the Bureau’s efforts to examine the impact of an Internet response option on the historically hard-to-count populations, we identified and reviewed studies conducted by the Bureau on demographic groups likely to use the Internet response option and how they compare to historically hard-to-count populations. We also conducted a literature review to identify other relevant studies on hard-to-count populations, Internet usage among demographic groups, and the impact of an Internet response survey on hard-to-count populations. We interviewed Bureau officials on their plans to further assess the impact of an Internet response option on historically hard-to-count populations. To assess the reliability of estimated costs and savings for the Internet response option, we obtained and analyzed the Bureau’s documentation supporting the 2020 census rough-order-of-magnitude life-cycle cost estimate, the Internet response option portion of the cost estimate, and the potential cost savings estimate. We compared the cost estimating methodology and documentation against best practices for developing reliable cost estimates identified in GAO’s Cost Estimating and Assessment Guide. When applying the best practices, we took into account that the Bureau developed the 2020 census cost estimate as a less-rigorous, “rough-order-of-magnitude” cost estimate, and therefore performed a high-level analysis of the Bureau’s cost estimate and methodology. We also interviewed Bureau officials to verify that our findings were accurate and discussed their approach to estimating costs and potential savings. Our report notes the instances where reliability impacts the quality of the cost estimate. To identify key challenges to delivering an Internet response option for the 2020 census, we identified relevant experts within the Bureau’s key advisory groups: the National Academy of Sciences, National Advisory Committee, and Census Scientific Advisory Committee. 
These advisory groups consist of academic and industry experts from various fields, including information technology and Internet survey design, and they meet with the Bureau regularly to provide feedback on various areas, including the 2020 census program. We interviewed relevant experts from these advisory groups to discuss their perspectives on key challenges the Bureau faces in implementing an Internet response option. We also analyzed documentation on the Bureau’s projects related to the Internet response option, such as project plans, schedules, risk registers, and monthly status reports; program-level documentation, such as 2020 census and IT strategy documents; and workforce action plans. We also interviewed Bureau officials from the 2020 census program and the IT Directorate to obtain information on progress made on the Internet response option. We compared the Bureau’s efforts against relevant guidance and best practices, such as those identified in GAO’s Schedule Assessment Guide, prior GAO work, the National Institute of Standards and Technology’s guidance on cloud computing, the Office of Management and Budget’s Federal Cloud Computing Strategy, and the Software Engineering Institute’s Capability Maturity Model® Integration, to determine if there were any gaps. We aggregated the results to identify the key challenges the Bureau faces in delivering an Internet response option for the 2020 census and presented the preliminary challenges that we identified to experts and Bureau officials to obtain their feedback on the challenges. We also observed Bureau activities related to the Internet response option, including 2020 census program management reviews, advisory group meetings, demonstration of the Internet response application and pre-registration portal used for the 2014 census test, and operations of the 2014 census site test. 
We conducted this performance audit from May 2014 to February 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Carol R. Cha at (202) 512-4456 or chac@gao.gov. In addition to the contact named above, the following staff made key contributions to this report: Shannin G. O’Neill (Assistant Director), Jason Lee, Jennifer Leotta, Lee McCracken, Dana Pon, and Jeanne Sung.
The U.S. Census Bureau plans to significantly change the methods and technology it uses to count the population with the 2020 Decennial Census, such as offering an option for households to respond to the survey via the Internet. This involves developing and acquiring IT systems and infrastructure to support the collection and processing of Internet response data. GAO was asked to review the Bureau's efforts to deliver an Internet response option for the 2020 census. GAO's objectives were to (1) describe the Bureau's efforts to identify demographic groups likely to use Internet response and how they compare to historically hard-to-count populations, (2) assess the reliability of estimated costs and savings for Internet response, and (3) determine key challenges associated with delivering an Internet response option. To do this, GAO reviewed Bureau studies, cost estimates, project plans, schedules, and other documentation and compared them against relevant guidance. GAO also interviewed Bureau officials and experts. The U.S. Census Bureau (Bureau) has taken preliminary steps and plans to further examine the impact of introducing an Internet response option on historically hard-to-count segments of the population (these include, but are not limited to, minorities, renters, children, low-income households, and low-education households). For example, the Bureau is applying lessons learned from its implementation of an Internet response option for another household survey, called the American Community Survey, which is conducted on a smaller scale than the decennial census. Additionally, the Bureau is planning two 2020 census field tests in 2015 that are expected to provide data on Internet response rates among different demographic groups, including the historically hard-to-count populations. The Bureau's preliminary estimated costs of about $73 million for the Internet response option are not reliable because its estimate did not conform to best practices. 
For example, the estimate has not been updated to reflect significant changes related to the Internet response option that have occurred since it was developed in 2011. Additionally, the unreliability of the Bureau's cost estimate for the Internet response option casts doubt on the reliability of associated potential cost savings estimates. Officials have recognized weaknesses in the Bureau's cost estimate and stated that they plan to update it based on a preliminary decision for the overall design of the 2020 census. While efforts to deliver an Internet response option are under way, the Bureau faces several scheduling, task, and capability challenges in developing such an option for the 2020 census, including the following: Key questions related to estimating the Internet self-response rate and determining the information technology (IT) infrastructure needed to support it may not be answered in time for the preliminary design decision, scheduled for September 2015. Specifically, the Bureau has not developed project plans and research methodologies for answering these questions. In November 2014, officials stated that they had recently begun working on establishing methodologies for answering these questions. However, Bureau officials do not know when the methodologies will be established or when project plans will be updated or created to reflect this new work. Until such plans and methodologies are established, concerns will persist as to whether these two critical questions will be answered in time to inform the design decision in September 2015. High-level time frames for making decisions related to implementing cloud computing (i.e., a means for enabling on-demand access to shared and scalable pools of computing resources), such as selecting, testing, and implementing a cloud environment that meets the Bureau's scalability, budget, security, and privacy needs, have not been established. 
While Bureau officials estimated that such time frames will be established around June 2015, until they are established the Bureau will lack assurance that it has enough time to successfully implement a cloud environment prior to system testing, which is to begin in 2018. GAO recommends that the Department of Commerce's Census Bureau update estimated costs for the Internet response option and ensure they are reliable, develop methodologies for answering key research questions, and establish high-level time frames for cloud computing decisions. The department neither agreed nor disagreed with the recommendations.
OCC closed Rushville on December 18, 1992, on the basis of its determination that the bank had a negative net worth of about $326,000 and thus was insolvent. At the time, Rushville was a two-branch bank with $38 million in assets. During that same year, OCC also closed 15 other small U.S. banks with assets under $50 million. Rushville had been the subject of OCC scrutiny since at least 1978 when it entered into a memorandum of understanding with OCC in which Rushville directors agreed to correct such insider abuses as excessive insider fees and overdraft payments to directors. The Rushville directors consented to an OCC cease-and-desist order in June 1983 directing the bank to correct unacceptable lending practices, and the directors consented to an amended order OCC issued a year later in response to questionable expense payments to insiders. In 1984, OCC took the first of four civil money penalty actions against several Rushville directors on the basis of violations of banking laws. Finally, on November 12, 1992, OCC suspended the chairman from participating in the affairs of the bank for engaging in unsafe and unsound practices, and, on December 11, 1992, all but one bank director resigned from the board. Appendix II shows key events affecting Rushville, with a focus on events surrounding OCC’s closure of the bank. OCC assessed 14 Rushville directors civil money penalties totaling $374,000 from 1984 to 1993. OCC based these penalties on the individuals’ having undertaken prohibited actions, such as improper payments of Rushville directors’ legal fees; multiple violations of limits on loans to executive officers; and illegal loans to its holding company. When we completed our audit work in April 1998, civil money penalties of $295,000 assessed against bank directors had not been paid. 
From the mid-1980s until after Rushville’s closure, Rushville directors initiated a number of lawsuits seeking redress in the federal courts for what the directors believed were wrongful actions by federal banking regulators. In response to OCC’s 1985 civil money penalty assessment, the directors initially requested an administrative hearing to contest the penalties, but subsequently requested that OCC’s civil money penalty assessment be dismissed. The directors asserted that OCC did not have the authority to assess these civil money penalties. When the administrative law judge denied the request for dismissal, the directors unsuccessfully filed suit in the district court questioning OCC’s authority to assess civil money penalties. When the court found that it lacked jurisdiction to hear the case, the directors again requested an administrative hearing. The administrative law judge upheld the original penalty assessment. In June 1989, at the end of the administrative process, OCC imposed the penalties. The directors unsuccessfully appealed OCC’s decision to the Seventh Circuit Court of Appeals and the Supreme Court. In 1994, the directors filed suit in district court seeking money damages from the government for various common law and constitutional torts alleging that OCC violated the Administrative Procedure Act when it closed Rushville. The case was dismissed by the district court and the appeal at the Seventh Circuit Court of Appeals was dismissed. Subsequently, the directors filed three suits seeking redress. One suit alleged violations of the constitutional due process rights of Rushville’s directors by individual OCC officials. Another suit appealed OCC’s imposition of penalties and restitution in the early 1990s. Both suits were dismissed by the courts. The remaining suit seeking compensation for OCC’s seizure of the bank as a taking under the Fifth Amendment is still pending. 
Rushville directors have questioned whether OCC followed its policies and procedures in closing Rushville and have alleged that OCC misclassified performing loans during its last examination to make it appear that the bank was insolvent. Accordingly, you asked us to review whether OCC followed its policies and procedures in its net worth calculation and loan classifications during its final examination before Rushville’s closure. Although our in-depth review of 25 problem loans found that many loan classifications were not well documented, we found that, except for the lack of documentation, examiners followed established OCC procedures and generally accepted accounting principles in their net worth calculation and loan classifications. Our review found that OCC’s net worth calculation showing that Rushville was insolvent by about $326,000 was essentially based on examiners’ determination that the bank lacked sufficient equity to cover estimated loan losses from problem loans. Documentation from past examinations and interviews indicate that, over the previous decade, OCC had been critical of Rushville’s allowance for loan losses, which represented a reserve for bad debts. In 1983, the directors consented to an OCC cease-and-desist order that required Rushville to maintain a capital ratio of 7 percent. In December 1992, Rushville’s capital ratio was at minus 1 percent, which was well below the level required by the cease-and-desist order. Following generally accepted accounting principles, OCC examiners determined Rushville’s net worth during the December 1992 examination by valuing its loan portfolio and then determining the amount of equity the bank should be setting aside to absorb losses. By comparing the two amounts, the examiners determined that Rushville lacked sufficient bank equity to cover loan losses. Table 1 shows the calculation that OCC used to make its determination. 
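The insolvency determination described above is, at bottom, straightforward arithmetic: book equity less examiner-directed charge-offs and required reserves. The following Python sketch illustrates the form of that calculation. It is illustrative only, not a reconstruction of table 1: the charge-off ($694,858) and specific-reserve ($736,800) amounts are the figures cited in this report, but the book equity and general-reserve figures are hypothetical placeholders, so the resulting net worth differs from OCC's actual $326,000 insolvency figure.

```python
def adjusted_net_worth(book_equity, charge_offs, specific_reserves,
                       general_reserve_shortfall):
    """Examiner-adjusted net worth: book equity minus charge-offs and
    required reserves. A negative result indicates book insolvency."""
    return (book_equity - charge_offs - specific_reserves
            - general_reserve_shortfall)

nw = adjusted_net_worth(
    book_equity=1_150_000,             # hypothetical placeholder
    charge_offs=694_858,               # per OCC's final examination
    specific_reserves=736_800,         # per OCC's final examination
    general_reserve_shortfall=50_000,  # hypothetical placeholder
)
print(nw)  # -331658, i.e., insolvent under these illustrative inputs
```

Under this sketch, a bank is insolvent whenever its equity cushion is smaller than the losses examiners expect its problem loans to produce, which is precisely the comparison OCC's examiners made in December 1992.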
We reviewed OCC workpapers from the final examination of Rushville and traced the amounts shown in table 1 back to the bank’s accounting records, and we met on several occasions with OCC officials to discuss the net worth calculation. After reviewing the OCC calculations, we found OCC’s determination to be appropriate. In a few instances, we found that OCC examiners chose to apply less stringent bases for their net worth calculation than they might have applied. For example, when OCC examiners calculated Rushville’s general reserve, they chose to use a 3-year average (1989 through 1991), which resulted in a lower general reserve amount. Examiners could have used the 3-year period of 1990 through 1992—the year of Rushville’s closure—which would have increased the reserve requirements by $53,000. However, examiners used the earlier 3-year average because they wanted to use a period that would not be influenced by their 1992 loan classifications. The largest OCC adjustment during the December 1992 net worth calculation was the $736,800 that examiners established for specific reserves, as shown in table 1. Examination records indicated that OCC examiners based this adjustment on their judgments (1) that borrowers were unlikely to repay some questionable loans and (2) that the supporting collateral was weak. We focused most of our attention on how the examiners supported this part of the net worth calculation because the specific reserves had the greatest impact on Rushville’s insolvency determination. The results of our review of Rushville’s loan portfolio are discussed in the following section. Workpapers from OCC’s final Rushville examination documenting the loan portfolio analysis indicated that examiners focused on a large number of problem loans, which are loans posing a greater than normal risk of default. OCC examiners reviewed 134 loans, or 77 percent of Rushville’s loan portfolio, which included 175 loans with a combined value of about $17.8 million. 
Of this 175-loan portfolio, examiners classified loans totaling $5.9 million as problem loans. OCC further documented decisions reached on 71 of these problem loans (64 reclassified loans from the previous examination and 7 newly classified loans) through a process called a migration analysis, which OCC uses in such cases to compare classifications and explain the basis for changes. Our review of these comparisons was the first step we took in reviewing the Rushville directors’ allegation that OCC inappropriately downgraded sound loans, thereby causing Rushville’s insolvency. We reviewed all 71 loans from OCC’s migration analysis and then focused on 20 of these loans that had outstanding balances of over $100,000. The loan comparison was a key starting point because it listed loans with a classification that had changed from the previous year. Rushville directors were concerned that many loans were improperly classified in 1992. In addition, we identified for further study five other loans from OCC’s loan comparison that had relatively large outstanding balances and for which OCC calculated specific reserves. These loans were not sufficiently documented by OCC for us to initially determine why the loans were written down. OCC examiners told us that many of the bank’s loans were poorly documented, and that they had to exercise considerable judgment in classifying Rushville loans because of poor loan records and the departure of Rushville directors and loan officers who were most knowledgeable about the loans. The anticipated losses from the 25 problem loans we reviewed represented over 88 percent of the $694,858 in OCC charge-offs and over 77 percent of the $736,800 in OCC-calculated specific reserves. For 12 of the loans in our sample, where it was possible to identify the underlying rationale for the classification, OCC’s classifications seemed justified. For the remaining 13 loans in our sample, it was not completely clear how examiners arrived at their classifications. 
Such insufficient documentation was an agency problem we previously identified during an earlier 1993 review of the quality of OCC’s examinations. We also noted that the limited documentation OCC had on some of the Rushville loans exceeding $100,000 did not meet agency requirements that were in effect at the time of Rushville’s closure in 1992. OCC’s procedural guidance at that time required sufficient documentation of significant write-downs for loans over $100,000 so that a reviewer could understand the rationale for the write-down. To better understand the basis for OCC’s classifications of the remaining 13 sampled loans, we reviewed available Rushville loan files maintained by FDIC in Chicago and Dallas. We also asked OCC examiners and Washington, D.C., staff to summarize the factors influencing their classification of these loans. The FDIC files contained actual bank documents on some of the 13 loans, including loan files, security agreements, minutes from board meetings, and various legal documents, but we were unable to find key documents that would have allowed us to fully ascertain the basis for the examiners’ loan classifications for the 13 loans. In these cases, we used OCC summaries prepared at our request that generally supported OCC officials’ statements asserting that examiners used accepted agency norms for valuing loans. On the basis of the additional information provided by OCC on these loans and the lack of conflicting information in Rushville loan files maintained by FDIC, OCC’s classifications appeared appropriate. Finally, to ascertain the disposition value of the 13 problem loans, we asked FDIC to tell us the amount it received on the loans or their collateral at the time of their disposition. FDIC records showed that the 13 loans were sold with other FDIC assets, written off by FDIC, or sold for lower amounts than those that were shown on Rushville’s records following negotiations with the borrower. 
On average, FDIC received 35 percent of the loan’s book value, not considering the reserves for loan losses. Specifically, 10 of the loans were disposed of at an amount lower than the value projected by OCC, and the other 3 loans were sold for an amount higher than the amount OCC initially projected. (See app. III.) OCC closed Rushville 1 day before new regulatory procedures for closing problem institutions became effective on December 19, 1992, following the enactment of FDICIA. Rushville directors have expressed the belief that these new FDICIA procedures would have allowed the bank to remain open for at least an additional 90 days, and they have alleged that OCC closed the bank before the procedures came into effect to prevent Rushville’s directors from restoring Rushville to solvency. Accordingly, you asked us to review whether OCC closed Rushville before FDICIA’s implementation to prevent the bank from remaining open for at least another 90 days, and whether other banks were closed before the implementation of FDICIA to avoid having to keep them open under FDICIA. Our analysis of FDICIA and its effect on Rushville’s closure indicated that FDICIA’s implementation would not have provided an additional time period in which OCC would have let Rushville stay open. We found that OCC did not appear to have taken actions to quickly close other banks to avoid the effects of FDICIA’s implementation. To obtain evidence and views on whether FDICIA affected Rushville’s closure, we reviewed the requirements of FDICIA and met with Rushville directors and with OCC officials. Contrary to the Rushville directors’ belief that FDICIA allows for more lenient treatment of problem institutions, FDICIA’s capital provisions direct federal banking regulators to take prompt corrective action to resolve the capital weakness of institutions that fall below minimum capital standards. 
Specifically, the FDICIA provisions require OCC to promptly close critically undercapitalized banks, such as Rushville. The provisions, which supplement rather than limit or replace OCC’s existing enforcement authority and earlier capital adequacy guidelines, give OCC up to 90 days to recapitalize, sell, or close such banks. However, OCC can take action to close such banks at any time during the 90-day period. OCC officials said they would not have delayed closing Rushville, given its insolvency and its inability to raise capital. Our review of other banks closed in the 6 months before and after the closure of Rushville did not find evidence that the other banks were treated more favorably. In the 6 months before Rushville’s closure, OCC closed 22 national banks, although 13 of the 22 banks were subsidiaries of a single holding company. We found that OCC closed these banks for reasons similar to those that were the basis for Rushville’s closure (i.e., insider abuse, inadequate reserves, and weak loan administration). Our review of these banks’ closing books indicated that OCC did not hasten their closure so as to close them before FDICIA came into effect. Similarly, we did not find evidence that other banks closed during the 6 months after Rushville’s closure were treated more favorably than Rushville. In the 6 months after Rushville’s closure, OCC closed 15 national banks. We did not find that these banks were allowed to remain open after FDICIA came into effect without an active plan for their recapitalization. Rushville directors alleged that, in early 1992, OCC conspired to close Rushville by contacting Liberty National Bank of Louisville (Liberty) to suggest that it call in an $800,000 loan to Rushville’s holding company, Hoosier Bancorp, which was collateralized by Rushville stock. 
In response to your request that we examine whether OCC was involved with the recall of the Hoosier Bancorp loan and determine whether OCC followed its policies and procedures in this matter, we reviewed OCC and Liberty documents but found no evidence that OCC’s contacts with Liberty were contrary to OCC policies and procedures. Our interviews with OCC officials and others found no support for the Rushville directors’ claim that OCC asked Liberty to recall the Hoosier loan. OCC officials and Liberty officers stated that OCC had not attempted to influence the recall of the Hoosier Bancorp loan. Moreover, our review of internal OCC documents and trial depositions did not reveal any evidence that OCC officials had asked Liberty officers to recall the loan. Officers of the creditor bank told us that an internal loan review committee identified the Hoosier loan as a problem in 1990 because the Rushville bank stock that collateralized the loan was of questionable value and they doubted the Rushville chairman’s capacity to repay the loan. Our review found that communication between Liberty and OCC during the period immediately preceding the 1992 loan recall involved Liberty officials’ initiating contacts with OCC officials to inform them of the bank’s intent to recall the loan and to later inform them about Rushville and its directors’ lawsuits against Liberty. OCC officials told us that although they might direct a bank to improve its loan portfolio, they would not direct a national bank to initiate a loan recall because such an action would necessitate OCC’s sharing information among banks. According to the officials, OCC would share information among banks only in situations where there is a compelling supervisory reason, such as when it learns of criminal activity that affects other banks. 
OCC examiners in Louisville said that they were never told by OCC officials in the Chicago district office or Washington, D.C., headquarters to ask Liberty to recall the Hoosier Bancorp loan. Examiners explained that the recall was strictly a business decision by bank officials in which they were not involved. Liberty officers told us that they sought repayment of Liberty’s loan to Hoosier Bancorp in 1992 because the Rushville bank stock that collateralized the loan was of questionable value. Liberty officers conducted several examinations of Rushville and were concerned about its poor financial condition. The officials said they were also prompted to seek repayment by their doubts about the Rushville chairman’s capacity to repay the loan. Records also show that Liberty sought termination of the Hoosier loan on two previous occasions. Liberty officers said their final recall decision was partly based on their concern that Rushville could be closed and their collateral rendered worthless under the prompt corrective action provisions of FDICIA. Liberty officers told us that they were also prompted to recall the loan by the November 12, 1992, suspension of the chairman from participating in managing Rushville. In retrospect, they said that their concerns about the chairman’s repayment ability were borne out by his failure to pay any of the outstanding loan amount since November 1992. A 1997 federal court judgment affirmed that Hoosier Bancorp and the chairman were liable for the full amount of the loan. Finally, we found no evidence that OCC examinations of Liberty in the 3 years before the recall influenced Liberty’s decision to seek the recall by singling out the Hoosier Bancorp loan as a problem loan warranting special attention. Reports on OCC examinations of Liberty did not list the Rushville loan as a problem loan until 1992. That year’s OCC report on the Liberty examination mentioned the loan as one of Liberty’s large problem loans. 
In the examination report, OCC agreed with Liberty’s internal classification of the Hoosier Bancorp loan and with Liberty’s allowance for losses on the loan. Rushville directors alleged that the penalties OCC assessed against them since the 1980s were arbitrary and excessive, and that OCC arbitrarily assessed several directors penalties because they were publicly critical of OCC. Accordingly, you asked us whether OCC had a process for determining penalties, whether the process was followed in the Rushville case, and whether the Rushville penalties were excessive. We found that OCC examiners and managers appear to have followed agency guidance in assessing penalties. However, we were unable to determine how OCC set many penalty amounts because documentation was incomplete, missing, or unavailable due to the length of time that has elapsed since many of the penalties were assessed. We did not find that OCC arbitrarily assessed directors penalties because they were publicly critical of OCC, as alleged by Rushville directors. In addition, we found that the penalties OCC assessed the Rushville directors, including the $250,000 penalty OCC assessed the Rushville chairman, while higher than average, were not the highest OCC has assessed directors and officers of other banks since 1989. OCC’s process for determining civil money penalties is a multistep process involving examiners and officials in the applicable OCC district office and Washington, D.C. After identifying violations and in concert with OCC district officials, examiners consider whether actions by responsible bank officers or directors warrant their recommending a money penalty and, if so, what level the penalty should be. These recommendations are sent to OCC staff in Washington, D.C., for review and analysis. The staff presents the case to OCC’s Supervision Review Committee, which is made up of senior OCC officials. 
The committee makes a recommendation to a Senior Deputy Comptroller, who determines the final recommended penalty amounts. In the course of determining whether to assess a civil money penalty and the amount of the penalty, OCC issues a 15-day letter to affected individuals soliciting their views. At this point, the director or officer is provided an opportunity to negotiate the penalty. If the penalty assessment is contested, the case is brought before an independent administrative law judge. The judge’s decision is sent to the Comptroller of the Currency for the final determination of the penalty. OCC’s determination may be appealed to a U.S. court of appeals. Evidence we reviewed indicated OCC followed its policies and procedures for the penalties it assessed against Rushville directors in the 1990s. We were not able to come to a similar conclusion on the penalties assessed in the 1980s because complete documentation was not available. Table 2 shows the amounts and resolution of the penalties OCC assessed Rushville directors since 1984, the first year penalties were assessed against the directors. OCC was able to provide us with limited documentation in support of the penalty amounts it assessed in the 1980s. OCC officials told us that, with the passage of 14 years, it was difficult for them to locate additional records pertaining to some of the penalties assessed in 1984 and 1985. OCC’s inability to locate such records limited our ability to determine how the amounts were chosen. Moreover, we noted that OCC’s penalty assessment procedures at that time did not include guidance on the possible ranges of penalty amounts that could be assessed. We found that OCC initially set the amounts of the penalties it assessed in 1992 and 1993 on the basis of a penalty assessment matrix it began using in January 1991. 
The penalty matrix provides guidance for examiners to use in determining whether to assess civil money penalties and the amount of such penalties. The matrix, which is intended to make the process of civil money penalty assessment consistent and equitable, weights such factors as severity, intent, pecuniary gain, loss to the bank, and concealment. Although district examiners initially based the penalties they assessed in 1992 on penalty matrices, we found that many penalty amounts were increased as the penalty assessments went through OCC’s review process. We found that OCC procedures allow for such increases when examiners-in-charge and OCC officials believe circumstances warrant them. Specifically, written OCC policies and procedures emphasize that the matrix is only a guide to use in determining an appropriate penalty. OCC policies and procedures state that the matrix was not intended to reduce the penalty assessment process to a mathematical equation, and that it should not be a substitute for sound supervisory judgment. In setting the 1992 Rushville penalties, OCC appears to have followed its procedures that allow for such increases, but for two of the four assessments we found little documentation to support the increases in amounts or the use of factors not covered in the penalty matrix as the basis for setting penalties. Specifically, documented explanations for the 1992 penalty amounts were missing or incomplete in the following two instances. The $20,000 assessment against the chairman and the $15,000 assessment against a director reflected $10,000 and $5,000 increases, respectively, beyond the level the matrix recommended. The district staff’s recommendation for a higher amount was based on similar noncompliance by the chairman and director almost a decade earlier and by their continuing disregard for a cease-and-desist order. The district’s recommendation did not explain how the increases were chosen. 
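As a rough illustration of the weighted-factor approach such a matrix takes, the sketch below scores hypothetical examiner ratings. The factor names come from the report, but the weights, the 0-4 rating scale, the thresholds, and the recommended actions are invented for illustration and do not reflect OCC's actual matrix.

```python
# Illustrative sketch only: factor names are from the report; all weights,
# ratings, and thresholds below are hypothetical, not OCC's actual values.
FACTOR_WEIGHTS = {
    "severity": 3,
    "intent": 3,
    "pecuniary_gain": 2,
    "loss_to_bank": 2,
    "concealment": 2,
}

def matrix_score(ratings):
    """Weighted sum of examiner ratings (0-4) for each factor."""
    return sum(FACTOR_WEIGHTS[factor] * rating
               for factor, rating in ratings.items())

def suggested_action(score):
    # Hypothetical thresholds mapping a score to a recommended action.
    if score < 10:
        return "letter of reprimand"
    if score < 25:
        return "penalty up to $10,000"
    return "penalty above $10,000"

# Example: moderately severe, intentional conduct with some bank loss.
ratings = {"severity": 3, "intent": 2, "pecuniary_gain": 1,
           "loss_to_bank": 2, "concealment": 0}
score = matrix_score(ratings)
print(score, suggested_action(score))
```

Consistent with OCC's written policies as described above, such a score would be only a guide, and the final penalty could be adjusted upward or downward based on supervisory judgment.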
Washington, D.C., staff disagreed with the district recommendation, arguing that the penalty matrix takes into account all of the circumstances that should be considered in assessing a penalty. The district’s recommendation was accepted by the Supervision Review Committee because of the chairman’s and the director’s significant and long-standing noncompliance. Penalties against three directors were set at levels exceeding the initial recommended amounts that district examiners calculated using the matrix. In these instances, the matrix prepared by the district staff recommended a letter of reprimand, but OCC’s district office assessed the directors $10,000 each. The rationale given for the increase was that setting a penalty of less than $10,000 would imply that OCC viewed the directors’ current noncompliance as less serious than their previous noncompliance, which resulted in a $10,000 assessment, according to OCC documents. OCC officials said they also based the penalties they assessed in 1993 on penalty matrices. However, OCC was not able to furnish us with the applicable matrices because they could not be located. Other documents in OCC files provided insight into the rationale for the 1993 assessments, but less insight into how penalty amounts for two of the three assessments were calculated. Specifically, documented explanations for the 1993 penalty assessments were missing or incomplete in the following two instances. The $25,000 penalty assessed against a director was first proposed to be $10,000. However, the Supervision Review Committee increased the penalty to $25,000 because the director was the nominal recipient of a $300,000 loan, which caused a loss of about $300,000. Although we found no documentation explaining how OCC arrived at the $25,000 penalty amount, an independent administrative law judge found that the assessment could have been as much as $5 million. 
The Comptroller of the Currency subsequently adopted the $25,000 amount. The $250,000 penalty assessed against the chairman was based on the district’s recommendation of a penalty amount of $125,000 or more. Although the Supervision Review Committee cited the chairman’s demonstrated “reckless disregard” for the law, for the soundness of the bank, and for his own fiduciary duties as the reason for assessing $250,000, we found no documentation explaining how OCC calculated the increased amount. OCC officials told us that there are numerous violations described in a variety of OCC documents justifying the $250,000 penalty. An independent administrative law judge found that the assessment could have been as much as $1.7 million, but subsequently the Comptroller of the Currency adopted the $250,000 amount. Evidence we reviewed indicated that OCC appears to have followed its policies and procedures in assessing penalties against Rushville directors who were publicly critical of OCC. Specifically, we did not find that OCC arbitrarily assessed two directors penalties because of their public comments, as alleged by Rushville directors. We found the directors were actually assessed penalties for various violations, including their noncompliance with a cease-and-desist order. Our review indicates that certain public statements by Rushville directors in newspapers in July 1993 were made after penalties were assessed and thus could not have influenced OCC penalty determinations in January 1993. OCC officials told us that it is common for bank officials to make statements critical of OCC after having civil money penalties assessed against them or having their banks closed. OCC officials said that the penalty-setting process does not consider these comments, and that such comments by Rushville officials were not part of the penalty determination. We found that the $250,000 penalty OCC assessed against the Rushville chairman was not the highest penalty OCC had assessed since 1989. 
Over the past 9 years, OCC assessed 21 individuals $250,000 or more. Twelve of these individuals were assessed over $250,000, of which 5 were assessed $1 million, and the highest amount assessed against 1 individual was $1.9 million. Our comparison of OCC’s penalty assessment for the chairman to its assessments in four similar cases involving penalties of more than $250,000 indicated that OCC did not appear to have applied a more stringent standard to the Rushville assessment. Former Rushville directors alleged that OCC prevented the chairman from selling his bank stock. You asked us whether OCC followed its policies and procedures in its involvement with the proposed sale of Rushville stock. In addition, you asked (1) whether OCC procedures and practices prevented a director or officer of a bank from selling stock in the bank and (2) how many times in the last 5 years OCC had prevented a director or officer from selling stock. Our review of OCC procedures and practices indicated that OCC does not prevent bank directors or officers from selling stock. Furthermore, our review of documentation and discussions with OCC officials provided no evidence that OCC had prevented a director or officer from selling stock during the last 5 years or that it would prevent such a stock sale in the future. Regarding this allegation, a number of the Rushville directors claimed that an OCC attorney expressly stated at the November 12, 1992, meeting at which the chairman was suspended from banking that he could not sell his Rushville stock. To better understand this allegation, we interviewed Rushville directors and OCC officials present at the meeting and reviewed their affidavits on the matter. We did not find any documentation substantiating the allegation that OCC officials prohibited the sale of stock. The OCC officials we interviewed denied the Rushville directors’ claim. 
These officials told us that it is OCC’s policy to approve the sale of stock by a suspended bank director or officer, and they said, generally, that their only concern is that a suspended director or officer not continue to be involved with the bank’s affairs. Specifically, the officials told us that at the suspension meeting, they told the chairman that he was being suspended, and then an OCC attorney read aloud the applicable banking law under which he was being suspended. OCC officials told us that a misunderstanding could have occurred because of the complicated language of the law and the adversarial nature of the suspension meeting. The law read to the chairman says that a person subject to a suspension order cannot transfer or attempt to transfer voting rights in any institution, but the law does not address the subject of stock sales. The suspension order presented to the Rushville chairman did not address the issue of whether he could sell his Rushville stock. OCC officials told us that they provided the chairman with no written guidance or instructions, and they said that OCC has no written procedures on steps to take in a suspension because suspensions occur so infrequently. In response to a lawsuit filed to allow the chairman to sell his stock, OCC officials sent a letter to the chairman’s attorney on December 4, 1992, telling him that the chairman could sell his stock. However, the officials stated in the letter that OCC would have to approve such a sale to ensure that the person purchasing the stock had no connection to the chairman. On December 23, 1992, the Department of Justice also notified the chairman’s attorney that the chairman could sell his stock. Following its closing, Rushville’s assets were acquired by FDIC in its role as the liquidator for the Bank Insurance Fund. 
Liquidation, which is the next step after an insolvent bank’s closure, is the process by which FDIC disposes of a bank’s assets and attempts to recover the costs it incurs in closing a bank. The final liquidation loss for Rushville amounted to about $8.8 million. This $8.8 million in liquidation costs can be broadly categorized as $2.4 million in liquidation expenses, which represent FDIC’s operational costs, and $6.4 million in losses from the disposition of Rushville assets, according to FDIC documents. Operational costs represent the cost of FDIC personnel directly assigned to the Rushville liquidation; other FDIC personnel supporting the liquidation; and professional fees for auditors, tax accountants, and appraisers. Losses from the disposition of assets mostly represented losses from the sale of Rushville’s commercial and real estate loans. Liquidation costs were partly offset by revenues from interest on performing loans and earnings from Rushville’s securities. We requested comments on a draft of this report from OCC, FDIC, and the Federal Reserve. OCC generally agreed with the draft report’s contents (see app. IV). FDIC and the Federal Reserve neither expressed any concerns nor offered any comments. We are sending copies of this report to the Ranking Minority Member of your committee, the Indiana congressional delegation, other interested congressional committees, the Comptroller of the Currency, the Chairman of the Federal Deposit Insurance Corporation, the Chairman of the Board of Governors of the Federal Reserve System, and other interested parties. We will also make copies available to others upon request. Major contributors to this report are listed in appendix V. Please contact me at (202) 512-8678 if you or your staff have any questions. 
To determine whether OCC followed its policies and procedures in calculating Rushville’s net worth and classifying the bank’s loans, we met with OCC officials and staff in Washington, D.C.; Chicago, IL; Indianapolis, IN; and Louisville, KY. In addition, we reviewed documentation available at each of these locations, including examination workpapers. We also asked the Rushville directors to indicate which loans they believed were misclassified. Since the directors did not provide us with a list of misclassified loans, we focused our attention on 71 problem loans (64 reclassified loans from the previous examination and 7 newly classified loans) that were in OCC’s migration analysis. We then focused on 20 loans that had outstanding balances exceeding $100,000. In addition, from OCC’s migration analysis we identified for further study five smaller loans that had relatively large outstanding balances and for which OCC had calculated specific reserves. We also reviewed Rushville loan files maintained by FDIC and discussed the final examination and closure with FDIC staff in Washington, D.C.; Chicago; Indianapolis; and Dallas; and the Federal Reserve staff in Washington, D.C., and Chicago. Staff in our Accounting and Information Management Division also reviewed OCC’s net worth calculation to determine whether OCC followed generally accepted accounting principles. To determine the impact of FDICIA on the closure of Rushville, we met with Rushville directors and their attorneys to discuss their views on Rushville. In addition, we discussed their allegations with OCC officials and staff in Washington, D.C.; Chicago; Indianapolis; and Louisville and reviewed OCC documentation. We also reviewed OCC’s closing books on 18 banks closed before and after the implementation of FDICIA. Additionally, we discussed FDICIA implications with the Federal Reserve staff in Washington, D.C., and Chicago. 
Our Office of the General Counsel also reviewed legal issues concerning FDICIA and the Rushville closure. To determine whether OCC contacts with Liberty regarding the recall of the Hoosier Bancorp loan followed OCC policies and procedures, we met with directors from Rushville and several Liberty officers and reviewed documentation they provided. In addition, we discussed the recall allegation with OCC officials and staff in Washington, D.C.; Chicago; Indianapolis; and Louisville. We also reviewed OCC documentation on contacts with Liberty. To determine whether OCC followed its policies and procedures in assessing civil money penalties, we met with directors from Rushville and reviewed documentation they provided. In addition, we discussed the Rushville civil money penalty allegation with OCC officials and staff in Washington, D.C.; Chicago; and Indianapolis. We also reviewed available documentation at each location and discussed this allegation with the Federal Reserve staff in Washington, D.C. Our Office of the General Counsel also reviewed the legal questions concerning the assessment of civil money penalties. To ascertain the nature of OCC’s involvement with the proposed sale of Rushville stock by the chairman and whether OCC followed its policies and procedures, we met with Rushville directors and reviewed documentation they provided. In addition, we discussed the proposed Rushville stock sale allegation with OCC officials and staff in Washington, D.C., and Indianapolis and also reviewed available documentation at each location. Additionally, our Office of the General Counsel reviewed the legal issues concerning the chairman’s suspension as it related to the proposed sale of stock, and we discussed this allegation with a Justice attorney who had responsibility for representing OCC in this matter. Finally, we reviewed various documents provided to us by several sources. 
The directors of Rushville gave us documents that they considered relevant, including a chronology of events and related depositions. We reviewed approximately 100 boxes containing OCC documents subpoenaed by your office and Rushville loan files maintained by FDIC. We also reviewed OCC’s supervisory monitoring system files on Rushville and Liberty, which provided a comprehensive picture of the background, condition, and status of the banks and OCC’s supervisory plans. In addition, we reviewed legal documents from federal courts involving recent court proceedings regarding Rushville directors. We conducted our review from August 1997 through April 1998 in accordance with generally accepted government auditing standards.

OCC examination disclosed unacceptable lending practices and deterioration in Rushville’s overall condition.
Rushville directors consented to an OCC cease-and-desist order addressing criticized loans, loan policy, credit and collateral exceptions, loan review, allowance for loan and lease losses, budgets, expenses, and conflicts of interest.
Rushville directors consented to an OCC cease-and-desist order requiring the appointment of a president and a certified accountant.
A national newspaper listed Rushville as one of the nation’s most troubled banks.
A Liberty officer contacted OCC to inform it that Liberty intended to demand payment in full on the Hoosier Bancorp loan.
OCC began its final examination of Rushville.
OCC suspended the bank chairman for engaging in unsafe and unsound practices and insider abuse.
The State of Indiana withdrew its funds, thereby straining liquidity.
Liberty declared the holding company loan in default and demanded principal and interest.
OCC asked for an FDIC-assisted purchase of Rushville or payout of insured deposits.
Four Rushville directors resigned.
OCC examination showed the bank was capital insolvent and its liquidity seriously impaired.
The Comptroller of the Currency declared Rushville insolvent, and FDIC was appointed its receiver. OCC closed Rushville.

Kane A. Wong, Assistant Director
Gerhard C. Brostrom, Communications Analyst

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the events leading up to the 1992 closure of the Rushville National Bank by the Office of the Comptroller of the Currency (OCC), focusing on whether OCC followed its policies and procedures in its: (1) net worth calculation and loan classifications, which led to Rushville's being declared insolvent; (2) decision to close the bank before implementation of the Federal Deposit Insurance Corporation Improvement Act (FDICIA); (3) contacts with the bank that recalled a loan to Rushville's holding company; (4) determination of civil money penalties assessed against the former Rushville chairman and directors; and (5) involvement with the proposed sale of Rushville holding company stock by Rushville's suspended chairman. GAO noted that: (1) during its review, it found that OCC properly calculated Rushville's net worth; (2) also, GAO did not find evidence that OCC's loan classifications or insolvency determination were improper; (3) although some calculations and classifications were based to a great extent on examiner judgment, the examiners' net worth calculation and loan classifications followed OCC procedures; (4) however, GAO's review of loan classifications was made more difficult by the lack of certain documentation; (5) GAO determined that FDICIA's prompt corrective action provisions would not have allowed Rushville to remain open longer; (6) Congress enacted FDICIA to eliminate delays in the closure of problem institutions, and OCC officials told GAO that, for that reason, even if they had not had a pre-FDICIA basis to close Rushville, they would have closed the bank without delay once FDICIA was implemented; (7) in GAO's review of OCC electronic mail and related documents, it found no support for the allegation that OCC tried to close Rushville by seeking to influence the recall of a loan made by a creditor bank to Rushville's holding company; (8) OCC officials and officers of the creditor bank told GAO that OCC 
never attempted to influence the recall of the holding company loan; (9) officers of the creditor bank told GAO that they first sought repayment of the loan in 1990 because the Rushville bank stock that collateralized the loan was of questionable value and they doubted the Rushville chairman's capacity to repay the loan; (10) regarding the penalties assessed against Rushville directors, GAO found that OCC followed its policies and procedures; (11) however, in a number of instances in the 1990s, the penalties ultimately assessed by OCC were higher than those originally proposed by district officials; (12) while documentation was insufficient for GAO to ascertain how the OCC amounts were determined, OCC procedures allow for such penalty adjustments when circumstances warrant; (13) GAO found no evidence substantiating the Rushville directors' assertion that an OCC official told the Rushville chairman during the meeting at which he was suspended that he could not sell his stock; and (14) moreover, when OCC became aware of the misunderstanding, OCC sent a letter to the chairman stating that he could sell his stock subject to OCC approval.
DOD relies increasingly on globally networked computer systems to manage the information it uses to perform operational missions and daily management functions. These systems provide military offensive and defensive capabilities as well as intelligence support. According to DOD, the department operates 2 million to 3 million computers, 100,000 local area networks, and 100 long-distance networks—including service-based, joint defense, and intelligence computers and networks such as the Global Command and Control System, which supports distributed collaborative, worldwide planning for crisis and contingency operations, and the Joint Worldwide Intelligence Communication System, with more than 100 sites worldwide. DOD views information as a strategic resource vital to national security and information superiority as the foundation of its vision of modern warfare. It has concluded that IA is essential to DOD’s information superiority. DOD defines IA as “Information Operations that protect and defend information and information systems by ensuring their availability, integrity, authentication, confidentiality, and non-repudiation . . . includes providing for the restoration of information systems by incorporating protection, detection, and reaction capabilities.” In this context, availability is assured access by authorized users, integrity is protection from unauthorized change, authentication is verification of the originator, confidentiality is protection from unauthorized disclosure, and nonrepudiation is undeniable proof of participation. A 1997 DOD task force report acknowledged that the department requires substantial IA capabilities for its highly interconnected computing and communications environment, noting that “without information assurance, it is increasingly likely that our forces will fail to accomplish their mission.” Other policy and guidance documents also emphasize the critical role of IA in DOD’s mission. 
In October 1998, the Joint Doctrine for Information Operations identified IA as an essential component of the military’s defensive information operations. In February 2000, the DOD CIO’s annual report on IA identified it as the department’s second highest priority IT issue, following Year 2000 remediation. In March 2000, the Deputy Secretary of Defense issued a guidance and policy memorandum recognizing the pivotal role of global networking in departmental activities and requiring the use of IA safeguards and operational procedures for all of DOD. Defense operations rely increasingly on interconnected information systems, which results in sharing of security risks among all interconnected organizations. In this environment, an adversary need only find and penetrate a single poorly protected system and then use access to that system to penetrate other interconnected systems. Consequently, coordination of IA efforts across DOD is important to maintain adequate security throughout its systems and networks. Historically, the department’s information systems have also been beset by vulnerabilities. Reports by us and DOD document serious and pervasive deficiencies that can impair the military’s ability to (1) control physical and electronic access to its systems and data, (2) ensure that software is properly authorized, tested, and functioning, (3) limit employees’ ability to perform incompatible functions, and (4) resume operations in the event of a disaster. Numerous Defense functions, including weapons and supercomputer research, logistics, finance, procurement, personnel management, military health, and payroll have been adversely affected by system attacks and fraud. DOD, in turn, has acknowledged the need for improvements. The department’s IA challenges are heightened by the growing threat of Internet-based attacks. 
Intrusions into government information systems—including DOD’s—continue to escalate, in number and complexity, requiring better detection, faster damage containment, and more efficient reporting mechanisms. Furthermore, DOD recognizes that increasing availability of its systems to authorized users has also increased opportunities for unauthorized access, presenting the most serious threat to DOD information. In this environment, security incidents remain an ongoing problem for DOD. DOD has identified an even more fundamental challenge underlying these organizational and technological challenges—a shortage of qualified personnel to fill positions that manage and protect its information systems. Although poor planning of system procurements, downsizing of military and civilian personnel, and an increased emphasis on outsourcing have contributed to DOD’s IT personnel shortage, this shortage also reflects a broader problem in recruiting and retaining IT security professionals in both the public and private sectors, according to a DOD human resources study. In January 1998, the Deputy Secretary of Defense responded to these challenges by forming DIAP and assigning responsibility for its oversight to DOD’s CIO, who is also the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (ASD(C3I)). DIAP was established to meet DOD’s need for “integrated, comprehensive, and consistent Defense-wide IA practice,” and to develop DOD into “a model practitioner of IA” for the nation. In February 1999, DOD’s CIO established four critical departmentwide strategic IA goals:
Make IA an integral part of DOD mission readiness criteria.
Enhance DOD personnel IA awareness and capabilities.
Enhance DOD IA operational capabilities.
Establish an integrated DOD Security Management Infrastructure.
DIAP was intended to help meet these goals by planning, coordinating, integrating, and overseeing IA activities, and by supporting review and assessment of IA resource investments on a departmentwide basis. In this regard, DIAP became DOD’s official program for ensuring the continual integration and coherent execution of all IA functions, activities, and program resources. DIAP was to continually monitor and act as a facilitator for the execution of IA resources, which remained the responsibility of the commanders-in-chief, military services, and Defense agencies where the activities and programs reside. Responsibility for the creation and management of DIAP was assigned to the Director of Information Assurance, a position reporting to the CIO. The Director was designated DIAP’s program manager and was authorized a staff of representatives from DOD component organizations to support Defense-wide IA planning, programming, budgeting, and execution review. In addition to the DIAP staff, the Director of IA also maintains a staff dedicated to the Office of Infrastructure and Information Assurance (I&IA). As depicted in its management plan, the DIAP program structure also included the following individuals and organizations that contribute to achieving the department’s IA goals:
DOD CIO Council − monitors and coordinates IT investments;
National Information Security (INFOSEC) manager − assesses cyber threats and security posture for national security systems;
Defense Information Infrastructure adviser − plans, develops, and supports the Defense Information Infrastructure.
DIAP would also work with an Intelligence Community (IC) coordinator to ensure integration and compatibility of IA efforts. Appendix I provides a description of certain IA-related Defense organizations and their relationships with DIAP, based on the implementation plan. The implementation plan also established the general structure of the DIAP staff and assigned it a variety of responsibilities. 
The plan described expected DIAP staffing levels, with specific numbers of personnel to be provided by the Office of the Secretary of Defense, the Joint Staff, each of the military services, and several Defense agencies. As described in the implementation plan, DIAP staff was divided into two teams, the Functional Evaluation and Integration Team (FEIT) and the Program Development and Integration Team (PDIT). FEIT was assigned responsibility for development of Defense-wide IA performance goals, standards, metrics, and oversight of functional areas. The organization of FEIT reflects, in part, each of the four DOD IA goals corresponding to readiness assessments, human resources, the operational environment, and security management. The remaining FEIT functional areas address policy integration, architecture, acquisitions, and research and development. PDIT was assigned responsibility for oversight, coordination, and integration of the department’s IA resource programs through participation in DOD’s IA planning and budgeting processes, and was specifically charged with tasks such as categorizing program activities, developing departmentwide budgets, and preparing the CIO’s annual IA assessment. Liaison positions were established to coordinate DIAP activities with special communities whose interests span multiple functional areas. Appendix II lists DIAP staff responsibilities for each program area as outlined in the implementation plan. The objectives of our review were to (1) examine the progress and accomplishments of DIAP since its inception and (2) identify obstacles to further progress. To determine the progress and accomplishments of DIAP, we ascertained its mission, responsibilities, and organization through analysis of documents provided by DOD. 
We gathered and analyzed information on DIAP plans, activities, products, and accomplishments from the DIAP staff, the Office of the ASD(C3I) (OASD(C3I)), the Ballistic Missile Defense Organization, Defense Advanced Research Projects Agency, Defense Information Systems Agency (DISA), Defense Intelligence Agency, Defense Logistics Agency, Defense Security Service, National Imagery and Mapping Agency, National Reconnaissance Office, and National Security Agency (NSA). We selected these organizations primarily based on their roles in defense-related IA as documented in the 1999 DOD CIO Annual Information Assurance Report. We also reviewed DOD self-assessments, plans for departmentwide IA activities, and inspector general reports on DIAP and other departmentwide IA activities. We focused on DIAP’s accomplishments and plans most clearly tied to DOD’s IA goals, and thus did not compile a comprehensive inventory of all DIAP accomplishments, particularly those led by other Defense components. We identified interactions of the DIAP staff with each Defense organization and the impact of DIAP staff efforts as perceived by those organizations. We also reviewed IA plans, products, and accomplishments of groups outside of the DIAP staff and assessed the mechanisms used to integrate the activities of these groups with those of the DIAP staff. These outside groups included the Joint Staff, the IA Panel, the INFOSEC Research Council, and the Office of Infrastructure and Information Assurance in OASD(C3I). We did not attempt to determine the proportion of IA accomplishments attributable to the DIAP staff or to other organizations. We verified activities and events related to accomplishments where feasible but did not verify all claims of accomplishments.
To identify challenges to DIAP, we obtained information from DOD officials, staff members, contractors, and other federal government representatives that showed evidence of factors that hindered DIAP activities and the achievement of DIAP objectives. Finally, we compared DIAP’s management approach with characteristics of high-performing organizations. DOD’s IA readiness assessment goal states that “All DOD organizational elements shall operate and maintain their computer-based information functions, information systems and their supporting networks and resources at levels of IA consistent with the enterprise and network mission functions they perform.” Recognizing the importance of IA to department readiness, DIAP has drafted metrics for IA readiness assessment at a strategic, departmentwide level in the areas of people, operations, training, equipment and infrastructure, and processes. An example of an IA metric is the number of system outages caused by infrastructure failures during a fixed period. Joint force-level IA metrics were separately developed, approved, and issued by the Joint Staff for its own use and for use by the commanders-in-chief, military services, and combat support agencies. The DIAP staff plans further development of department readiness metrics, coordination between department and joint force metrics, and integration of the metrics into management processes. Although these metrics have been developed, systems and processes are not yet in place to provide department decisionmakers with data to assess the department’s IA readiness status. DOD plans indicate that department-level readiness reports will not be available before late 2002. DIAP personnel cited several factors that contribute to this shortfall. First, they said that DOD’s automated readiness reporting systems are limited in their current capability to capture IA-related inputs and that these systems could not be easily modified to provide that capability.
Second, they noted that processes for developing strategic department-level and joint force IA metrics have been largely independent of each other, presenting risks of unnecessarily burdensome reporting requirements and interpretation conflicts. Further, neither DIAP nor the Joint Staff has taken steps, such as testing the metrics on a specific program, to ensure that the metrics are appropriate. Therefore, data reported by components, services, and agencies may not provide a true picture of DOD’s readiness status. Without reliable reporting on IA, the Congress and the department lack important information with which to determine whether DOD is maintaining adequate levels of operational readiness. DIAP contributed to a 1999 joint study by the Undersecretary of Defense for Personnel and Readiness and ASD(C3I), which concluded that the weakest link in DOD’s IA is the people who use, administer, and manage its information systems and technologies. The study also identified a lack of information about the composition and activities of the department’s IT personnel as the key human resources issue affecting DOD IA and recommended department actions aimed at establishing a sustaining pool of skilled IA/IT professionals to meet the current and future technological needs of the department. The DIAP staff supported the development and coordination of a policy directive, issued in July 2000, which assigned DOD organizations to lead implementation of each recommendation. The DIAP staff plans to coordinate an action plan with responsible DOD organizations to address the new policy. According to DIAP officials, a key factor limiting progress in improving DOD’s human resources practices is that certain DOD components have not yet submitted training and certification plans to OASD(C3I). The DIAP staff also supported a study to identify needed policies and requirements for defense of DOD’s computer networks and then supported the development of DOD instructions intended to implement those policies.
DIAP also assisted in developing a Defense-wide policy that requires vulnerability notices issued by components to be coordinated with the Joint Task Force-Computer Network Defense to ensure consistent communications about vulnerabilities across the department. The DIAP staff plans to use information it has gathered about IA operations to develop a policy for certifying IA support facilities, policy and instructions for continuity of operations at IA support facilities, and a program structure for coordinating IA support groups across the department. The Joint Staff is developing guidance for operating such facilities. The DIAP staff expects to contribute enhancements to a Defense-wide IT database for IA-related system components. However, the department still lacks comprehensive operational policies and procedures that would provide consistency in IA monitoring and management across the department. For example, no departmentwide policy on the use of intrusion-detection systems has been established. The absence of such operational policies impairs DOD’s ability to realistically manage risks to its information and systems. Also, the DIAP staff has not addressed identification and implementation of the best IA tactics, techniques, and procedures in the operations of DOD components, as described in the DIAP strategic plan. DIAP officials said that unmet staffing expectations had prevented them from taking action on this objective. Public key infrastructure (PKI) technology is the foundation of DOD’s security management services, which provide confidence in secure operation of the Defense information infrastructure. Program management for DOD’s PKI initiatives rests with NSA, and DISA and NSA have established a partnership for developing and applying PKI throughout the Defense information infrastructure.
In support of this goal, DIAP has established processes to consistently budget and track component activities in implementing PKI technology associated with computer applications. DIAP helped to draft and coordinate the department’s guidance on adapting applications to use public key technology, which was issued in November 1999 and augmented more general PKI guidance that was issued in May 1999. Current DIAP activity focuses on working with components to ensure that adequate steps are being taken to plan and budget for applications capable of supporting DOD PKI policy and to establish a Defense-wide PKI budget. The DIAP staff has begun to maintain a list of successfully tested PKI applications and plans to issue an annual report of “enabled” systems, a mechanism that would support identification of duplicate testing across DOD organizations. In addition, some coordination of coalition-related PKI issues has been performed within the Office of I&IA. Future plans include overseeing and coordinating development of a Defense-wide key management infrastructure. While progress has been made on PKI, other security management technologies have not yet been planned or coordinated on a departmentwide basis. For example, DIAP staff have not addressed topics such as workstation security, virtual private networks, and security management tools. According to DIAP officials, resources have not been available to address these additional topics. Establishing a program baseline is useful as a way to define the activities, human resources, and funding required to meet performance-based goals, and can be used to facilitate effective program management and oversight. The DIAP staff has begun to build a program baseline that catalogs the full range of department efforts and resources in IA. 
The staff reviewed the largest element of the baseline—the Information Systems Security Program, managed by NSA—to understand its components, and established an IA Resources Team to address other component IA activities and programs that are not a part of that program. The staff also developed instructions, categorization methods, and an automated tool for tracking component IA funding requests for fiscal years 2002 through 2007. In addition, the staff compiled the annual report for the CIO on departmentwide IA efforts and participated in various departmental planning and budgeting activities. The DIAP staff has earmarked $1.2 million of OASD(C3I) funds for tasks that include budgeting, definition of the IA domain and mission, and development of an investment strategy and cost model for components’ IA-related resources. Although DIAP has developed mechanisms to define an IA program baseline, its work to date is incomplete. Specifically, DIAP’s efforts relied on program data provided by the Defense components that are neither complete nor consistent. For example, information on IA resources associated with embedded systems—such as computers that control the functions of airplanes or tanks—has not been gathered. Further, differing internal policies and procedures for structuring budgets among components have produced inconsistent information. For example, a portion of the budgets reported by components did not fit any of the 10 classifications used by DIAP. Three additional factors have contributed to the incomplete and inconsistent view of DOD’s IA resources. First, no budget or funding was specifically identified for DIAP at the time of its creation, and DIAP therefore remains dependent on discretionary funding from OASD(C3I) to support staff activities. Second, DIAP staff have not initiated the development of a detailed system of IA budget codes for identifying and comparing IA efforts and resources across DOD, as called for in the DIAP management plan.
While DIAP officials said they lacked the staff needed for this assignment, we also noted that other DOD officials disagreed on the need for these detailed IA budget codes. Third, DIAP has not yet integrated planning, programming, and budgeting data with the department’s acquisition management or requirements-generation systems to provide a comprehensive view of IA resources and funding priorities. A DIAP official stated that program staff have no plans to address this issue until DIAP achieves greater influence on DOD’s program management processes. Without the information that an IA baseline would provide, DOD remains limited in its ability to determine its IA expenditures and unmet resource needs, and therefore it is not positioned to effectively manage and oversee its attempts at improvement. Since its inception, DIAP has been involved in the development and integration of departmentwide IA policies. For example, the DIAP staff provided support by developing a pilot library of IA policy. The I&IA staff partially addressed the need for policy integration and evolution planning by performing a high-level analysis of existing policy to develop an IA policy framework and to identify gaps and issues. The Joint Staff’s Office of IA also partially addressed this area by developing a matrix summarizing IA documents applying only to the military services. The Joint Staff plans to continue updating military guidance with IA considerations. The DIAP staff plans to continue its support for policy development through participation in department IA working groups, and expects to expand the content and search capabilities of its policy tool, provide demonstrations and briefings, and distribute copies of the tool if adequate funding is provided for fiscal year 2001. 
The primary means for considering changes to IA policy within the department is now the IA Panel, which was formed to provide advice on IA policy to the Director of IA and the Military Communications-Electronics Board (MCEB)—a group of department-level executives responsible for providing guidance, direction, and coordination on communications and electronics matters for DOD components. The panel has addressed several areas of policy development, such as the use of mobile code and foreign national access to DOD’s unclassified network. Other groups such as the DIAP staff and I&IA staff contributed to IA policy development in areas such as computer network defense and the Global Information Grid by collaborating with a wide range of working groups and Defense organizations. In addition, staff in I&IA led the coordination efforts of the department’s IA Policy Working Group. Although progress has been made in selected areas of IA policy, representatives of the IA Panel, the DIAP staff, and the I&IA staff stated that they had not developed a strategy to ensure that the full scope of IA issues associated with DOD policies, directives, and guidance is being addressed. In addition, DIAP officials stated that they were not assessing the departmentwide implementation of IA policy, as assigned in the implementation plan, and had no plans to determine compliance with IA policies across DOD. DIAP has not fully addressed its assigned responsibilities in three other areas—architectural standards and system transformation, acquisition support and product development, and research and technology. A variety of other entities within the department were also involved in these areas. However, their work was not coordinated or integrated with other related DIAP activities. In the area of architectural standards and system transformation, DIAP was to ensure integration of IA technologies, products, and procedures through approaches such as enterprisewide standards and incremental improvement.
DIAP’s activities in IA architecture focused on participating in the IA Architecture Working Group formed in August 1999. The initial task of this group was to produce an IA architecture prototype based on the systems and operations of the United States Pacific Command. With DIAP staff participation, the group identified information exchange requirements for the command and developed an IA architectural framework to describe existing and future IA capabilities. The group expects to apply its architectural framework to additional Defense environments; however, no detailed plan has been developed. Although DIAP was initially involved in the working group’s activities, its continued participation is uncertain. The DIAP staff position for IA architecture has been vacant since November 1999, and the DIAP staff has not integrated this work into its other activities. Regarding acquisition support and product development, DIAP was to focus on development and implementation of guidance for department IA requirements, products, and technology trends. However, no milestones were established, and DIAP management told us that no significant progress had been made. For example, an effort to revise DOD directives to address IA-related acquisition was suspended because it could not be completed in time to be integrated with other upgrades to DOD policies. Plans for fiscal year 2001 focus on different issues, such as placing an IA advocate in each department group involved in IT acquisitions, increasing Defense program managers’ awareness of IA-related issues, and proposing improvements to DOD directives to address IA in acquisitions. In the area of research and technology, DIAP was tasked with leveraging existing research and development activities inside and outside DOD to ensure that they are consistent with the department’s mission needs and changes in IT.
Although DIAP staff initially took some actions to participate in IA research coordination activities, the program has since discontinued this participation and is no longer working toward this objective. Specifically, DIAP staff in the past participated in the INFOSEC Research Council, an affiliation of Defense research organizations that coordinates DOD efforts relating to IA research and development. The council identified a list of “hard problems” in IA research to aid in planning research and also developed a database of IA-related research programs. Since the departure of the DIAP staff member for IA research in February 2000, however, there has been little coordination between DIAP staff and the council, and no efforts have been undertaken to link existing IA research work to other areas of DIAP responsibility, such as policy development or acquisition, as called for in the DIAP implementation plan. Furthermore, the staff has no plans to coordinate or integrate IA research with other DOD technology management activities, such as forecasting and technology transfer, which are important in an environment of rapid technology change. DIAP’s progress has been hampered by several challenges in establishing an infrastructure for Defense-wide IA to support the department’s goals. Specifically, DOD has not yet applied an effective management framework for structuring, operating, and overseeing Defense-wide IA efforts consistent with the characteristics of high-performing organizations. Little evidence exists that the management practices associated with model organizations have been applied to DIAP, and DOD executives acknowledged that such practices were not in place. Moreover, DIAP has not been staffed as intended, and guidance and oversight activities have been weakened by a lack of continuity in key organizations responsible for those areas. Consequently, some functions assigned to DIAP have not been fulfilled.
Taken together, these challenges have limited DIAP accomplishments and impeded DOD’s ability to determine the effectiveness of its IA improvement efforts. Over the past decade, the Congress has established a framework designed to create and sustain high-performing organizations across the government. Our work in assessing federal agencies under this framework has consistently shown the need to build and strengthen their management through a disciplined implementation of management practices, such as those used by high-performing organizations:

- A clear mission and vision that is communicated by top leadership.
- A strategic planning process that yields results-oriented goals and measures.
- Organizational alignment to achieve goals.
- The use of sound financial and performance information to make decisions.
- The strategic use of technology to achieve goals.
- Effective management of human capital.

We identified concerns about DIAP in each of these six areas. A clear and consistent mission and vision of an organization’s path through change is essential to obtaining the strong, visible, and sustained commitment of top leadership. Communication of the common mission and vision throughout an organization ensures that program roles are understood and fulfilled. Differences in understanding of and commitment to an organization’s mission and vision can hamper the effectiveness of decision-making processes, management approaches, personnel development, and program integration. We found disagreement among DOD officials regarding DIAP’s mission and vision and, in some cases, a lack of support for the role of DIAP as outlined in its implementation plan. Officials representing several DOD components expressed a need for products and activities planned for DIAP, such as IA policy and training. They also stated that the current level of coordination and planning would not have occurred without DIAP and the visibility provided by that program.
However, other officials cited a lack of DOD leadership and support for DIAP and stated that individual components should continue to manage their own IA activities without DIAP involvement. Taken together, these views indicate that support for DIAP is not consistent across the department and that communication about DIAP’s mission and vision from DOD leaders has not been adequate. Results-oriented goals and quantifiable measures provide essential mechanisms for promoting a common view of what is to be accomplished and for assessing the progress of programs. DIAP was specifically charged with the development of Defense-wide IA performance goals, standards, and metrics in its eight functional areas, a central responsibility for performance-based management. DOD executives acknowledged the need for comprehensive performance goals and measures to manage DIAP and its staff, but also acknowledged that this approach is not yet being used. Departmental IA readiness metrics are under development, and performance goals and metrics have been drafted for the DIAP staff; however, both products require further development and are not yet suitable for assessing performance. Further, none of DOD’s IA annual reports, which are prepared by the DIAP staff, have presented data that show how DIAP’s activities have helped achieve the department’s IA goals. DOD officials have concluded that work on performance goals and measures cannot start before a baseline of IA resources is established. Yet progress in establishing a program baseline has been slow, as previously noted. As a result, it is unknown when departmental performance goals and measures will be completed or when DOD will be able to use them to conduct performance-based IA management. High-performing organizations find ways to integrate contributions from various efforts to support organizational processes and achieve expected results. 
Effective integration requires that contributors understand and are committed to their assigned responsibilities. Mechanisms for ensuring the accountability of contributors are also important for supporting organizational goals. As described in appendix I, responsibilities for achieving DIAP goals are dispersed among various organizations. In addition to its executive positions, advisory bodies, and coordination groups, DIAP has sponsored or participated in at least 39 IA-related working groups involving three distinct reporting chains (civilian defense, military, and intelligence). Yet Defense policy does not assign DOD components and their managers specific responsibilities with regard to DIAP and its groups, nor are mechanisms to enforce such responsibilities in place. Without specific definition of their responsibilities and accountability for their involvement with DIAP activities, Defense components have provided inconsistent support in areas such as assigning staff, responding to information requests, attending coordination meetings, and reporting plans and progress on DOD IA initiatives. A DIAP Program Execution Plan, as envisioned in the 1999 DOD CIO Annual Information Assurance Report, would clarify organizational responsibilities with regard to DIAP, but such a plan has not yet been developed. The DIAP staff itself cannot require DOD components to contribute to its activities or respond to its requests for information. Further, DIAP managers have no mechanism for ensuring that DOD organizations meet their commitments to provide staff to the program. In addition, DIAP has no dedicated budget and depends on discretionary funding provided by OASD(C3I). Changes in the purpose and constituents of organizations such as the IA Group, the Senior IA Steering Committee, and DOD’s CIO Council have also impeded the alignment of defense organizations with DIAP goals.
According to DOD officials, these organizations did not begin to address their responsibilities for guidance and oversight until their reconstitution as the IA Panel and the CIO Executive Board in late 1999 and early 2000. Clear and comprehensive definition of and accountability for the assignments for these groups and their interaction with other areas of DIAP are essential to ensuring alignment of DIAP groups and goals. Accurate, reliable, and timely data form the foundation for sound management decision-making. Obtaining quality data is dependent on the procedures used to verify and validate the data collected for performance assessment. Well-established data definitions and collection procedures are essential to building confidence in performance information. DOD officials were unable to provide information on the department’s total budget, expenditures, and departmental status for IA and could not estimate when that information would be available. This is due, in part, to the limited ability of the automated systems DOD uses for planning, programming, and budgeting; readiness reporting; and personnel classification to capture IA-related data, as described in the sections on accomplishments earlier in this report. According to DOD officials, problems have also surfaced with collection and verification of component programmatic, financial, and technical data for DIAP due to differences in interpreting terminology and instructions across the Defense community. According to DIAP officials, neither DOD leadership nor DIAP management has assessed the existing systems or procedural limitations for collecting IA data or developed a plan for systematically remedying them. Without timely, reliable, and useful financial and performance reporting, performance-based management for the department’s IA activities will be difficult. Performance-based management has been shown to work best when it is integrated into the culture and day-to-day activities of organizations.
Since IT figures prominently in DOD’s view of IA implementation—as shown by initiatives on PKI, intrusion detection, and vulnerability management—such technology presents DOD with opportunities to establish an electronic foundation for IA performance management. Although DIAP has supported the definition and planning of such technology initiatives, DIAP officials told us that they have not yet evaluated the corresponding opportunities for enhancing IA management processes and controls. Elements of a technology vision that would support performance-based management have surfaced in efforts such as the IA architecture framework, but DOD officials agreed that these elements have not yet been integrated at the department level. Planning for integration of IA technologies with IA performance management processes would help to ensure that IA decisions remain relevant to the evolving IA environment. Organizational success is greatly enhanced by making the right employees available and providing them with the training, tools, structures, incentives, and accountability to work effectively. DOD itself has recognized that the success of its IA initiatives depends on qualified personnel. This success also hinges on the availability and skills of personnel charged with DIAP management. Although DOD has attempted to improve its utilization of department-level IA staff by consolidating the IA Group and IA Panel, it has not yet taken steps to ensure that DIAP staffing levels consistently meet the department’s overall commitment. It also has not addressed several outstanding personnel issues that DIAP officials believe are important to the program’s effective operation. Specifically, formal position descriptions that would identify the knowledge, skills, and experience needed by DIAP staff have not yet been developed.
In addition, incentives have not been developed for staffing DIAP positions that are hard to fill because of perceived drawbacks in career advancement, nor have clear expectations for personnel performance been set using individual performance objectives and plans. Addressing such issues could provide better overall staffing of DIAP and improve the program’s performance. Although IA is a top DOD IT priority and DIAP is responsible for promoting consistent IA across the department, DOD has never fully staffed the program. Specifically, various DOD organizations have not fully and promptly met their commitments to provide DIAP with staff. It took 8 months for DIAP to acquire its initial staff, and it has not achieved the total of 30 to 34 personnel specified in its approved implementation plan. Instead, the greatest number of these positions filled at any one time has been 16. During our review, the DIAP staff consisted of 12 personnel primarily detailed from NSA and DISA. The Joint Staff, military services, and other Defense agencies were also directed through DIAP’s implementation plan to provide personnel to the DIAP staff office; however, they have not filled the positions identified in that plan, frequently citing their own personnel shortages as a constraint on assigning staff to departmental IA efforts. These staffing shortfalls have limited the ability of the DIAP staff to achieve its objectives and reach its planned full operational capability, and have impeded development of performance goals, measures, and plans that would further define the responsibilities and future efforts for DIAP. In addition to staffing shortfalls, continuing changes to department-level groups during the life of DIAP—specifically, the IA Group, the Senior DIAP Steering Group, and DOD’s CIO Council—have limited the guidance and oversight of DIAP’s initial work. 
In the fall of 1999, the IA Group formed by DOD's IA Management Plan was merged with a previously existing working group, the IA Panel. The groups were examining related issues and held substantially the same membership, creating scheduling conflicts that affected meeting attendance. The reconstituted IA Panel has incorporated the responsibilities of the IA Group into its mission and reports to both the MCEB and the Director of I&IA. The official charters of the IA Panel and MCEB reflecting these role changes had not been approved at the close of this review. Nevertheless, the IA Panel has provided a forum for information exchange among components on IA issues and was acknowledged during several of our interviews as an effective mechanism for department IA coordination. The disbanding of the Senior DIAP Steering Group has also contributed to DOD's limited guidance and oversight of DIAP. The steering group had been intended to provide strategic direction and guidance on IA issues to DIAP and later to the DOD CIO and CIO Council. At the close of our review, strategic direction and guidance for department IA were being developed by staff in the Office of I&IA. However, the draft IA Panel charter indicates that MCEB would share this role with the Director of IA. Accordingly, a revision to the MCEB charter was being coordinated to reflect its added IA responsibilities. Neither the DOD CIO Council nor its successor, the CIO Executive Board, has provided direction or guidance to DIAP to date, although both have discussed departmental IA issues. The DOD CIO Council, chartered 8 months before the formation of DIAP, was officially disbanded in March 2000 by the Deputy Secretary of Defense and replaced by the larger CIO Executive Board to provide a more decision-oriented approach toward department acquisition, management, use, and oversight of technology. 
DIAP management and implementation plans had called for increasing Defense agency representation on the CIO Council to improve its ability to address IT and IA across the department. However, the new CIO Executive Board has not adopted this approach. Instead, its membership does not include many DOD agencies, and thus these agencies do not participate in board decisions. While DIAP has addressed issues related to DOD's departmental IA goals, established new IA policy, improved communication across the department, and initiated mechanisms for monitoring IA efforts throughout DOD, many IA issues remain on which it has not taken action or has only begun to work. Given the high priority that DOD puts on IA, we believe the DIAP should have made progress on more of its implementation plan objectives by this time and gone further with the ones it has begun to address. Top-level DOD management has not carried out oversight commensurate with the program's high-priority role, nor has DIAP received the resources that were judged necessary by DOD when the program was initiated. DOD continues to face significant personnel, technical, and operational challenges in implementing an effective departmentwide IA program—something it cannot afford to ignore. A stronger management framework for DIAP consisting of adequate funding and oversight would establish the foundation needed to make greater progress in addressing such challenges. 
To significantly improve departmentwide management of IA, we recommend that the Secretary of Defense take the following actions: Commit senior department personnel to developing a DIAP Program Execution Plan that further defines and integrates DIAP-related roles and responsibilities, organizational relationships and accountability, ongoing efforts, and plans; establishes commitments to DIAP at the component, service, and agency levels; specifies measurable outcomes related to department operations for determining the success of DIAP and time frames for achieving them; and builds on existing DIAP accomplishments. Establish written objectives and agreements for departmentwide support of DIAP that provide for clear and realistic responsibilities, adequate personnel, expected outcomes, and mechanisms for monitoring and enforcing agreements. The agreements should specify the organizational positions and entities responsible for integrating DOD’s IA actions, managing IA-related aspects of DOD’s mission performance, and providing independent oversight and assessment of IA improvement. Establish a structured process led by the DOD CIO and CIO Executive Board for regularly monitoring the progress of DIAP toward achieving department goals and using these results to adjust IA program objectives and resources. Reinforce the department’s commitment to the high priority of IA by providing regular reporting to the Secretary of Defense on the progress, issues, and results of actions to establish IA readiness assessment across the department. 
We also recommend that the DOD CIO take the following actions: Define a program budget element or subelement that encompasses IA-related personnel and activities of OASD(CI). We also recommend that the OASD(CI) Director of Information Assurance take the following actions: Develop and implement a plan for instituting IA readiness metrics that addresses key obstacles that have hindered efforts to date through (1) enhancements to existing automated reporting systems to capture IA-related data, (2) improved coordination between proposed department-level and joint force IA metrics, and (3) validation of the proposed metrics to ensure that they produce useful information. Develop and implement an action plan for achieving the department's July 2000 IA human resources policy directive. Develop comprehensive operational policies and procedures to provide consistency in IA monitoring and management across the department. Expand security management technology planning to include issues beyond PKI, including workstation security, virtual private networks, and security management tools. Complete development of an IA program baseline, including establishing a detailed system of budget codes for identifying IA resources across the department and integrating planning, programming, and budgeting data with the department's acquisition management and requirements-generation systems. Develop and implement a strategy for establishing an integrated set of DOD IA policies, directives, and guidance, and establish a mechanism for determining whether DOD components are in compliance. Take steps to fully address assigned DIAP responsibilities in three other areas—architectural standards and system transformation, acquisition support and product development, and research and technology. In oral comments on a draft of this report, the Deputy Assistant Secretary of Defense for Security and Information Operations concurred with all of our recommendations except one. 
Regarding our draft recommendation that the DIAP Director develop a strategy for establishing an integrated set of IA policies, directives, and guidance, DOD stated that IA policy development was the responsibility of the IA Directorate within OASD(CI). For consistency, we also directed several other recommendations to the OASD(CI) IA Director. If you or your office have any questions on this report, please call me at (202) 512-3317. Major contributors to this report included John de Ferrari, Peggy Hegg, and Paula Moore. The table below identifies the key entities and officials that interact with DIAP and their associated responsibilities. The composition of each group is described. Figure 1 provides a conceptual view of DIAP interorganizational relationships. The Program Development and Integration Team (PDIT) is responsible for overseeing, coordinating, and integrating departmental information assurance (IA) resources. Specifically, PDIT is responsible for developing broad, easily understood, operationally oriented DIAP goals; developing input to the Defense planning guidance for DIAP; overseeing component participation in the Planning, Programming, and Budgeting System (PPBS); continually monitoring the IA plans, activities, and resource investments of the components and, in conjunction with the Critical Asset Assurance Program, assessing the adequacy of resources necessary to ensure the continual operational readiness of the Defense information infrastructure; preparing IA program guidance on behalf of the DOD Chief Information Officer (CIO); correlating responses to IA program queries from the Congress, the Undersecretary of Defense (Comptroller), and the Office of Planning, Analysis, and Evaluation; preparing and coordinating the DOD CIO's annual IA assessment; developing, coordinating, and supporting DOD-wide program and resource issues for submission by the Director of Information Assurance to the Senior DIAP Steering Group; and providing support to the Office of 
Planning, Analysis, and Evaluation as part of the Defense Resources Board process; reviewing and recommending, as appropriate, adjustments to the component program objective memorandums to support the integrated priority lists of the unified combatant commanders; preparing, in coordination with the Information Systems Security Program staff, the DIAP Congressional Justification Book; working with staff of the Undersecretary of Defense (Comptroller) and the Office of Planning, Analysis, and Evaluation to design and implement appropriate budget exhibits for collecting, monitoring, and reporting DIAP resources; and developing and coordinating input for the IA portion of the DOD Information Technology Strategic Plan. The Functional Evaluation and Integration Team (FEIT) is responsible for overseeing, coordinating, and integrating departmental IA activities and for providing a means to measure their effectiveness. Specifically, FEIT's staff is responsible for serving as principal evaluators for each of FEIT's functional areas; ensuring integration of their particular functions with the other functional areas; providing continual evaluation of component IA programs to ensure the Defense-wide application of FEIT's capabilities; ensuring that their functions are consistently implemented, integrated, efficient, and programmatically supported; developing solutions, such as program recommendations, when components fail to provide necessary resources for their IA programs; supporting presentations of DIAP issues to the Defense Resources Board and Joint Requirements Oversight Council; developing Defense-wide IA performance goals, standards, and metrics; and providing functional oversight and ensuring coherent integration throughout DOD. 
The eight functional areas of FEIT are readiness assessment, human resources, policy integration, security management, operations environment, architecture standards and transformation strategies, acquisition support and product development, and research and technology. Detailed descriptions of the functional areas are provided below. The readiness assessment area is responsible for providing data needed to accurately assess IA readiness and for focusing plans, programs, and decisions within PPBS. Specific responsibilities include addressing IA requirements identification, vulnerability and threat assessments, and Defense-wide IA-related standards and metrics for military readiness reporting. The human resources area is responsible for providing for sufficient, adequately trained and educated personnel to conduct IA functions throughout the department. Specific responsibilities include addressing human resources development; education, training, and awareness; and manpower. The policy integration area is responsible for providing consistent implementation of DOD IA-related policies throughout the department. Specific responsibilities include addressing national security, federal government, and IA policies and priorities. The security management area is responsible for providing for the incorporation of appropriate security services that allow and promote global interoperability while preserving legitimate law enforcement and national security purposes. Specific responsibilities include addressing key management, workstation security, virtual private networks, tools and security management applications, and development of an integrated security management infrastructure. 
The operational environment area is responsible for providing for the continual visibility of the department's and the intelligence community's IA operational readiness postures through appropriate monitoring of the enterprise information systems and through other intelligence and law enforcement sources. Specific responsibilities include addressing operational monitoring and network management, intrusion detection, incident response, defensive information operations, and attack sensing and warning. The architectural standards and system transformation area is responsible for providing for the integration of adequate IA technologies, products, and supporting procedures in the information technologies (IT), systems, and networks acquired by the department. Specific responsibilities include addressing enterprisewide standards and conformance, implementation and incremental improvement, modernization of legacy systems, survivability of common infrastructures, accreditation and readiness standards, multilevel security, and embedded IA capabilities. The acquisition support and product development area is responsible for providing for continual improvement in the department's IA readiness posture through disciplined, performance-based investments in security-enabled IT acquisitions. Specific responsibilities include addressing development of IA-related acquisition guidance; integration of mission need statements and operational requirements; review of departmental protection profiles; identification of technology, product, and acquisition trends and the development of strategies for dealing with those trends; and product evaluation, validation, and integration guidance. The research and technology area is responsible for providing for the research and development of IA technologies and techniques consistent with current and anticipated DOD mission needs and changes in IT. 
Specific responsibilities include addressing leveraging of DOD, government, commercial, and academic research; anticipation of new technologies; development of synchronized IA solutions; budget categories; and leveraging of existing research coordination activities.
The components, military services, and agencies of the Department of Defense (DOD) share many risks in their use of globally networked computer systems to perform operational missions. Many reports of vulnerabilities, organized intrusions, and theft related to department systems and networks have underscored weaknesses in DOD systems. In January 1998, DOD responded to these risks by announcing its plans for a Defense-wide Information Assurance Program to promote integrated, comprehensive, and consistent information assurance (IA) practices across the department. Although the program has addressed issues related to DOD's departmental IA goals, established new IA policy, improved communication across the department, and introduced mechanisms for monitoring IA efforts throughout DOD, many IA issues remain unaddressed. Given the high priority that DOD puts on IA, GAO believes the program should have made progress on more of its implementation plan objectives by this time and gone further with the ones it has begun to address. Top-level DOD management has not carried out oversight commensurate with the program's high-priority role and the program has not received the resources that were judged necessary by DOD when the program was initiated. DOD continues to face significant personnel, technical, and operational challenges in implementing an effective departmentwide IA program--something it cannot afford to ignore. A stronger management framework for the program consisting of adequate funding and oversight would establish the foundation needed to make greater progress in addressing such challenges.
FOIA establishes a legal right of access to government records and information on the basis of the principles of openness and accountability in government. Before the act (originally enacted in 1966), an individual seeking access to federal records had faced the burden of establishing a right to examine them. FOIA established a “right to know” standard for access, instead of a “need to know” standard, and shifted the burden of proof from the individual to the government agency seeking to deny access. FOIA provides the public with access to government information either through “affirmative agency disclosure”—publishing information in the Federal Register or on the Internet or making it available in reading rooms—or in response to public requests for disclosure. Public requests for disclosure of records are the best known type of FOIA disclosure. Any member of the public may request access to information held by federal agencies without showing a need or reason for seeking the information. Not all information held by the government is subject to FOIA. The act prescribes nine specific categories of information that are exempt from disclosure: for example, trade secrets and certain privileged commercial or financial information, certain personnel and medical files, and certain law enforcement records or information (see app. II for a complete list). In denying access to material, agencies may cite these exemptions. The act requires agencies to notify requesters of the reasons for any adverse determination (that is, a determination not to provide records) and grants requesters the right to appeal agency decisions to deny access. 
In addition, agencies are required to meet certain time frames for making key determinations: whether to comply with requests (20 business days from receipt of the request); responses to appeals of adverse determinations (20 business days from filing of the appeal); and whether to provide expedited processing of requests (10 calendar days from receipt of the request). The Congress did not establish a statutory deadline for making releasable records available, but instead required agencies to make them available promptly. Although the specific details of processes for handling FOIA requests vary among agencies, the major steps in handling a request are similar across the government. Agencies receive requests, usually in writing (although they may accept requests by telephone or electronically), which can come from any organization or member of the public. Once received, the request goes through several phases, which include initial processing, searching for and retrieving responsive records, preparing responsive records for release, approving the release of the records, and releasing the records to the requester. Figure 1 is an overview of the process, from the receipt of a request to the release of records. During the initial processing phase, a request is logged into the agency’s FOIA system, and a case file is started. The request is then reviewed to determine its scope, estimate fees, and provide an initial response to the requester (in general, this simply acknowledges receipt of the request). After this point, the FOIA staff begins its search to retrieve responsive records. This step may include searching for records from multiple locations and program offices. After potentially responsive records are located, the documents are reviewed to ensure that they are within the scope of the request. During the next two phases, the agency ensures that appropriate information is to be released under the provisions of the act. 
First, the agency reviews the responsive records to make any redactions based on the statutory exemptions. Once the exemption review is complete, the final set of responsive records is turned over to the FOIA office, which calculates appropriate fees, if applicable. Before release, the redacted responsive records are given a final review, possibly by the agency’s general counsel, and then a response letter is generated, summarizing the agency’s actions regarding the request. Finally, the responsive records are released to the requester. Some requests are relatively simple to process, such as requests for specific pieces of information that the requester sends directly to the appropriate office. Other requests may require more extensive processing, depending on their complexity, the volume of information involved, the requirement for the agency FOIA office to work with offices that have relevant subject-matter expertise to find and obtain information, the requirement for a FOIA officer to review and redact information in the responsive material, the requirement to communicate with the requester about the scope of the request, and the requirement to communicate with the requester about the fees that will be charged for fulfilling the request (or whether fees will be waived). Specific details of agency processes for handling requests vary, depending on the agency’s organizational structure and the complexity of the requests received. While some agencies centralize processing in one main office, other agencies have separate FOIA offices for each agency component and field office. Agencies also vary in how they allow requests to be made. Depending on the agency, requesters can submit requests by telephone, fax, letter, or e-mail or through the Internet. 
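As a rough illustration, the sequence of phases described above can be sketched as a simple ordered pipeline. The `FoiaRequest` class and phase names below are hypothetical, chosen only to mirror this description; they are not part of any actual agency tracking system.

```python
# Illustrative sketch of the FOIA request phases described above.
# The class and phase names are hypothetical, not an agency system.

PHASES = [
    "initial processing",    # log request, open case file, acknowledge receipt
    "search and retrieval",  # locate potentially responsive records
    "review and redaction",  # apply statutory exemptions, calculate fees
    "release approval",      # final review, e.g., by the general counsel
    "release",               # send response letter and records to requester
]

class FoiaRequest:
    def __init__(self, request_id):
        self.request_id = request_id
        self.phase_index = 0

    @property
    def phase(self):
        return PHASES[self.phase_index]

    def advance(self):
        # Move to the next phase; stay at "release" once the request is closed.
        if self.phase_index < len(PHASES) - 1:
            self.phase_index += 1
        return self.phase

req = FoiaRequest("2007-0001")
assert req.phase == "initial processing"
assert req.advance() == "search and retrieval"
```

In practice, as the surrounding text notes, a single request may loop within these phases (for example, repeated searches across multiple program offices), so a real workflow would be less strictly linear than this sketch.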
In addition, agencies may process requests in two ways, known as “multitrack” and “single track.” Multitrack processing involves dividing requests into two groups: (1) simple requests requiring relatively minimal review, which are placed in one processing track, and (2) more voluminous and complex requests, which are placed in another track. In contrast, single-track processing does not distinguish between simple and complex requests. With single-track processing, agencies process all requests on a “first-in, first-out” basis. Agencies can also process FOIA requests on an expedited basis when a requester has shown a compelling need for the information. As agencies process FOIA requests, they generally place them in one of four possible disposition categories: grants, partial grants, denials, and “not disclosed for other reasons.” These categories are defined as follows: Grants: Agency decisions to disclose all requested records in full. Partial grants: Agency decisions to withhold some records, in whole or in part, because such information was determined to fall within one or more exemptions. Denials: Agency decisions not to release any part of the requested records because all information in the records is determined to be exempt under one or more statutory exemptions. Not disclosed for other reasons: Agency decisions not to release requested information for any of a variety of reasons other than statutory exemptions. The categories and definitions of these “other” reasons for nondisclosure are shown in table 1. When a FOIA request is denied in full or in part or the requested records are not disclosed for other reasons, the requester is entitled to be told the reason for the denial, to appeal the denial, and to challenge it in court. In addition to FOIA, the Privacy Act of 1974 includes provisions granting individuals the right to gain access to and correct information about themselves held by federal agencies. 
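The difference between the multitrack and single-track schemes described above can be sketched with two small queue structures. This is only an illustrative model: the crude `is_complex` flag stands in for whatever criteria an agency actually uses to assign requests to tracks.

```python
from collections import deque

# Hypothetical sketch of multitrack vs. single-track FOIA processing,
# as described above. Names and the simple/complex flag are illustrative.

class MultitrackProcessor:
    """Simple and complex requests wait in separate FIFO tracks."""
    def __init__(self):
        self.tracks = {"simple": deque(), "complex": deque()}

    def receive(self, request_id, is_complex):
        track = "complex" if is_complex else "simple"
        self.tracks[track].append(request_id)

    def next_request(self, track):
        # Within each track, requests are still handled first-in, first-out.
        return self.tracks[track].popleft() if self.tracks[track] else None

class SingleTrackProcessor:
    """All requests share one first-in, first-out queue."""
    def __init__(self):
        self.queue = deque()

    def receive(self, request_id, is_complex=False):
        self.queue.append(request_id)

    def next_request(self):
        return self.queue.popleft() if self.queue else None

mt = MultitrackProcessor()
mt.receive("A", is_complex=True)
mt.receive("B", is_complex=False)
# A simple request is not stuck behind the earlier complex one:
assert mt.next_request("simple") == "B"
```

The sketch shows the practical effect of multitrack processing: a brief, easily answered request can be completed without waiting behind a voluminous one, whereas a single shared queue would make it wait its turn.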
Thus, the Privacy Act serves as a second major legal basis, in addition to FOIA, for the public to use in obtaining government information. The Privacy Act also places limitations on agencies’ collection, disclosure, and use of personal information. Although the two laws differ in scope, procedures in both FOIA and the Privacy Act permit individuals to seek access to records about themselves—known as “first-party” access. Depending on the individual circumstances, one law may allow broader access or more extensive procedural rights than the other, or access may be denied under one act and allowed under the other. Consequently, Justice’s Office of Information and Privacy issued guidance that it is “good policy for agencies to treat all first-party access requests as FOIA requests (as well as possibly Privacy Act requests), regardless of whether the FOIA is cited in a requester’s letter.” This guidance was intended to help ensure that requesters receive the fullest possible response to their inquiries, regardless of which law they cite. In addition, Justice guidance for the annual FOIA report directs agencies to include Privacy Act requests (that is, first-party requests) in the statistics reported. According to the guidance, “A Privacy Act request is a request for records concerning oneself; such requests are also treated as FOIA requests. (All requests for access to records, regardless of which law is cited by the requester, are included in this report.)” Although both FOIA and the Privacy Act can apply to first-party requests, these may not always be processed in the same way as described earlier for FOIA requests. In some cases, little review and redaction (see fig. 1) is required: for example, for a request for one’s own Social Security benefits records. 
In contrast, various degrees of review and redaction could be required for other types of first-party requests: for example, files on security background checks would require review and redaction before being provided to the person who was the subject of the investigation. Both OMB and the Department of Justice have roles in the implementation of FOIA. Under various statutes, including the Paperwork Reduction Act, OMB exercises broad authority for coordinating and administering various aspects of governmentwide information policy. FOIA specifically requires OMB to issue guidelines to “provide for a uniform schedule of fees for all agencies.” OMB issued this guidance in April 1987. The Department of Justice oversees agencies’ compliance with FOIA and is the primary source of policy guidance for agencies. Specifically, Justice’s requirements under the act are to make agencies’ annual FOIA reports available through a single electronic access point and notify the Congress as to their availability; in consultation with OMB, develop guidelines for the required annual agency reports; and submit an annual report on FOIA litigation and the efforts undertaken by Justice to encourage agency compliance. Within the Department of Justice, the Office of Information and Privacy has lead responsibility for providing guidance and support to federal agencies on FOIA issues. This office first issued guidelines for agency preparation and submission of annual reports in the spring of 1997. It also periodically issues additional guidance on annual reports and on compliance, provides training, and maintains a counselor service to provide expert, one-on-one assistance to agency FOIA staff. Further, the Office of Information and Privacy makes a variety of FOIA and Privacy Act resources available to agencies and the public via the Justice Web site and online bulletins (available at www.usdoj.gov/oip/index.html). 
In 1996, the Congress amended FOIA to provide for public access to information in an electronic format (among other purposes). These amendments, referred to as e-FOIA, also required that agencies submit a report to the Attorney General on or before February 1 of each year that covers the preceding fiscal year and includes information about agencies' FOIA operations. The following are examples of information that is to be included in these reports: the number of requests received, processed, and pending at the end of the fiscal year; the median number of days taken by the agency to process different types of requests; the number of determinations made by the agency not to disclose information and the reasons for not disclosing the information; disposition of administrative appeals by requesters; information on the costs associated with handling of FOIA requests; and full-time-equivalent staffing information. In addition to providing their annual reports to the Attorney General, agencies are to make them available to the public in electronic form. The Attorney General is required to make all agency reports available online at a single electronic access point and report to the Congress no later than April 1 of each year that these reports are available in electronic form. (This electronic access point is www.usdoj.gov/oip/04_6.html.) On December 14, 2005, the President issued Executive Order 13392, setting forth a policy of citizen-centered and results-oriented FOIA administration. Briefly, according to this policy, FOIA requesters are to receive courteous and appropriate services, including ways to learn about the status of their requests and the agency's response, and agencies are to provide ways for requesters and the public to learn about the FOIA process and publicly available agency records (such as those on Web sites). 
In addition, agency FOIA operations are to be results-oriented: that is, agencies are to process requests efficiently, achieve measurable improvements in FOIA processing (including reducing the backlog of overdue requests), and reform programs that do not produce appropriate results. To carry out this policy, the order required, among other things, that agency heads designate Chief FOIA Officers to oversee their FOIA programs. The Chief FOIA Officers were directed to conduct reviews of the agencies' FOIA operations and develop improvement plans to ensure that FOIA administration was in accordance with applicable law, as well as with the policy set forth in the order. By June 2006, agencies were to submit reports that included the results of their reviews and copies of their improvement plans. A major focus of the order was for agency plans to include specific activities that the agency would implement to eliminate or reduce any FOIA backlog of overdue requests: that is, requests for records that have not been responded to within the statutory time limit. Note that this backlog of overdue requests is distinct from the pending cases reported in the annual reports (those FOIA cases open at the end of the reporting period). For the annual reports, agencies are required by the statute to provide a count of FOIA requests that are still pending (that is, not yet closed) at the end of the reporting period. In response to this annual report requirement, agency tracking systems and processes have been geared to providing statistics on pending requests. Totals of pending cases would generally be larger than the backlog, as the term is used in the Executive Order, since they would include any requests received within the last 20 to 30 working days of the reporting period, which would not be overdue. The order also instructed the Attorney General to issue guidance on implementation of the order's requirements for agencies to conduct reviews and develop plans. 
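The distinction above between pending cases and the Executive Order's backlog amounts to a small calculation: a pending request counts toward the backlog only once it is older than the statutory period. The sketch below assumes the 20-business-day determination deadline mentioned earlier as the due date, and for simplicity it skips weekends but ignores federal holidays and the act's permitted extensions.

```python
from datetime import date, timedelta

# Sketch of the pending-vs-backlog distinction described above: a pending
# request is part of the backlog only once its 20-business-day statutory
# period has passed. Weekends are skipped; holidays and permitted
# extensions are ignored for simplicity.

def add_business_days(start, days):
    d = start
    while days > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            days -= 1
    return d

def backlog(pending_received_dates, as_of, statutory_days=20):
    """Count pending requests whose statutory deadline has already passed."""
    return sum(
        1 for received in pending_received_dates
        if add_business_days(received, statutory_days) < as_of
    )

as_of = date(2007, 9, 30)
pending = [date(2007, 6, 1), date(2007, 9, 20)]  # one old, one recent
assert backlog(pending, as_of) == 1  # only the June request is overdue
```

The example mirrors the text: both requests are pending on September 30, but only the older one is overdue, so the pending count (2) exceeds the backlog (1).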
In addition, the order instructed agencies to report on their progress in implementing their plans and meeting milestones as part of their annual reports for fiscal years 2006 and 2007; agencies were instructed to account in the annual report for any milestones missed and also to report them to the President's Management Council. In April 2006, the Department of Justice posted guidance on implementation of the order's requirements for FOIA reviews and improvement plans. This guidance suggested a number of areas of FOIA administration that agencies might consider when conducting their reviews and developing improvement plans. (Examples of some of these areas are automated tracking capabilities, automated processing, receiving/responding to requests electronically, forms of communication with requesters, and systems for handling referrals to other agencies.) To encourage consistency, the guidance also included a template for agencies to use to structure their plans and to report on their reviews and plans. The order's emphasis on backlog provided an incentive for agencies to focus on reducing overdue requests. With respect to backlog reduction, the guidance stated that agencies were not limited to time horizons in fiscal years 2006 and 2007 only. According to the guidance, if an agency believed that reform could enable it to process requests in a more efficient manner, thereby reducing its backlog, the agency should consider implementing these measures even though they might result in a short-term increase in backlog, as long as it was confident of a long-term benefit. At the same time, the guidance advised agencies to consider what they might do to counterbalance any anticipated short-term effect through other means of backlog reduction. Also included in this guidance was supplemental information on preparing the annual FOIA reports for fiscal years 2006 and 2007. 
According to the guidance, the annual reports for fiscal years 2006 and 2007 were to include an additional section on agencies’ progress in implementing their plans to improve their FOIA activities. The guidance provided a template for reporting progress and stated that, for the fiscal year 2006 report (due February 1, 2007), agencies should be able to report on progress for at least 7 months (i.e., from no later than June 14, 2006, to late January 2007). The improvement plans are posted on the Department of Justice Web site at www.usdoj.gov/oip/agency_improvement.html. In June 2007, the Attorney General submitted a report to the President on the progress that agencies made in the first months of implementing their FOIA improvement plans, as reported in the fiscal year 2006 annual reports of all 92 federal departments and agencies. The report provided an overall assessment of progress followed by a more detailed discussion of agency activities. According to this assessment, agencies made measurable progress in implementing the Executive Order during the first reporting period (about 7 months of activity under the FOIA improvement plans), with more than half the agencies (54) reporting successes in achieving all their milestones and goals on time. Discussing 25 key agencies, the report stated that 22 reported meaningful progress in FOIA administration, with 11 achieving all milestones on time; however, 3 reported one or more milestones for which they failed to achieve progress. The report also discussed areas where agencies reported deficiencies in meeting their early milestones or goals, and it made recommendations for improving FOIA implementation. In addition, it presented progress charts for the 25 key agencies showing whether they had achieved their planned goals and milestones. Also in June 2007, the Department of Justice posted guidance on providing updated status reports to the President’s Management Council. 
These status reports were required by August 1, 2007, from agencies that reported deficiencies in meeting the goals in their fiscal year 2006 annual FOIA reports. According to this guidance, such agencies were to report on their progress toward completing the corrective steps described in their annual reports. In the updated status reports, agencies were instructed to account for any missed milestone by identifying it and outlining the steps taken and to be taken to address the deficiency. In September 2007, the Department of Justice posted guidance to agencies on submitting backlog reduction goals for fiscal years 2008, 2009, and 2010. According to the guidance, any agency that had any request or appeal pending beyond the statutory time period at the end of fiscal year 2007 was to establish backlog reduction goals for fiscal years 2008, 2009, and 2010, and was to publish such goals on the agency’s Web site. Those goals were to be expressed in two ways. First, each agency was required to set a goal for the number of requests and the number of appeals that it planned to process during each fiscal year from 2008 through 2010. Second, each agency was required to set a goal for the number of requests and the number of appeals that the agency estimated would be pending beyond the statutory time period (i.e., backlog of overdue requests) at the end of each fiscal year from 2008 through 2010. In October 2007, Justice issued supplemental guidance on the section of the fiscal year 2007 annual FOIA reports in which agencies were to describe progress on their improvement plans and provide certain additional statistics. Among other things, this guidance required agencies to track their 10 oldest pending requests; to track the number of consultations received, processed, and pending; and to report this information in their fiscal year 2007 annual FOIA reports. It also provided templates for the progress reports and additional statistics. 
In 2001, in response to a congressional request, we prepared the first in a series of reports on the implementation of the 1996 amendments to FOIA, starting from fiscal year 1999. In these reviews, we examined the contents of the annual reports for 25 major agencies (shown in table 2). They include the 24 major agencies covered by the Chief Financial Officers Act, as well as the Central Intelligence Agency and, until 2003, the Federal Emergency Management Agency (FEMA). In 2003, the creation of DHS, which incorporated FEMA, shifted some FOIA requests among the agencies affected by the new department’s creation, but the same major component entities continued to be reflected in the 25 agencies. Our previous reports included descriptions of the status of reported FOIA implementation, including any trends revealed by comparison with earlier years. We noted general increases in requests received and processed, as well as growing numbers of pending requests carried over from year to year. In addition, our 2001 report disclosed that data quality issues limited the usefulness of agencies’ annual FOIA reports and that agencies had not provided online access to all the information required by the act as amended in 1996. We therefore recommended that the Attorney General direct the Department of Justice to improve the reliability of data in the agencies’ annual reports by providing guidance addressing the data quality issues we identified and by reviewing agencies’ report data for completeness and consistency. We further recommended that the Attorney General direct the department to enhance the public’s access to government records and information by encouraging agencies to make all required materials available electronically. In response, the Department of Justice issued supplemental guidance, addressed reporting requirements in its training programs, and continued reviewing agencies’ annual reports for data quality. 
Justice also worked with agencies to improve the quality of data in FOIA annual reports. Most recently, our March 2007 FOIA report discussed the fiscal year 2005 annual report data, as well as the agency improvement plans submitted in response to the Executive Order. Among other things, we observed that agencies showed great variations in the median times to process requests (less than 10 days for some agency components to more than 100 days at others) but that the ability to determine trends in processing times is limited because these times are reported in medians only, without averages (that is, arithmetical means) or ranges. Although medians have the advantage of providing representative numbers that are not skewed by a few outliers, it is not statistically possible to combine several medians to develop broader generalizations (as can be done with arithmetical means). We suggested that to improve the usefulness of the statistics in agency annual FOIA reports, the Congress consider amending the act to require agencies to report additional statistics on processing time, which at a minimum should include average times and ranges. The Openness Promotes Effectiveness in Our National Government Act (OPEN Government Act) of 2007, enacted December 31, 2007, as Public Law 110-175, included provisions expanding reporting requirements to include average and range information, along with median processing time statistics. Regarding the improvement plans, we reported in 2007 that the 25 agency plans mostly included goals and timetables addressing the areas of improvement emphasized by the Executive Order. We noted that almost all plans contained measurable goals and timetables for avoiding or reducing backlog. Although details of a few plans could be improved, all the plans focused on making measurable improvements and formed a reasonable basis for carrying out the goals of the Executive Order. 
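The statistical point above about medians versus means can be illustrated with a short sketch. The processing-time values below are hypothetical, not drawn from any agency's report:

```python
import statistics

# Hypothetical processing times (days) for two components of one agency
component_a = [5, 6, 7, 8, 120]   # mostly fast, one long-running outlier
component_b = [30, 35, 40, 45, 50]

# Component medians resist outliers, but no formula turns the two
# component medians (7 and 40) into the true agencywide median:
pooled_median = statistics.median(component_a + component_b)
print(pooled_median)  # 32.5 -- not recoverable from 7 and 40 alone

# Means, by contrast, combine exactly using the component counts:
n_a, n_b = len(component_a), len(component_b)
combined_mean = (statistics.mean(component_a) * n_a
                 + statistics.mean(component_b) * n_b) / (n_a + n_b)
print(combined_mean)  # equals the mean of the pooled data
```

This is why reporting only medians blocks governmentwide or agencywide aggregation: the component counts and means (or the pooled data themselves) are needed to produce a valid combined statistic.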
The data reported by 21 major agencies in annual FOIA reports from 2002 to 2006 reveal a number of general trends. (Data from four agencies are omitted, as discussed below.) Among these trends are increases in requests received, processed, and pending. Specifically, the public continued to submit more requests for information from the federal government through FOIA, and the numbers of requests processed also increased. In addition, the number of pending requests increased because of increases at DHS, which accounted for about half of all pending requests at the end of fiscal year 2006. However, the rate of increase in pending requests was less than in the previous year. Our statistical analysis omits data from the General Services Administration (GSA) and the Departments of Agriculture and Housing and Urban Development (HUD) because we did not have reasonable assurance that the data in their fiscal year 2006 FOIA annual reports were accurate and complete. We also omitted the Central Intelligence Agency, which did not provide information in response to our requests, so we could not assess its data reliability. The other three agencies did not provide evidence of internal controls that would provide reasonable assurance that FOIA data were recorded completely and correctly, or they acknowledged material limitations of the data. The accuracy of annual report data is important so that government FOIA operations can be monitored and understood by the Congress and the public. To provide reasonable assurance of accuracy, agencies rely on internal controls to minimize the risk that data are incomplete or incorrect. Specific examples of such controls include supervisory or other reviews of the quality of data entry, spot checks of selected records, software edit checks of data entered (such as prevention of duplicate entries), and other manual or automatic processes to detect data entry errors. 
We determined that GSA did not have adequate internal controls to provide reasonable assurance that the data in its fiscal year 2006 annual report were accurate and complete. Although about one-third of GSA FOIA requests were handled by FOIA staff at GSA headquarters, mechanisms had not been established to verify that data were entered correctly into the system that tracked FOIA requests. One staff person was responsible for entering data, but the data were not checked periodically to ensure that they were correct. Agency officials told us that errors could be caught if, for example, the GSA program office responding to a request observed a discrepancy when the request letter was transmitted to the program office. However, they acknowledged that for the fiscal year 2006 annual report, the FOIA office did not perform regular reviews or spot checks of the data to check for errors. Since the 2006 annual report was prepared, GSA has increased the staff at the headquarters FOIA office, and it has changed its approach to FOIA tracking by implementing a centralized tracking system for requests handled both by headquarters and by the GSA regional offices. According to officials, the centralized tracking system provides the agency with additional controls, but GSA had not established procedures for checks to ensure that information on requests was entered correctly at all stages. Until the agency establishes checks of data entered or other internal controls, such as periodic reviews, it will have reduced assurance that data are captured completely and accurately. Data from HUD are omitted because HUD officials told us that the fiscal year 2006 annual FOIA report statistics were not accurate. As part of its improvement plan implementation, HUD performed an organizational realignment in which FOIA processing functions were transferred to the Office of the Executive Secretariat. 
According to the Executive Secretary, after the realignment, the office found that many requests in the department’s FOIA tracking system were incorrectly recorded as open, although they had in fact been closed. According to this official, the department’s regional and field offices had not been consistently closing requests in the system, resulting in inaccuracies. In addition, not all field offices were using the tracking system, but were using spreadsheets and other means of tracking. According to HUD officials, they were taking actions to remedy these problems by working with the field offices to make sure that data were entered correctly and cases closed out properly. Also, in the department’s progress report on its improvement plan, HUD reported that it had selected and was acquiring a new automated FOIA tracking system. According to the department, in December 2007, it began implementing this system and training staff in its use, and all offices (including headquarters) would be required to use it. However, the implementation was not yet complete departmentwide. Further, although the department planned to develop policies and procedures to govern the use of the system, it had not yet done so; if well designed, these policies and procedures could help ensure that all FOIA offices, including regional and field offices, are using the tracking system consistently and that information is entered accurately and promptly. Until the department develops and establishes such policies and procedures, it will be unable to provide annual report data that are accurate and complete. We are also omitting data from the Department of Agriculture. In our March 2007 report on the FOIA annual reports for fiscal year 2005, we omitted data from the department’s annual FOIA report because a major component acknowledged material limitations in its data. 
Although most Agriculture components expressed confidence in their data, one component did not: the Farm Service Agency, which reportedly processed over 80 percent of the department’s total FOIA requests. According to this agency’s FOIA Officer, portions of the agency’s data in annual reports were not accurate or complete. We recommended that the department revise its FOIA improvement plan to include activities, goals, and milestones to improve data reliability for the Farm Service Agency and to monitor results. Since then, Agriculture has taken actions to improve the reliability of its data, such as issuing guidance and conducting training. The department is also developing an electronic tracking software system that it expects to improve the timeliness, accuracy, depth, and breadth of the department’s FOIA reporting. However, our reliability assessment was performed toward the end of fiscal year 2006, and our recommendation was made in March 2007, which was after the data for the annual report were assembled. Thus the department’s actions were not undertaken in time to affect the statistics for fiscal year 2006. If the department continues its improvement efforts, including establishing internal controls and processes to ensure that data are entered accurately and completely, it should increase its assurance that the FOIA data collected by the Farm Service Agency are complete and accurate. The numbers of FOIA requests received and processed continue to rise, but the rate of increase has flattened in recent years. Figure 2 shows total requests reported for the 21 agencies for fiscal years 2002 through 2006. This figure shows SSA’s share separately because of the large number of requests that the agency reported. 
As the figure shows, not only do SSA’s results dwarf those for all other agencies, they also reveal a major jump in requests received and processed from 2004 to 2005 (an increase of 92 percent), as well as a continued rise in 2006 (an increase of 8 percent). In 2005, SSA attributed the jump to an improvement in its method of counting requests and stated that, in previous years, these requests were undercounted. Because of the undercount in previous years and the high volume of SSA’s requests, including SSA’s statistics in governmentwide data would obscure year-to-year comparisons. Figure 3 presents statistics omitting SSA on a scale that allows a clearer view of the rate of increase in FOIA requests received and processed in the rest of the government. As this figure shows, when SSA’s numbers are excluded, the rate of increase is modest and has been flattening: For the whole period (fiscal years 2002 to 2006), requests received increased by about 23 percent, and requests processed increased by about 23 percent. Most of this rise occurred from fiscal years 2002 to 2003: about 18 percent, both for requests received and for requests processed. In contrast, in the last two fiscal years, the rise was much less: for requests received, the rise was roughly 3 percent from fiscal year 2004 to 2005 and another 1 percent to 2006; for requests processed, the rise was about 2 percent from fiscal year 2004 to 2005 and another 2 percent from fiscal year 2005 to 2006. 
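The year-to-year rates quoted above are ordinary percent changes. A minimal sketch follows, with hypothetical fiscal-year totals chosen only to mirror the roughly 18 percent rise (the actual figure-3 values are not reproduced here):

```python
def percent_change(earlier, later):
    """Percentage increase from one fiscal year's total to the next."""
    return 100.0 * (later - earlier) / earlier

# Hypothetical request totals for two adjacent fiscal years
fy_a, fy_b = 1_600_000, 1_888_000
rise = percent_change(fy_a, fy_b)
print(rise)  # 18.0
```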
Specifically with regard to SSA, in fiscal year 2006, as in the previous year, the vast majority of requests reported fall into a category SSA calls “simple requests handled by non-FOIA staff”; according to SSA, these are typically requests by individuals for access to their own records, as well as requests in which individuals consent for SSA to supply information about themselves to third parties (such as insurance and mortgage companies) so that they can receive housing assistance, mortgages, and disability insurance, among other things. SSA stated that these requests are handled by personnel in about 1,500 locations in SSA, including field and district offices and teleservice centers. Such requests are almost always granted, according to SSA, and most receive immediate responses. According to SSA officials, they report these requests because, as discussed earlier, Justice guidance instructs agencies to treat Privacy Act requests (requests for records concerning oneself) as FOIA requests and report them in the annual reports. SSA attributed the jump that occurred in fiscal year 2005 to an improvement in its method of counting these simple requests, which can be straightforwardly captured by its automated systems. For the past several years, these simple requests have accounted for the major portion of all SSA requests reported (see table 3). In fiscal year 2006, all but about 34,000 of SSA’s over 18 million requests fell into this category. From fiscal years 2002 to 2005, SSA’s FOIA reports attributed the increases in this category largely to better reporting, as well as actual increases in requests. Besides SSA, agencies reporting large numbers of requests received were the Departments of Defense, Health and Human Services, Homeland Security, Justice, the Treasury, and Veterans Affairs, as shown in table 4. 
The rest of the agencies combined account for only about 3 percent of the total requests received (if SSA’s simple requests handled by non-FOIA staff are excluded). Table 4 presents, in descending order of request totals, the numbers of requests received and percentages of the total (calculated with and without SSA’s statistics on simple requests handled by non-FOIA staff). Most FOIA requests in 2006 were granted in full, with relatively few being partially granted, denied, or not disclosed for other reasons (statistics are shown in table 5). This generalization holds with or without SSA’s inclusion. The percentage of requests granted in full was about 87 percent, which is about the same as in previous years. However, if SSA’s numbers are included, the proportion of grants dominates the other categories—raising this number from 87 percent of the total to 98 percent. This is to be expected, since SSA reports that it grants the great majority of its simple requests handled by non-FOIA staff, which make up the bulk of SSA’s statistics. Compared to 2005, there was a slight increase in the percentage of denials: from 0.75 percent to 1.18 percent of total requests received (excluding SSA); this is an increase of 10,860 denials. The percentage of requests not disclosed for other reasons (excluding SSA) decreased from 8.0 percent to 7.9 percent (a decrease of 2,644 requests not disclosed for other reasons). As shown in figure 4, three of the seven agencies that handled the largest numbers of requests (see table 4) also granted the largest percentages of requests in full: the Department of Health and Human Services (HHS), SSA, and the Department of Veterans Affairs (VA). Figure 4 shows, by agency, the disposition of requests processed: that is, whether a request was granted in full, partially granted, denied, or “not disclosed for other reasons” (see table 1 for a list of these reasons). 
As the figure shows, the numbers of fully granted requests varied widely among agencies in fiscal year 2006. Four agencies made full grants of requested records in over 80 percent of cases they processed—HHS, SSA, VA, and the Small Business Administration (SBA). This is a decrease from last year, when two other agencies—Energy and the Office of Personnel Management (OPM)—also made full grants of requested records in over 80 percent of the cases they processed. This year, Energy provided full grants 75 percent of the time, compared to 82 percent last year, and OPM provided full grants 67 percent of the time, compared to 81 percent last year. In contrast, several agencies tended not to make full grants. Of 21 agencies, 10 made full grants of requested records in less than 40 percent of their cases (compared to 12 in 2005). Four of these 10 agencies—the Agency for International Development (AID), DHS, the National Science Foundation (NSF), and State—made full grants in less than 20 percent of cases processed; in contrast, in 2005, only 2 agencies (NSF and State) fell into this category. This variance among agencies in the disposition of requests has been evident in prior years as well. In many cases, the variance can be accounted for by the types of requests that different agencies process. For example, as discussed earlier, SSA grants a very high proportion of requests because most of its requests are for personal records that are routinely made available to the individuals concerned (or to others with their consent). Similarly, VA routinely makes medical records available to individual veterans, and HHS also handles large numbers of Privacy Act requests. Such requests are generally granted in full. Other agencies, on the other hand, receive numerous requests whose responses must routinely be redacted to prevent disclosure of personal or other exempt information. 
For example, NSF reported in its fiscal year 2005 annual report that most of its requests (an estimated 90 percent) are for copies of funded grant proposals. The responsive documents are routinely redacted to remove personal information on individual principal investigators (such as salaries, home addresses, and so on), which results in high numbers of “partial grants” compared to “full grants.” For 2006, the reported time required to process requests (by track) varied considerably among agencies. Table 6 presents data on median processing times for fiscal year 2006. For agencies that reported processing times by component rather than for the agency as a whole, the table indicates the range of median times reported by the agency’s components. As the table shows, 10 agencies had components that reported processing simple requests in less than or equal to 10 days: these components are parts of DHS, Energy, the Interior, Justice, Labor, Transportation, Education, HHS, the National Aeronautics and Space Administration (NASA), and the Treasury. For each of these agencies, the lower value of the reported ranges is less than or equal to 10. On the other hand, median time to process simple requests is relatively long at seven organizations—components of DHS, Energy, Interior, Justice, Education, the Environmental Protection Agency (EPA), and NASA—as shown by median ranges whose upper-end values are greater than 100 days. For complex requests, the picture is similarly mixed. Components of six agencies (the Interior, Labor, HHS, NASA, the Treasury, and VA) reported processing complex requests quickly—with a median of less than 10 days. In contrast, other components of several agencies (DHS, Energy, Justice, Transportation, Education, EPA, HHS, the Nuclear Regulatory Commission, State, and the Treasury) reported relatively long median times to process complex requests—with median days greater than 100. 
Five agencies (AID, HHS, NSF, SBA, and SSA) reported using single-track processing. The median processing times for single-track processing varied from 7 days (at SBA) to 399 days (at an HHS component). The median processing times for requests pending also varied widely among the agencies. In 2006, eight agencies reported median processing times for pending requests greater than 1 year (defined as 251 business days) in length. These eight agencies are AID, DHS, Energy, the Interior, Justice, Education, HHS, and VA. One agency reported a component having a median processing time for its pending cases of 1,200 days, which is nearly 5 years. As we reported in our March 2007 report, our ability to make further generalizations about FOIA processing times is limited by the fact that, as required by the act, agencies report median processing times only and not, for example, arithmetic means (the usual meaning of “average” in everyday language). With only medians, it is not statistically possible to combine results from different agencies to develop broader generalizations, such as a governmentwide statistic based on all agency reports, statistics from sets of comparable agencies, or an agencywide statistic based on separate reports from all components of the agency. This was the basis for the suggestion in our previous report that the Congress consider amending the act to require agencies to report average times and ranges; this requirement is a provision of the OPEN Government Act, enacted December 31, 2007, as Public Law 110-175. In addition to the increase in numbers of requests processed at the 21 agencies, the number of pending cases—requests carried over from one year to the next—has increased. In 2002, pending requests at the 21 agencies were reported to number about 135,000, whereas in 2006, about 218,000—38 percent more—were reported. 
In fiscal year 2006, as shown in figure 5, the rate of increase flattened: the pending totals rose 12 percent from 2005, compared to a rise of 20 percent from fiscal year 2004 to 2005. These statistics include pending cases reported by SSA, because SSA’s pending cases do not include simple requests handled by non-FOIA staff (for which SSA does not track pending cases). As the figure shows, these pending cases do not change the governmentwide picture significantly. In contrast, since its establishment in 2003, DHS has accounted for a major and increasing portion of pending requests governmentwide, as shown in figure 6. Although 11 other agencies reported that their numbers of pending cases had increased since 2003, these increases were offset by decreases at other agencies, so that, as the figure shows, pending cases for the other 20 agencies combined are relatively stable. Within DHS, about 89 percent of pending cases are from Citizenship and Immigration Services (CIS), which receives the vast majority of all FOIA requests sent to the department—over 100,000 incoming requests annually. According to the department, most of CIS’s FOIA requests come from individuals and their representatives seeking information contained within the so-called Alien Files (A-files); this information may be used in applying for immigration benefits or in immigration proceedings, as well as for genealogy studies. One issue in relation to these files is that about 55 million hard-copy A-files are shared with Immigration and Customs Enforcement (ICE), which can lead to delays in locating, referring, and processing documents. According to the department, CIS and ICE have convened a working group to establish a streamlined approach to processing documents in the A-files, and they are also assessing digitization of the files, which would allow both components to electronically access any file. 
Table 7 shows the percentage of the total pending requests that each agency accounted for in fiscal year 2006; to provide an idea of the scale of these requests in comparison to the agency’s annual workload, the last column provides the number received. As the table shows, the six agencies that accounted for most requests received also accounted for the most requests pending, although DHS’s rank in the number of pending requests was higher than its rank in the number of received requests. The table also shows the great variation in the relationship between pending and received numbers for individual agencies. Another way to consider progress in reducing pending cases is through individual agency processing rates—that is, the number of requests that an agency processes relative to the number it receives. Agencies that process more requests than they receive will decrease the number of pending cases remaining at the end of a given year. From 2002 to 2006, individual agencies show mixed results in this regard. In figure 7, bars extending above the centerline at 100 percent indicate that an agency reported processing more requests than it received in that year, whereas bars dropping below the centerline indicate that it reported processing fewer than it received. In Justice’s guidance on the annual reports for fiscal year 2006, it directed agencies to include additional statistics as part of the new section on agencies’ progress implementing their improvement plans. These additional statistics included the time ranges of requests pending. Based on these statistics, figure 8 provides a timeline showing the oldest pending requests reported by each of the agencies. As seen in the figure, as of the end of calendar year 2006, the age of the oldest pending requests ranged from less than 1 year to about 18 years. Note that these requests were those reported in the fiscal year 2006 annual reports; they do not necessarily remain open. 
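The processing-rate measure described above, and its effect on year-end pending totals, reduces to simple arithmetic. A minimal sketch with hypothetical figures (not taken from any agency's report):

```python
def year_end_pending(pending_start, received, processed):
    """Open requests carried into the next year: carryover + intake - output."""
    return pending_start + received - processed

def processing_rate(received, processed):
    """Requests processed as a percentage of requests received."""
    return 100.0 * processed / received

# Hypothetical agency that processes more requests than it receives
# (a bar above the 100 percent centerline in figure 7)...
rate = processing_rate(received=10_000, processed=10_500)
# ...so its pending total shrinks over the year:
left = year_end_pending(pending_start=2_000, received=10_000,
                        processed=10_500)
print(rate, left)  # 105.0 percent; 1,500 still pending
```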
Agencies are required to meet certain time frames for determining whether to comply with requests: generally 20 business days from receipt of the request, although this time may be extended by 10 days in “unusual circumstances,” such as when requests involve a voluminous amount of records or require consultation with another agency. The Congress did not establish a statutory deadline for making releasable records available, but instead required agencies to make them available promptly. However, it is not uncommon for agencies to spend much more than the statutory 20 or 30 days to determine whether records can be released and to supply the records. According to our examination of selected case files and discussions with agency officials, the factors that contribute to requests remaining open include the following: Requests may involve large volumes of responsive records. Requests may require extensive review and consultations. Agencies may need to notify submitters of information before disclosure. Requests may be delayed until ongoing investigations are completed. Finally, at one agency component, requests more than 6 years old received low priority because the component believed that they could no longer be pursued in litigation. Requests may involve large volumes of responsive records. For requests that involve large volumes of responsive records, it may take significant time to assemble, review and redact, and duplicate records. In addition, processing of such requests may be delayed while requests received earlier are processed. In addressing such requests, agencies report that they contact requesters to determine whether a more limited or targeted selection of records will meet their needs, and that this can lead requesters to narrow their requests. In addition, agencies may use multitrack processing, putting voluminous or complex requests in a separate queue (which allows relatively simple requests to be processed more quickly). 
The selection of agencies’ oldest case files that we reviewed included several examples of voluminous requests. For example, at Defense, 5 of the 10 oldest cases remained open, in part because the responsive records were voluminous. For one request for records on the 1972 Strategic Arms Limitation Talks (SALT), Defense’s case file indicated that the request involved the review and coordination of 936 pages of top secret documents. A request for 1970 SALT records involved 613 pages of top secret documents. At HHS, 4 of the 10 oldest case files included references to voluminous records. For example, for a media request for background information on a report by the Centers for Medicare and Medicaid Services on an incident involving an error at a hospital, the centers indicated that the responsive documents were bulky, consisting of about 500 to 600 pages of records. At VA, all 10 oldest pending requests, dating from 2003 to 2005, were in the VA Office of the Inspector General. For one of these cases, the responsive records were described as voluminous (about 700 pages) and in need of review by legal staff; the requester was informed that because of this, the request would be placed in a queue with other voluminous requests requiring legal review. The request reached the head of the queue about 2 years later (May 2007), and three incremental releases were made from May to June 2007. (According to VA, this request was closed on August 17, 2007.) VA officials also described a more recent voluminous request involving a database containing more than 72,000 active files, with 431 data elements and over 4 million PDF files, each of which had to be reviewed for personal data. At Justice, two of the six oldest case files included letters explaining that because of the high volume of responsive records associated with each, the requests had been placed in the pending queue for processing. 
In one case, a letter indicated that the request had moved from number 91 in the pending queue in October 1990 to number 54 in November 1993; according to the letter, the processing delay was caused by large numbers of requests received, as well as the need to devote part of the office’s resources solely to processing documents in response to legislative requirements (the President John Fitzgerald Kennedy Assassination Records Collection Act of 1992). Requests may require extensive review and consultations. Reviewing requests may require coordination among many organizational components or consultation with other agencies. If responsive records are classified, they must be reviewed and redacted by personnel with appropriate clearances. Classified or intelligence issues may involve both internal reviews and external consultation when other agencies must review and approve the release of information gathered before a case can be closed. Agency officials stated that this coordination can be time-consuming, especially when it is not clear which agencies have ownership of the information. In other cases, proper review and redaction may require the involvement of subject-matter experts or others with specialized knowledge. Defense’s oldest case files, as described above, included several involving top secret documents, which required extensive reviews by multiple components before release. All but one of Defense’s 10 oldest cases showed evidence of consultations and coordination, in some cases with multiple organizations (these included Commerce, State, and the Central Intelligence Agency). In one of Justice’s six oldest cases, the responsive documents had been sent to external agencies for review of classified documentation to determine whether the material warranted continued declassification and whether it could be released. Agencies may need to notify submitters of information before disclosure.
Before releasing information under FOIA, federal agencies are generally required to provide predisclosure notifications to submitters of confidential commercial information. Officials stated that when agencies receive requests for proprietary, acquisition, or procurement records, the submitter notification process can delay closure of these cases. For example, NSF officials stated that most of their requests are for copies of funded grant proposals, which require FOIA staff to contact grantees for approval of the release. According to NSF, many of these grantees are academics who are not familiar with FOIA processes (including the submitter review process); NSF officials stated that locating the submitters and explaining the process can be time-consuming. Requests may be delayed until ongoing investigations are completed. According to our analysis of the 10 oldest case files from selected agencies, several old requests remained open because they sought documents regarding investigations that were still ongoing. At DHS and VA, most of the oldest FOIA requests remain open because the responsive records relate to ongoing investigations. Some examples of these requests follow: At DHS, 8 of the 10 oldest pending requests (dating from 2000 to 2001) were requests directed to the Coast Guard for documents on vessel incidents (such as collisions between vessels). In these cases, the Coast Guard responded to requesters that, as the incident was still under investigation, material might be protected from release as part of an ongoing law enforcement proceeding; the exemptions cited included 7(A), which exempts records or information compiled for law enforcement purposes to the extent that the production of such records could be expected to interfere with enforcement proceedings.
The requesters were offered the choice of receiving any material available at the time or authorizing an extension until the investigation was complete; in these cases, requesters asked that requests remain open, pending completion of the investigation. At VA, 7 of the 10 oldest pending requests (dating from 2003 to 2005) were for documents concerning investigations or reviews by the VA Office of the Inspector General. For example, one request was for records of an investigation of medical research activities at a VA medical center that was opened after employees reported that established research procedures were not being properly followed. Another was for records regarding complaints filed against a health care provider. In these and other cases, requesters were informed that the records were not yet releasable; VA cited exemptions, including 7(A). For these 7 requests, VA informed requesters that it would keep the requests open until the investigations were complete. The Director of Justice’s Office of Information and Privacy stated that the agencies could have simply closed the requests as denials under exemption 7(A) and any other applicable exemption (see app. II); she also noted that these requests remained open in accordance with the requesters’ wishes. Requests more than 6 years old may receive low priority. At one agency component, a set of old cases remained open because the agency believed they were no longer subject to litigation. In accordance with the general federal statute of limitations, lawsuits against the United States generally are barred 6 years after the right of action first accrues. At Justice’s Criminal Division, requests over 6 years old were given lower priority than requests for which litigation was deemed likely, and, in some instances, the original request processing files were lost.
That is, the Criminal Division was unable to locate the original processing files for 4 requests that it had identified as among its 10 oldest pending requests, dating from around the early 1990s. (Justice officials later informed us that one of these cases was in fact closed and had been incorrectly identified.) In August 2007, the Chief of the division’s FOIA/Privacy Act Unit (now retired) told us that he could not account for the loss, but that the unit had recently undergone a move and personnel changes, which might have been contributing factors. According to this official, the unit was creating replacement files from a tracking database and would then take action to close the requests. According to its former chief, the FOIA/Privacy Act Unit gave priority to avoiding litigation, since lawsuits can generate a significantly increased workload and slow down other FOIA processing. For example, according to this official, the Criminal Division was then processing over 30,000 documents as a result of a lawsuit. Criminal Division officials stated that because of the magnitude of this task, which was subject to court supervision and to potential sanctions if not completed in a timely manner, it was not practical to divert resources to process older cases. Although the goal of avoiding litigation is reasonable, the lack of priority given to the division’s oldest case files is inconsistent with the department’s expressed emphasis on what it termed “an emerging area of concern”—the longest-pending FOIA requests that agencies have on hand. According to Justice, its Office of Information and Privacy (which has lead responsibility for providing guidance and support to federal agencies on FOIA issues) established as a backlog-related goal the regular closure of the 10 oldest FOIA requests pending at eight senior leadership offices in the department, for which the office performs FOIA processing.
According to the department, this served as an example for other agencies, some of which followed suit. (Also, in October 2007, Justice issued new requirements for all agencies to report on their 10 oldest pending requests and 10 oldest pending consultations received from other agencies.) Further, although the statute of limitations may prevent requesters from filing suit after 6 years, a practice of not applying resources to cases older than this can increase the number of very old open requests with little prospect of being closed. In response to this issue, the Criminal Division’s FOIA/Privacy Act Unit began taking action to close the requests that had missing case files, according to its former chief. Also, in December 2007, the current deputy chief of the Criminal Division told us that an attorney had been detailed to work full time on the oldest cases (those dating from 2000 and before); according to this official, the Criminal Division had decreased its pending list by over 100 cases between September 14 and November 29, 2007. However, the division’s improvement plan did not address closing its oldest cases, and the division had not established time frames for doing so. Although the actions described by the deputy chief, if implemented appropriately, should help to address this issue, establishing goals and time frames would provide further assurance that attention to this issue is sustained. Without such goals and time frames, the Criminal Division risks perpetuating the tendency for the oldest requests to remain open indefinitely. Following the emphasis on backlog reduction in Executive Order 13392 and agency improvement plans, many agencies have shown progress in decreasing their backlogs of overdue requests as of September 2007.
Specifically, of 16 agencies we reviewed that were able to provide statistics, 9 decreased overdue or pending requests, 5 experienced increases, and 2 had no material change. However, the statistics provided by these agencies varied widely, representing a mix of overdue cases and total pending cases, as well as varying time frames. Further, 3 of the 21 agencies were unable to provide statistics supporting their backlog reduction efforts, and 1 provided statistics by component, which could not be aggregated to provide an agencywide result. (The remaining agency reported no backlog before or after implementing its plan.) Tracking and reporting statistics on overdue cases is not a requirement of the annual FOIA reports or of the Executive Order. Although both the Executive Order and Justice’s implementing guidance put a major emphasis on backlog reduction, agencies were given flexibility in developing goals and metrics that they considered most appropriate in light of their current FOIA operations and individual circumstances. As a result, agencies’ goals and metrics vary widely, and progress could not be assessed against a common metric. Justice’s most recent guidance directs agencies to set goals for reducing backlogs of overdue requests in future fiscal years, which could lead to the development of a consistent metric; however, it does not direct agencies to monitor and report overdue requests or to develop plans for meeting the new goals. Table 8 shows statistics provided by 16 agencies in response to our request for numbers of overdue requests before and after the implementation of the improvement plans. “After” statistics were as of September 14, 2007 (except as noted in the table). “Before” (baseline) statistics were generally as of about June 2006. As the table shows, a few agencies provided statistics on pending requests rather than overdue requests. 
As shown in table 8, since implementing their FOIA improvement plans, eight agencies showed significant decreases in their backlogs of overdue cases (AID, DHS, EPA, Interior, Labor, the Nuclear Regulatory Commission, the Treasury, and VA), and one (Energy) showed decreases in pending requests (Energy does not distinguish overdue requests from pending requests in its reduction efforts). Because of the large numbers of pending and overdue requests that it accounts for governmentwide, DHS’s reduction is particularly notable. According to its statistics, DHS succeeded in reducing backlog by 29 percent since June 2006, reducing its overdue requests by almost 30,000. DHS officials, including the Deputy Chief FOIA Officer, attributed the department’s success to activities performed as part of its improvement plan for both 2006 and 2007. For 2006, DHS’s improvement plan goals related to backlog reduction included hiring additional personnel, implementing operational improvements at CIS, meeting with an important requester group (the American Immigration Lawyers Association) to discuss file processing and customer service enhancements, and establishing a monitoring program under which all DHS components submitted weekly and monthly data to DHS’s Chief FOIA Officer. Officials also cited improvements to the department’s Web site to assist requesters in properly drafting and directing their requests; increased outreach and assistance by the central FOIA office to components; formalized employee training programs; and the launch of an Internet-based FOIA correspondence tracking and case management system for FOIA offices at DHS headquarters, which is to streamline the tracking of requests. In addition, DHS’s Deputy Chief FOIA Officer told us that she attributes the department’s progress to an increased focus on customer service and communication with requesters, as well as efforts to streamline FOIA processing using available technologies. 
Also notable is VA’s performance: it reported achieving a backlog reduction of over 80 percent from August 2006 to September 2007—a reduction of 9,550 requests. This is also significant to the overall backlog picture, as VA accounts for sizable portions of governmentwide requests received and pending (table 7 provides numbers for fiscal year 2006). VA attributed its backlog reduction to the improvements resulting from meeting the milestones that it had set in its improvement plan and the increased management emphasis on backlog reduction. VA’s 2006 improvement goals were to implement quarterly backlog snapshot reporting for all components; analyze these snapshots to identify offices with significant backlogs; identify the department’s 10 oldest FOIA requests and estimated completion dates; and conduct FOIA site visits. In its annual report, VA reported meeting these goals, as well as a number of goals for 2007, including analysis of backlog and solutions. Other agencies did not reduce their backlog of overdue or pending cases: two agencies with minor backlogs saw no material change, while five agencies saw significant increases. Commerce saw a minor increase of 7 overdue requests in its backlog, for a total of 188 (Commerce generally receives about 2,000 FOIA requests a year). According to Commerce’s Departmental FOIA Officer, the department received a large number of voluminous requests in the period before September 14, which she said was because of the election year, and many of these requests were requests for congressional correspondence and correspondence logs. According to this official, because such logs and correspondence involve other agencies, such requests require external consultation, which can be time-consuming. She also stated that the department’s backlog of overdue requests varies from day to day, and that by September 30, 2007, it had fallen to 159.
For agencies such as Commerce, whose processing rates have fluctuated closely around 100 percent (see fig. 7), such variations are not surprising. NSF’s pending requests rose from 5 to 7 from its fiscal year 2006 report to September 14, 2007; NSF processes around 250 to 350 requests per year. As these numbers show, NSF does not face major backlog issues. Further, when dealing with small numbers that can vary daily, a difference of 2 between snapshot dates does not provide a meaningful indication of a trend. SSA, State, and Defense saw rises in overdue requests, and NASA and Education saw rises in pending requests: Although SSA stated in its fiscal year 2006 annual report that it had reduced its backlog by 5 percent, it experienced a rise in its backlog of overdue requests by September 2007. SSA officials, including the Principal Public FOIA Liaison, attribute this rise to difficulties in migration to a new electronic FOIA tracking system, recent loss of experienced staff, and an increase in complex requests in 2007. According to these officials, this increase in requests occurred because of events that led to heightened public interest, such as SSA field office closures. Although SSA is expecting to lose more senior staff in 2007, officials hope to reduce backlog by streamlining operations and careful management. For example, according to agency FOIA officials, SSA is tasking junior-level personnel, including administrative and office automation staff, with the responsibility of responding to requests from frequent requesters seeking routine statistical data, thus allowing senior analysts to work on more complex requests. According to FOIA officials at State’s Office of Information Programs and Services, the department’s backlog of overdue requests increased because of conflicting demands on the staff that coordinate and process FOIA requests. For example, staff resources were redirected in response to a department priority placed on passport processing. 
State also reported that it experienced an increase in the number of congressional requests for documents, the expedited processing of which often competes for the same staff. According to the department, it plans to address its backlog challenges by efforts to better track and control the FOIA workload. Defense attributes most of the rise in its backlog of overdue requests to an unforeseen influx of requests received by the Defense Security Service (DSS). According to the chief of Defense’s Freedom of Information Policy Office, DSS accounts for over 10,000 of Defense’s 23,255 backlogged requests. This official told us that these requests are primarily Privacy Act requests for background investigation files from individuals who were investigated by DSS for security clearances over the past 15 years. According to this official, the DSS backlog increased because, among other things, personnel security investigation resources were transferred from DSS to OPM when OPM assumed the personnel security investigation mission in 2005; the increased security awareness within the country since the events of September 11, 2001, caused more employers of former Defense personnel to ask for security clearance information; and the war in Iraq caused a significant increase in the use of cleared contractors for critical positions. The chief of Defense’s Freedom of Information Policy Office stated that the department plans to modify its FOIA improvement plan to address this new backlog. NASA saw an increase of more than 100 percent in pending requests from February 2006 to September 14, 2007. 
According to NASA’s Chief FOIA Public Liaison Officer, during this past fiscal year, it experienced an unexpectedly large increase in FOIA workload because of high-visibility incidents that led the public and media to increase their FOIA requests, such as an incident involving an astronaut accused of attempted murder, foam-related issues with the shuttle tanks, and contracts for NASA’s new exploration vehicle. Most of these requests involve information that is being considered under civil and criminal proceedings, operational safety reviews, and internal controls; as a result, according to this official, they required extensive legal reviews concerning the initial release determinations. Education experienced a 44 percent increase in pending requests from June 2006 to September 14, 2007. Education’s improvement plan goals included closing its 10 oldest requests by January 2007, as well as 10 percent of 480 requests that it identified as pending as of June 2006. In its annual report, the department stated that it exceeded its 10 percent goal, but that it did not close its 10 oldest requests because resources had been reallocated to address other unplanned FOIA priorities and workload. In addition, the Director of Regulatory Information Management Services in the department’s Office of Management told us that the department has experienced an increase in pending/backlog FOIA requests because of the growing number of FOIA requests seeking responsive documents of a cross-cutting nature, which require substantial time and attention from senior personnel. Although the statistics provided by the 16 agencies indicate that many agencies have made reductions, the governmentwide picture is not clear because the types of statistics varied widely, representing both overdue and pending cases and varying time frames.
Further, 3 of the 21 agencies were unable to provide statistics supporting their backlog reduction efforts, and 1 provided statistics by component, which could not be aggregated to provide an agencywide result. Table 9 shows the variations in the dates of the baseline statistics for the agencies. For agencies that provided the number of their overdue cases, the dates generally depended on when the agencies first began to collect such statistics. Some agencies had collected preimplementation backlog numbers as a baseline for their improvement plans, and others planned to determine backlog of overdue cases as part of the implementation of their plans. For agencies that provided pending statistics, the dates generally depended on the systems and processes used to develop the statistics. Some agencies provided statistics on backlog of overdue requests, some provided numbers of pending requests, and some provided a combination. For the four agencies providing only pending statistics, the actual backlogs of overdue requests would probably be lower, since overdue cases are a subset of pending cases. Those providing pending statistics did so because their systems were not set up to track overdue requests, because they chose not to distinguish them, or both. For example, according to NASA, its current in-house FOIA database is designed to report statistics only on open requests and does not distinguish those that are over the statutory limit (20 or, in some cases, 30 days). Therefore, NASA provided us numbers pertaining to all open requests. On the other hand, Education and Energy chose not to distinguish between pending cases and those over the statutory limit. Energy explained this decision on the grounds that it ensured that all cases were given the same priority and that new cases would receive just as efficient a response as old ones.
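The distinction drawn above—overdue (backlogged) requests as a subset of pending requests—can be sketched in a few lines of code. The data model and function names below are illustrative only, not any agency’s actual tracking schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class Request:
    request_id: str
    due_date: date                    # statutory deadline (generally 20 or 30 days out)
    closed_on: Optional[date] = None  # None means the request is still open

def pending(requests: List[Request], as_of: date) -> List[Request]:
    """Requests still open on the snapshot date."""
    return [r for r in requests if r.closed_on is None or r.closed_on > as_of]

def overdue(requests: List[Request], as_of: date) -> List[Request]:
    """Backlog in the Executive Order's sense: the subset of pending
    requests whose statutory deadline has already passed."""
    return [r for r in pending(requests, as_of) if r.due_date < as_of]
```

Because the overdue set is filtered from the pending set, an overdue count can never exceed the corresponding pending count—which is why backlogs reported as pending totals likely overstate the number of overdue requests.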
Two agencies (Justice and Defense) provided a mix of pending and overdue statistics: Justice reported by components, which provided a mix of pending and overdue statistics, as well as baselines associated with dates ranging from September 2005 to October 2006. Because of this mix, the statistics could not be aggregated and directly compared to give a meaningful departmentwide result. However, of 28 components, 13 reported decreases in overdue or pending cases, 6 reported increases, 7 reported no overdue requests, 1 was a new component that had no preimplementation history, and 1 component did not provide statistics for the time frames requested. According to Defense, it had not previously tracked backlog in the sense of the Executive Order (that is, overdue requests), and it planned to use the September 2007 statistics collected for us as a baseline for future tracking. Similarly, the Department of the Interior could not provide us with a preimplementation baseline because it did not track its backlog of overdue requests at that time. Interior told us that it has now modified its tracking system to allow it to monitor overdue cases in real time. Table 10 shows the four agencies for which information was not provided or was not sufficient for a clear conclusion. (The remaining agency, SBA, did not report a backlog for either June 2006 or September 2007.) Three agencies did not provide any statistics. Transportation and HHS, both of which have decentralized FOIA programs, told us that collecting and providing such statistics was not feasible. Transportation stated that it would have been extremely burdensome to do so because its operating administrations are set up to capture open requests and not overdue requests. Similarly, HHS officials told us that the decentralized nature of the department’s FOIA operations and the manual processes that are used to compile statistics made it impractical to compile the requested data. 
One agency, OPM, did not provide a baseline statistic. It reported 152 overdue requests as of September 2007, but without a baseline, no conclusion on its progress is possible. According to OPM, because it did not establish a numerical goal for backlog reduction (its goal was to eliminate its backlog of overdue requests), it did not record the number of overdue requests as a baseline before implementing its improvement plan. A major reason for this variation in statistics is that agencies did not necessarily have systems or processes to record backlog in the sense of the Executive Order (requests for records that have not been responded to within the statutory time limit); instead, their systems or processes were based on recording statistics required for the annual reports, which include a count of open requests pending at the end of the reporting period but do not include backlog of overdue requests. This challenge is compounded for agencies with highly decentralized programs or manual processes, for which assembling the statistics, even if they were available, is a significant task. In addition, the goals that agencies set regarding backlog reduction varied widely. In our March 2007 report, we noted that almost all plans contained measurable goals and timetables for avoiding or reducing backlog. However, the goals concentrated on a wide variety of targets and metrics. For example, some goals and milestones were focused on activities that could be expected to reduce backlog by contributing to efficiency, such as conducting reviews, setting up monitoring mechanisms, hiring staff, conducting training, and making other process improvements. 
Others were numerical goals aimed at particular metrics, such as reducing processing time; completing a certain percentage of requests within 20 days; reducing the number of some subset of requests (such as the 10 oldest cases, those over a year old, cases opened before a particular date, or cases at particular components); or reducing the number of pending or overdue requests by a certain percentage, to a certain number, or to a certain proportion of requests received per year. The goals also covered a variety of time frames, so that not all agencies set numerical goals for the first reporting period (which ended about 7 months after they began implementing their improvement plans), but instead set only process goals. The reason for this variety of goals and milestones is that Justice’s guidance on implementing the Executive Order gave agencies broad flexibility in designing their plans. This guidance emphasized that identifying ways to eliminate or reduce backlog should be a major underpinning of the implementation plans of all agencies that had backlogs. However, the guidance allowed agencies to develop goals that they considered most appropriate in light of their current FOIA operations; it did not prescribe any particular metric for all agencies to use. According to the Director of Justice’s Office of Information and Privacy, the guidance was intended to provide flexibility to agencies in developing appropriate measurements that best fit their individual circumstances. As a result, the goals and milestones set by agencies included a wide variety of different aims and measures. In recent guidance issued to implement the Attorney General’s recommendations for improving FOIA implementation, Justice directed agencies to develop backlog reduction goals for fiscal years 2008 to 2010. 
The guidance directs agencies to estimate the number of requests they expect to receive during each fiscal year and to set goals both for the number of requests they intend to process and for the number of requests pending beyond the statutory limit (backlog) at the end of each fiscal year. According to the Director of Justice’s Office of Information and Privacy, this guidance was aimed at ensuring the continuation of the improvement process begun by the Executive Order. This guidance continues the flexible approach set in earlier guidance, in that it gives agencies freedom to set goals that they consider appropriate and realistic to their own circumstances. The Director of the Office of Information and Privacy also stated that by directing agencies to establish these goals, the office intended to establish a core definition for what is being tracked and to encourage agencies to begin focusing on this metric. However, the guidance does not direct agencies to modify their existing improvement plans or to otherwise develop strategies, plans, or milestones to achieve these new goals, which are in addition to the specific goals set in their improvement plans. The guidance also does not direct agencies to track and report the actual number of requests pending beyond the statutory limit. Without such planning and tracking, agencies may face challenges in achieving the reductions envisioned. Neither the public nor the agencies could effectively monitor progress unless agencies put in place processes and systems that allow them to track and report their backlogs of overdue requests. The annual FOIA reports continue to provide valuable information about the public’s use of this important tool to obtain information about the operations and decisions of the federal government. However, the value of this information depends on its accuracy. In some cases, agencies were not able to provide assurance that their information was accurate and complete. 
It is important for agencies to ensure that they have appropriate procedures and internal controls, so that agencies and the public have reasonable assurance that FOIA data are reliable. Some of the challenges that agencies face in processing FOIA requests include the need to review and redact sometimes large volumes of responsive records, to consult with other agencies or confer with multiple organizations, and to provide predisclosure notifications to information submitters. These practical challenges provide some insight into the reasons why backlogs can develop and grow, as well as an appreciation of the need for sustained attention to ensure that backlogs do not become unmanageable. For example, in one agency component, the pressure to avoid litigation, while ensuring that some newer requests were responded to promptly, led to a situation in which very old cases may remain open indefinitely. Establishing goals and time frames to close such cases could help avoid this result. The challenge to agency management is to determine how to apply finite resources to respond to the multiple and sometimes competing demands placed on their FOIA programs. The progress that many agencies have made in reducing backlog suggests that the development and implementation of the FOIA improvement plans have had a positive effect. However, in the absence of consistent statistics on overdue cases, it is not possible to make a full assessment of governmentwide progress in this area. Justice’s latest guidance on setting backlog reduction goals is a step toward developing such statistics, although it does not explicitly ask agencies to track and report them. However, on the principle that “what gets measured gets managed,” the chances of agency success in achieving reductions could be increased if they monitor and report statistics on their backlog of overdue requests, as well as develop plans for achieving their goals. 
Such reporting would further the aim of the statute and the Executive Order to inform citizens about the operations of their government and the FOIA program in particular. To help ensure that FOIA data in the annual reports are reliable, we are recommending that the Administrator of General Services ensure that appropriate internal controls are put in place to improve the accuracy and reliability of FOIA data, including processes, such as checks and reviews, to verify that required data are entered correctly. To help ensure that FOIA data are reliable, we are recommending that the Secretary of Housing and Urban Development ensure that appropriate policies and procedures are put in place to improve the accuracy and reliability of FOIA data, including procedures to ensure that all FOIA offices use tracking systems consistently and that information is entered accurately and promptly. We previously made a recommendation to the Department of Agriculture regarding the reliability of FOIA data at the Farm Service Agency; we are making no further recommendations at this time because the department has improvement efforts ongoing that, if implemented effectively, should help ensure that required data are entered correctly. We are also recommending that the Attorney General take the following actions: To help ensure that its oldest requests receive appropriate attention, direct the Criminal Division to establish goals and time frames for closing its oldest requests, including those over 6 years old. To help agencies achieve the backlog reduction goals planned for fiscal years 2008, 2009, and 2010 and to ensure that comparable statistics on backlog are available governmentwide, direct the Office of Information and Privacy to provide additional guidance to agencies on (1) developing plans or modifying existing plans to achieve these goals and (2) tracking and reporting backlog. 
We provided a draft of this report to OMB and the 24 agencies included in our analysis for review and comment. All generally agreed with our assessment and recommendations or had no comment. Seven agencies provided written comments: AID, Energy, EPA, GSA, Homeland Security, Justice, and OPM (printed in apps. III through IX). The Department of Veterans Affairs provided comments by e-mail. In addition, the Departments of Commerce, Defense, the Interior, Justice, and State provided technical comments by e-mail or letter, which we incorporated as appropriate. In written comments from the Department of Justice, the Director of Justice’s Office of Information and Privacy provided additional information on its planned actions related to our recommendations (see app. VIII), and in later contacts, the department confirmed that it generally agreed with our findings and recommendations. The Director described actions that the department has taken to help agencies achieve the backlog reduction goals planned for fiscal years 2008, 2009, and 2010. The Director also stated that Justice intends to issue further guidance to agencies, which will encourage agencies both to ensure appropriate planning to meet their backlog reduction goals and to reduce the age of their oldest requests, as well as provide additional requirements for reporting on backlogged requests. In addition, the Director provided information on actions that the Criminal Division has taken to ensure that the oldest requests receive appropriate attention. She stated that her office has been advised by the FOIA Office of the Criminal Division that it has established goals and time frames for closing its oldest requests and that the likelihood of litigation is no longer a consideration for prioritizing requests older than 6 years (see app. VIII). 
The Administrator of General Services concurred with our findings and recommendations and stated that the administration has developed and implemented an automated tracking system, providing it with internal control of the data. In addition, the Administrator stated that GSA had increased the FOIA staff, resulting in more checks and reviews to verify that data are entered correctly (see app. VI). Three agencies provided written comments describing additional actions taken in regard to overdue requests:

- The Director of the Department of Homeland Security's GAO/OIG Liaison Office concurred with our findings and recommendations and described actions taken by the department to continue to decrease the number of overdue requests, actions taken by CIS to expedite FOIA processing, and the priority given to departmentwide guidance (see app. VII).
- The Assistant Administrator and Chief Information Officer of the Environmental Protection Agency described actions that the agency has taken to ensure that it continues to decrease the number of its overdue requests (see app. V).
- The Director of the Office of Personnel Management stated that the office did not dispute our statement that it had not established a backlog baseline and added that, since the audit was completed, it has established backlog reduction goals and determined a baseline for overdue requests (see app. IX).

Two agencies provided written comments agreeing with the information presented on their FOIA programs:

- For the Department of Energy, the Director of the Office of Management/Chief Freedom of Information Officer provided comments (see app. IV).
- For the Agency for International Development, the Assistant Administrator of the Bureau for Management provided comments (see app. III).

Finally, the GAO Liaison of the Department of Veterans Affairs provided e-mail comments agreeing with the information presented on the department's FOIA program.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies of this report to the Attorney General, the Director of the Office of Management and Budget, and the heads of departments and agencies we reviewed. Copies will be made available to others on request. In addition, this report will be available at no charge on our Web site at www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-6240 or koontzl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix X. Our objectives were to (1) determine the status of agencies’ processing of Freedom of Information Act (FOIA) requests and any trends that can be seen, (2) describe factors that contribute to FOIA requests remaining open beyond the statutory limits, and (3) determine to what extent agencies have made progress in addressing their backlogs of overdue FOIA requests since implementing their improvement plans. To determine the status of agencies’ processing of FOIA requests and any trends, we analyzed annual report data for fiscal years 2002 through 2006. Our intended scope was the 24 agencies covered by the Chief Financial Officers Act, plus the Central Intelligence Agency (herein we refer to this scope as governmentwide). To gauge agencies’ progress in processing requests, we analyzed the workload data (from fiscal year 2002 through 2006) included in the 25 agencies’ annual FOIA reports to assess trends in volume of requests received and processed, median processing times, and the number of pending cases. All agency workload data were self-reported in annual reports submitted to the Attorney General. 
To provide assurance that the data reported in the annual reports were reliable, we interviewed officials from selected agencies and assessed the internal controls that agencies had in place for ensuring that their data were complete and accurate. Our strategy for assessing data reliability was to assess agencies on a 3-year rotational basis. In both fiscal year 2006 and fiscal year 2007, we selected the Social Security Administration and the Department of Veterans Affairs for assessment because they processed a majority of the requests governmentwide, along with eight additional agencies each year. To ensure that we selected agencies of varying size, we ordered the remaining agencies according to the number of requests they received (from smallest to largest) and divided the resulting list into sets of three; we assessed the first member of each set last year and the second of each set this year. This year, in addition to the Social Security Administration and the Department of Veterans Affairs, the following agencies were selected for assessment: the Departments of Homeland Security, Housing and Urban Development, Justice, and Transportation, as well as the Agency for International Development, the Central Intelligence Agency, the Environmental Protection Agency, and the General Services Administration. We also chose to revisit the Department of Agriculture, which we assessed last year, because we had determined that we could not be assured that data from a component, the Farm Service Agency, were accurate and complete. Thus, we planned to assess a total of 11 agencies in fiscal year 2007. We performed assessments at 10 of these agencies; we did not assess the Central Intelligence Agency because it did not provide information in response to our requests.
As a result of these assessment efforts, we omitted 4 of the 25 agencies from our analysis: the Central Intelligence Agency, the General Services Administration, and the Departments of Agriculture and Housing and Urban Development. We eliminated the Central Intelligence Agency, because without its participation, we were unable to determine whether it had internal controls ensuring that its data were accurate and complete. We eliminated the General Services Administration and the Departments of Agriculture and Housing and Urban Development because they did not provide evidence of internal controls that would provide reasonable assurance that FOIA data were recorded completely and accurately, or they acknowledged material limitations of the data. As a result, our statistical analysis for this report was based on data from a total of 21 agencies’ annual reports. Table 11 shows the 25 agencies and their reliability assessment status. To describe factors that contribute to FOIA requests remaining open beyond the statutory limits, we analyzed case files for the 10 oldest pending requests at selected agencies and discussed these cases and the reasons they remained open with agency officials. We also interviewed agency officials regarding the factors they considered most relevant for their agencies. To determine to what extent agencies made progress in addressing backlogged FOIA requests since implementing their improvement plans, we analyzed the improvement plan progress reports included in the fiscal year 2006 annual reports of the 21 major agencies whose internal controls we evaluated as sufficient in order to determine whether the agencies met their 2006 backlog reduction milestones. In order to determine whether agencies made a reduction or an increase in backlogged cases, we analyzed statistics provided by the agencies on their backlogs at different points in time. 
We discussed the information in the progress reports and backlog statistics with agency officials to determine their views on the reasons for backlog increases or decreases, as well as their progress on their improvement plans. In addition, we reviewed the requirements for reporting progress contained in the Executive Order, implementation guidance from the Office of Management and Budget and the Department of Justice, other FOIA guidance issued by Justice, and our past work in this area. We conducted this performance audit from May 2007 to March 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The act prescribes nine specific categories of information that are exempt from disclosure. Matters that are exempt from FOIA:

1. (A) Specifically authorized under criteria established by an Executive Order to be kept secret in the interest of national defense or foreign policy and (B) are in fact properly classified pursuant to such Executive Order.
2. Related solely to the internal personnel rules and practices of an agency.
3. Specifically exempted from disclosure by statute (other than section 552b of this title), provided that such statute (A) requires that matters be withheld from the public in such a manner as to leave no discretion on the issue or (B) establishes particular criteria for withholding or refers to particular types of matters to be withheld.
4. Trade secrets and commercial or financial information obtained from a person and privileged or confidential.
5. Interagency or intra-agency memorandums or letters which would not be available by law to a party other than an agency in litigation with the agency.
6. Personnel and medical files and similar files the disclosure of which would constitute a clearly unwarranted invasion of personal privacy.
7. Records or information compiled for law enforcement purposes, but only to the extent that the production of such law enforcement records or information could reasonably be expected to interfere with enforcement proceedings; would deprive a person of a right to a fair trial or an impartial adjudication; could reasonably be expected to constitute an unwarranted invasion of personal privacy; could reasonably be expected to disclose the identity of a confidential source, including a state, local, or foreign agency or authority or any private institution which furnished information on a confidential basis, and, in the case of a record or information compiled by a criminal law enforcement authority in the course of a criminal investigation or by an agency conducting a lawful national security intelligence investigation, information furnished by a confidential source; would disclose techniques and procedures for law enforcement investigations or prosecutions, or would disclose guidelines for law enforcement investigations or prosecutions if such disclosure could reasonably be expected to risk circumvention of the law; or could reasonably be expected to endanger the life or physical safety of any individual.
8. Contained in or related to examination, operating, or condition reports prepared by, on behalf of, or for the use of an agency responsible for the regulation or supervision of financial institutions.
9. Geological and geophysical information and data, including maps, concerning wells.

In addition to the contact named above, key contributions to this report were made by Ashley Brooks, Barbara Collier, Eric Costello, Marisol Cruz, Wilfred Holloway, David Plocher, Kelly Shaw, and Elizabeth Zhao.
Under the Freedom of Information Act (FOIA), federal agencies must generally provide access to their information, enabling the public to learn about government operations and decisions. To help ensure proper implementation, the act requires that agencies report annually to the Attorney General on their processing of FOIA requests. For fiscal year 2006, agencies were also to report on their progress in implementing plans to improve FOIA operations, as directed by a December 2005 Executive Order. A major goal of the order was reducing backlogs of overdue FOIA requests (the statute requires an agency to respond to requests within 20 or, in some cases, 30 working days with a determination on whether it will provide records). For this study, GAO was asked, among other things, to determine trends in FOIA processing and agencies' progress in addressing backlogs of overdue FOIA requests since implementing their improvement plans. To do so, GAO analyzed 21 agencies' annual reports and additional statistics. Based on data reported by major agencies in annual FOIA reports from fiscal years 2002 to 2006, the numbers of FOIA requests received and processed continue to rise, but the rate of increase has flattened in recent years. The number of pending requests carried over from year to year has also increased, although the rate of increase has declined. The increase in pending requests is primarily due to increases in requests directed to the Department of Homeland Security (DHS). In particular, increases have occurred at DHS's Citizenship and Immigration Services, which accounted for about 89 percent of DHS's total pending requests. However, the rate of increase is slightly less than it was in fiscal year 2005. Following the emphasis on backlog reduction in Executive Order 13392 and agency improvement plans, many agencies have shown progress in decreasing their backlogs of overdue requests as of September 2007. 
In response to GAO's request, 16 agencies provided information on their recent progress in addressing backlogs; results showed that 9 achieved decreases, 5 experienced increases, and 2 had no material change. Notably, according to this information, DHS was able to decrease its backlog of overdue requests by 29,972, or about 29 percent. However, the statistics provided by the 16 agencies varied widely, representing both overdue cases and all pending cases, as well as varying time frames. Further, 3 of 21 agencies reviewed were unable to provide statistics supporting their backlog reduction efforts, and 1 provided statistics by component, which could not be aggregated to provide an agencywide result. (The remaining agency reported no backlog before or after implementing its plan.) Tracking and reporting numbers of overdue cases is not a requirement of the annual FOIA reports or of the Executive Order. Although both the Executive Order and Justice's implementing guidance put a major emphasis on backlog reduction, agencies were given flexibility in developing goals and metrics that they considered most appropriate in light of their current FOIA operations and individual circumstances. As a result, agencies' goals and metrics vary widely, and progress could not be assessed against a common metric. The progress that many agencies made in reducing backlog suggests that the development and implementation of the FOIA improvement plans have had a positive effect. However, in the absence of consistent statistics on overdue cases, it is not possible to make a full assessment of governmentwide progress in this area. Justice's most recent guidance directs agencies to set goals for reducing backlogs of overdue requests in future fiscal years, which could lead to the development of a consistent metric; however, it does not direct agencies to monitor and report overdue requests or to develop plans for meeting the new goals. 
Without such planning and tracking, agencies may be challenged to achieve the reductions envisioned.
After the September 11, 2001, terrorist attacks, Congress passed the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT) Act of 2001, which amended and broadened the scope of the Bank Secrecy Act (BSA) to include additional financial industry sectors and a focus on the financing of terrorism. Subsequently, Congress passed the Intelligence Authorization Act for Fiscal Year 2004, which established Treasury's Office of Intelligence and Analysis (OIA). OIA is a member of the Intelligence Community, as defined under Executive Order 12333, as amended. The Intelligence Reform and Terrorism Prevention Act of 2004 identified the Secretary of the Treasury, or his or her designee, as the lead U.S. government official to the Financial Action Task Force (FATF) and directed that official to continue to convene an interagency working group on FATF issues.

TFI's mission is to marshal Treasury's policy, enforcement, regulatory, and intelligence functions in order to safeguard the U.S. financial system from abuse and sever the lines of financial support to international terrorists, WMD proliferators, narcotics traffickers, money launderers, and other threats to U.S. national security. The formation of TFI combined both existing and new units of Treasury. Five key components are included under the umbrella of TFI:

- Office of Foreign Assets Control (OFAC), formed in 1950, administers and enforces sanctions.
- Financial Crimes Enforcement Network (FinCEN), formed in 1990, administers and enforces the BSA and serves as the United States' financial intelligence unit (FIU).
- Treasury Executive Office for Asset Forfeiture (TEOAF), formed in 1992, administers the Treasury Forfeiture Fund—the receipt account for the deposit of non-tax forfeitures made by member agencies.
- Office of Terrorist Financing and Financial Crimes (TFFC), established in 2004, serves as TFI's policy and outreach arm.
- OIA, also established in 2004, performs Treasury's intelligence functions, integrating Treasury into the larger Intelligence Community and providing intelligence support to Treasury leadership.

FinCEN is a Treasury bureau; the other four components are offices within TFI, which is a part of Treasury's structure of departmental offices. Figure 1 shows TFI's current organizational structure. To achieve its mission, TFI components often work with the following:

- Other U.S. government agencies. For instance, OFAC works with State and Justice, among others, to designate individuals and organizations under 21 separate sanctions programs. TFFC also works with State, Justice, and other agencies in developing and advocating a U.S. position in international forums related to money laundering and illicit financing. In addition, TEOAF works with State and Justice to administer, pursuant to international treaties, the sharing of large case forfeiture proceeds with foreign governments whose law enforcement personnel cooperated with U.S. federal investigations.
- Other TFI components. For example, OIA provides information to OFAC to assist in making decisions regarding whether to pursue designations of individuals and organizations. For completed designations, OIA also works with OFAC to declassify intelligence information for public dissemination.
- Private sector. For example, in its role as the Secretary's delegated administrator of the BSA, FinCEN regularly interacts with the private sector, including the financial sector. One such mechanism for maintaining formal ties to the private sector is Treasury's BSA Advisory Group. FinCEN also conducts informal consultations with financial institutions regarding their individual financial intelligence efforts.
- Foreign governments and international organizations. Treasury heads the U.S. delegation to the FATF, an international body that develops and implements multilateral standards relating to anti-money laundering and counterterrorist financing. TFFC leads this effort on behalf of Treasury. Similarly, FinCEN works with foreign governments to develop and strengthen capabilities of their FIUs, as well as to respond to requests for assistance from foreign FIUs, which totaled more than 1,000 in fiscal year 2008.

As shown in figure 2, the size of TFI's staff has grown from approximately 500 in fiscal year 2005 to approximately 650 in fiscal year 2008. FinCEN, with 299 full-time equivalents (FTE) in fiscal year 2008, is TFI's largest component, and OIA gained the most staff—90—from fiscal years 2005 through 2008. As shown in figure 3, TFI's budget has grown from approximately $110 million in fiscal year 2005 to approximately $140 million in fiscal year 2008. With a budget of approximately $86 million, FinCEN has the largest budget of any TFI component. In addition, OIA's budget has grown at the greatest rate, from about $9 million in fiscal year 2005 to about $20 million in fiscal year 2008.

According to TFI, it undertakes five functions in order to achieve its mission. Officials from TFI and its interagency partners cited strong collaboration with TFI in several areas, but differ about the quality of collaboration regarding U.S. participation in some international forums. According to TFI, it undertakes five functions to safeguard the financial system from illicit use and to combat rogue nations, terrorist supporters, WMD proliferators, money launderers, drug kingpins, and other national security threats. These functions are (1) building international coalitions, (2) analyzing financial intelligence, (3) administering and enforcing the BSA, (4) administering and enforcing sanctions, and (5) administering forfeited funds. TFI employs two primary means to build international coalitions to support U.S. national security interests.
These are deepening engagement in international forums and improving international partners’ capacity. Deepening engagement in international forums. TFI and other U.S. agencies participate in several international organizations intended to strengthen the international financial system so that it cannot be exploited by criminal networks. Two examples are the FATF and the Egmont Group. TFFC leads the U.S. delegation to the FATF, while FinCEN leads U.S. participation in the Egmont Group. According to TFI officials, U.S. participation in such organizations provides a unique opportunity to engage with international counterparts in the effort to develop international standards and a framework for countries to implement legal regimes that protect the international financial system from abuse. TFI also uses international forums to advance the U.S. agenda in areas such as nonproliferation. For example, according to TFI, it has been working closely with other G-7 countries to determine what steps can be taken to isolate proliferators from the international financial system through multilateral action. For instance, according to TFI officials, they are working with State to encourage the more than 85 countries that participate in the Proliferation Security Initiative to use financial measures to combat proliferation support networks. In addition to playing a leadership role in these organizations and forums, TFI officials report that they are also working to expand these organizations’ membership so as to broaden the reach of international financial standards. For example, as of March 2009, FinCEN was sponsoring 12 countries’ membership in the Egmont Group, including Afghanistan, Saudi Arabia, Pakistan, and Yemen. According to FinCEN officials, the addition of such new members will greatly strengthen FinCEN’s ability to obtain valuable information related to the activities of illicit financial networks. Improving international partners’ capacity. 
As part of TFI, FinCEN has made engagement with foreign FIUs in the detection and deterrence of crime one of its strategic objectives. To accomplish this objective, FinCEN has undertaken a variety of efforts to strengthen the global network of FIUs. For example, according to FinCEN officials, they engage in a variety of cooperative efforts with other FIUs aimed at fostering productive working relationships and best practices. In addition, according to TFI officials, they participate in mutual evaluation studies, as part of TFI's participation in the FATF, to identify measures to improve other FATF members' regulatory regimes related to combating money laundering and terrorist financing. For example, in fiscal year 2008, the FATF performed six mutual evaluations; the U.S. delegation, led by TFFC, sent representatives to serve as assessors for four of these mutual evaluations. TFI officials cite OIA's analysis of financial intelligence as a critical part of TFI's efforts because it underlies TFI's ability to utilize many of its tools. The first step in disrupting and dismantling illicit financial networks is identifying those networks, according to TFI officials. They said that the creation of OIA was critical to TFI's ability to effectively identify these illicit financial networks. As a member of the broader intelligence community, OIA performs analysis of intelligence information related to national security threats with a view toward potential action and utilization of tools available to TFI. Staff in other TFI components and TFI management then use this intelligence analysis to draft papers to implement such strategies or actions. In addition, TFI utilizes intelligence analysis to assess the impact of the actions it takes. For example, according to the Under Secretary for TFI, intelligence analysts have assessed the impact of previous financial actions taken to address the national security threat posed by North Korea.
Those assessments were then used to shape the U.S. policy response to the most recent missile and nuclear tests by North Korea. According to TFI officials, FinCEN's administration of the BSA plays a key role in TFI's ability to achieve its mission. The BSA includes a variety of reporting and record-keeping requirements that provide useful information to law enforcement and regulatory agencies. For example, pursuant to the BSA, Treasury (FinCEN) requires financial institutions to report suspicious financial activities relevant to a possible violation of law. Such suspicious activity reports (SAR) are then analyzed by FinCEN and made available to the law enforcement and regulatory communities. In 2007, financial institutions filed nearly 1.3 million SARs, which federal, state, and local law enforcement agencies use in their investigations of money laundering, terrorist financing, and other financial crimes. The BSA, as amended by the USA PATRIOT Act, also grants Treasury additional authorities, which are delegated to FinCEN, to combat money laundering and terrorist financing. For example, Section 311 of the USA PATRIOT Act amended the BSA to provide an additional tool to safeguard the U.S. financial system from illicit foreign financial institutions and networks. According to TFI officials, Section 311 is an important and extraordinarily powerful tool, as it authorizes Treasury to find a foreign jurisdiction, foreign financial institution, type of account, or class of transaction as being of "primary money laundering concern." Such a finding enables Treasury to impose a range of special measures that U.S. financial institutions must take to protect against illicit financing risks posed by the target. These special measures range from enhanced record-keeping and reporting requirements up to prohibiting U.S.
financial institutions from maintaining certain accounts for foreign banks if they involve foreign jurisdictions or institutions found to be of primary money laundering concern. The imposition of economic sanctions has been a long-standing tool for addressing a range of national security threats. OFAC currently maintains primary responsibility for administering more than 20 separate sanctions programs. (See app. II for a list of current U.S. sanctions programs.) These sanctions programs fall into two categories: (1) country-based programs that apply sanctions to an entire country—such as Cuba, Iran, or Sudan— and (2) targeted, list-based programs that address individuals or entities engaged in specific types of activities such as terrorism, WMD proliferation, or narcotics trafficking. For example, according to TFI officials, they use the authorities under the International Emergency Economic Powers Act and Executive Order 13224 to designate those who provide support to terrorists, freezing any assets they have under U.S. jurisdiction and preventing U.S. persons from doing business with them. From fiscal years 2004 through 2008, Treasury designated or supported the designation of more than 1,900 individuals and organizations under various sanctions programs. To help ensure compliance with U.S. sanctions programs, Treasury also has the authority to impose civil penalties on individuals and organizations that violate U.S. sanctions. From 2004 through 2008, OFAC imposed more than 1,500 civil penalties related to violations of its sanctions programs. As a result, OFAC assessed nearly $15 million in penalties. According to TEOAF, an important tool in the U.S. fight against money laundering is asset forfeiture. Forfeiture assists in the achievement of TFI’s mission in two ways. First, asset forfeiture strips away the profit from illegal activity, thus making it less attractive. 
According to TEOAF, in fiscal year 2008 it received more than $500 million in total forfeiture revenue; the majority, after net expenses, came from forfeitures processed by Immigration and Customs Enforcement and the Internal Revenue Service–Criminal Investigation. Second, according to the Director of TEOAF, the revenue derived from such forfeited assets can be used to fund federal law enforcement activities, including initiatives directed at further combating illicit financing networks. For example, in fiscal year 2008, TEOAF provided approximately $1 million in funding to Immigration and Customs Enforcement to provide training to international partners. Specifically, the funding was provided to allow the expansion of existing training activities to assist in combating bulk cash smuggling by terrorist groups and other criminal networks. Collaborating with interagency partners is important to TFI’s ability to perform effectively. Many of the tools TFI utilizes to combat national security threats involve multiple agencies reviewing the proposed action. For example, according to Treasury officials, they consult with officials from State, Justice, and the Department of Homeland Security on decisions to designate individuals or organizations that support terrorism. In addition, other tools, such as advocating actions to strengthen the international financial system through the FATF, benefit from the expertise and input from collaboration with a variety of agencies, including State, Justice, the Securities and Exchange Commission, the Department of Homeland Security, and others. Prior GAO work has identified several practices that can enhance and sustain such interagency collaboration. One such practice is establishing compatible policies, procedures, and other means to operate across agency boundaries. Another practice is developing a mechanism for monitoring, evaluating, and reporting on the results of collaborative efforts. 
Officials at TFI and other agencies said that they generally are satisfied with the quality of interagency collaboration. TFI’s interagency partners report close, collaborative relationships in many situations. For example, State officials told us that they have strong working relationships with officials in almost all TFI components. They highlighted their collaboration with TFI during the designation process and suggested that it is generally effective. These officials commented that if State has information from its embassies abroad that indicates that a specific designation would be particularly damaging to U.S. foreign policy interests, they relay this information to Treasury and discuss alternative approaches. State officials added that the designation process operates effectively, even when agencies may have disagreements over a particular designation, because the National Security Council leads a process to coordinate terrorism designations. It serves as an impartial arbiter that prevents any single agency from exerting too much influence. In addition, Justice officials described a strong working relationship with FinCEN regarding asset forfeiture and money laundering issues. Specifically, they recounted effective communication and information sharing. For example, Justice officials told us that FinCEN has granted Justice access to BSA data, thus allowing Justice to perform its own analyses for law enforcement purposes. Additionally, Justice officials said that FinCEN has helped them utilize its network of international contacts at other countries’ FIUs. However, TFI’s interagency partners have expressed concerns regarding collaboration in other areas. For example, in September 2008, we reported that State and Justice expressed concerns regarding Treasury’s consultations with them when implementing Section 311 of the USA PATRIOT Act. 
In addition, TFI and other agencies’ officials differed about the effectiveness of interagency collaboration for the function of building international coalitions, particularly when participating in the international forums of the FATF and FATF-Style Regional Bodies (FSRB). On the one hand, TFFC officials suggested that interagency collaboration regarding the FATF and FSRBs has been highly effective over the past 5 years and that Treasury’s ability to effectively lead the U.S. delegation has been greatly strengthened by the participation of a wide variety of regulatory, law enforcement, and other agencies. The Deputy Assistant Secretary for Terrorist Financing and Financial Crimes added that during this time, there have been no major disagreements between agencies regarding the positions the United States should take in such international forums. TFI officials also stated that interagency collaboration runs smoothly and that they were unaware of any significant concerns regarding the quality of interagency collaboration. Officials from State and Justice, however, indicated that the quality of interagency collaboration regarding the FATF and FSRBs has declined substantially over the past 5 years. These officials expressed two types of concerns about TFI’s collaboration with other agencies in international forums: (1) the exclusion of non-Treasury personnel in key situations and (2) the extent to which TFI makes unilateral decisions regarding the U.S. government position. With regard to TFI’s exclusion of non-Treasury personnel in key situations, TFI and other agencies differed. State and Justice officials cited several examples of situations they believe undermined U.S. effectiveness at combating illicit financing networks. For example, according to State officials, a State official who has taken the necessary training has not been allowed to participate as a member of the U.S. team conducting FATF mutual evaluations. 
According to these officials, this results in the exclusion of senior staff with significant experience and expertise that could benefit the evaluation teams. In response, TFFC officials indicated that they have included other agencies in the mutual evaluation process. For example, they indicated that officials from Justice and other agencies participated in at least six mutual evaluations from 2004 through 2009. According to TFI, it encourages and attempts to facilitate such participation by other agency officials who have attended the necessary 1-week training course and whose agencies will pay for their travel to foreign countries to conduct and defend their evaluations. Additionally, Justice officials stated that when TFI allows other agencies to review and comment on U.S. policy proposals related to anti-money laundering and counterterrorist financing, it consistently provides too little time for review. Specifically, Justice officials told us that TFI regularly provides agencies 24 hours to review and provide comments on policy proposals, which may make it impossible for agencies to conduct an appropriate review, effectively excluding them from the process. According to TFI officials, they distribute materials as soon as possible; for FATF materials this occurs within 24 hours of receiving them, though they acknowledge that they often are provided short deadlines by the FATF Secretariat. According to TFI officials, they sometimes request an extension of the deadline or submit the U.S. response late in order to obtain interagency views. With regard to concerns about TFI’s unilateral decision making, TFI and other agencies also differed. State and Justice officials cited a situation related to the U.S. position on how to treat the European Union (as a single entity or as separate countries) for the purposes of cash-smuggling regulations. 
According to State and Justice officials, during interagency meetings prior to the FATF working group session at which the issue was to be discussed, a consensus U.S. position was developed. However, State and Justice officials said that at the FATF plenary meetings, Treasury officials advocated a position that was different from the consensus U.S. position agreed to in advance of the meeting. A Treasury official told us that the agency did not deviate from the consensus position agreed to before the meeting. Justice, State, and Treasury officials said that there is no guidance specifying how the interagency process should operate to develop U.S. positions in advance of FATF meetings. Specifically, there is no guidance regarding the process or time frames for circulating or approving U.S. policy statements to be made at international meetings to discuss anti-money laundering and counterterrorist financing issues. In addition, there is no formal mechanism for monitoring, evaluating, or reporting on the results of agencies’ collaborative efforts. According to State and Justice officials, the inconsistent quality of interagency collaboration may undermine some efforts to combat illicit financing networks through international forums. State officials suggested that the exclusion of non-Treasury personnel may mean that expertise available within the U.S. government is not effectively utilized, thus potentially weakening the United States’ ability to influence international partners’ actions. In addition, they suggested that unilateral action by Treasury in international forums may cause confusion among international partners regarding the nature of the U.S. position on key issues. On the basis of comments they received from foreign officials, Justice and State officials concluded that such confusion might weaken the United States’ ability to influence the activities of international partners. 
TFFC responded that it has not observed any confusion among its international partners in FATF regarding the U.S. position on key issues. Justice and State officials did not raise similar concerns about FinCEN’s collaboration when participating with them on issues related to the Egmont Group. In contrast, Justice officials expressed some criticisms of more recent collaboration with OFAC on issues such as information sharing. OFAC responded that it has regular contact with Justice with respect to enforcement matters and that the two agencies have an ongoing dialogue regarding information sharing. OFAC also noted that only a small subset of its enforcement cases involve the type of knowing conduct that is appropriate for referral to criminal authorities.

While TFI has conducted strategic planning activities at different levels within the organization, TFI as a unit has not fully adopted certain key practices. In particular, TFI has not clearly aligned its resources with its priorities. TFI’s strategic planning documents do not consistently integrate discussion of the resources needed to achieve TFI’s strategic objectives. In addition, TFI’s resource levels for each component cannot be clearly linked to its workload. Also, while some TFI components have taken the initiative to conduct some workforce planning activities, TFI management has not developed a process for conducting comprehensive strategic workforce planning. Our review of TFI’s and its components’ strategic planning documents and discussions with TFI officials showed that TFI has not clearly aligned its resources with its priorities. TFI officials indicated that priorities could be identified in TFI’s strategic plan. TFI identified four relevant strategic plans: one for TFI as a whole and one each for FinCEN, OIA, and TEOAF. Strategic plans are used to communicate what an organization seeks to achieve in the upcoming years, according to Treasury instructions. 
The goals and strategies presented in the plan provide a road map for both the organization and its stakeholders. Strategic plans should guide the formulation and execution of the budget as well as other decision making that shapes and guides the organization. These plans are a tool for setting priorities and allocating resources consistent with these priorities, according to Treasury. Our previous work has shown that strategic plans should clearly link goals and objectives to the resources needed to achieve them. Such linkage is especially important when, as here, an agency submits a strategic plan for each of its major components along with a strategic overview that, under the guidance, is to show the linkages among these plans. Government Performance and Results Act guidance also establishes six key elements of successful strategic plans, and Treasury’s instructions suggest plan formats. However, we found that TFI’s and its components’ strategic plans do not consistently integrate discussion of the resources necessary to achieve TFI objectives. Specifically, we found that FinCEN’s and TEOAF’s strategic plans contain some discussion of the resources needed to achieve their objectives. TFI’s and OIA’s strategic plans do not contain discussion of the resources needed to achieve their objectives. OFAC and TFFC do not currently have strategic plans. While TFI’s strategic plan includes a mission statement; a list of threats, goals, and objectives; and means and strategies, it does not include any discussion or analysis of TFI’s resource needs. Moreover, TFI’s strategic plan lists all four of its goals, and each of the means and strategies under each goal, as equivalent: it does not indicate any prioritization among its various goals, means, and strategies. The Under Secretary for TFI said that he uses the annual budget process to align resources with priorities. 
However, for two reasons, the results of the budget process do not necessarily reflect TFI’s strategic priorities. First, there are many other factors that affect the budget process that are unrelated to TFI’s priorities. The amount of resources TFI seeks is integrated into a larger Treasury budget request, which may entail modifying TFI’s request. Congress may then provide TFI with more or fewer resources than Treasury requested. Second, the annual budget process reflects priorities only for a given year, unlike strategic plans, which are intended to be multiyear documents and thus reflect longer-term priorities. Further, the linkage between the resources allocated to each TFI component and its workload is unclear. Estimated workload measures for each of TFI’s components show a growth in workload since 2005, but it is unclear how this growth relates to resource increases. For example, one measure of FinCEN’s workload—the number of SARs it must analyze—has increased 50 percent, while the number of employees in FinCEN has increased 3 percent. In addition, TEOAF has seen an 83 percent increase in the value of seized assets it manages, while the number of FTEs has grown 10 percent. Further, the number of OFAC licensing actions increased 56 percent while the number of FTEs grew 18 percent. Additionally, OIA experienced a more than 500 percent increase in intelligence taskings from 2006 to 2008 and has received a 200 percent increase in FTEs. Finally, TFFC estimates that its workload related to developing policy papers, legislative and rulemaking papers, trips, and public outreach events increased between 100 and 200 percent from 2005 to 2009; its FTEs grew nearly 80 percent from 2005 to 2008. According to TFI officials, their ability to allocate resources to their highest priorities is constrained in some circumstances. 
The Under Secretary and other TFI officials identified activities related to Iran and North Korea as persistent priorities. However, OFAC officials noted that in spite of the importance of Iran- and North Korea-related activities, they must expend a significant amount of resources on implementing the Cuba embargo. With regard to acting on specific licensing requests for exports and travel to Cuba, according to OFAC officials, they have little flexibility under the law. OFAC is required to process all license applications that it receives. For 2005 through 2008, this amounted to more than 200,000 licensing actions—more than 95 percent of which related to the Cuba program. In 2008 alone, OFAC responded to nearly 60,000 licensing requests related to the Cuba travel program. OFAC officials characterized this situation as a resource burden. In contrast, according to OFAC officials, they have some flexibility regarding how they enforce the Cuba sanctions program, for example, through the assessment of civil penalties for violations. According to OFAC officials, for many years (through 2005), OFAC assessed a large number of civil penalties related to the Cuba travel regulations. As violations of these regulations have a relatively small financial penalty associated with them, the average penalty amount was relatively low. Since 2006, according to OFAC officials, they have consciously utilized the flexibility they are allowed in order to dedicate their enforcement resources to higher-value areas (e.g., those related to trade with Cuba, Iran, and North Korea). As a result, the number of penalties assessed annually related to the Cuba sanctions program has dropped significantly, from 498 in 2005 to 46 in 2008. At the same time, the average value of OFAC’s civil penalties for violations of all sanctions programs has increased significantly, from approximately $2,400 in 2005 to nearly $31,000 in 2008. 
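The scale of this enforcement shift can be checked with simple arithmetic on the figures cited above. The sketch below is purely illustrative: the penalty counts refer to the Cuba program only, while the average penalty values cover all sanctions programs, so the two series describe related but distinct trends and cannot simply be multiplied together.

```python
# Percent changes implied by the figures cited in this report.
# Note: penalty counts are for the Cuba sanctions program only, while
# average penalty values cover all sanctions programs.

def pct_change(old: float, new: float) -> float:
    """Percent change from old to new."""
    return (new - old) / old * 100

cuba_penalty_counts = {2005: 498, 2008: 46}
avg_penalty_all_programs = {2005: 2_400, 2008: 31_000}

count_change = pct_change(cuba_penalty_counts[2005], cuba_penalty_counts[2008])
value_change = pct_change(avg_penalty_all_programs[2005],
                          avg_penalty_all_programs[2008])

print(f"Cuba-program penalties assessed: {count_change:.0f}% change")   # about -91%
print(f"Average penalty, all programs: {value_change:+.0f}% change")    # about +1192%
```

In other words, OFAC assessed roughly 91 percent fewer Cuba-related penalties in 2008 than in 2005, while the average penalty across all programs rose to nearly 13 times its 2005 level.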
Despite efforts by some components, TFI management has not yet conducted comprehensive activities to address the key principles of strategic workforce planning. According to the Under Secretary, TFI’s workforce is its greatest asset, and ensuring that it is the right size and includes the right skills is critical to TFI’s future ability to achieve its mission. Prior GAO work has identified key principles to assist agencies in conducting strategic workforce planning. Among these principles are (1) involving top management, employees, and other stakeholders in developing, communicating, and implementing the strategic workforce plan, and (2) monitoring and evaluating the agency’s progress toward its human capital goals and the contribution that human capital results have made toward achieving programmatic results. According to TFI officials, some TFI components have taken the initiative individually to perform some strategic workforce planning activities. Specifically, as a Treasury bureau, FinCEN has an internal human resources group that, among other things, performs some strategic workforce planning activities. For example, according to FinCEN officials, they undertook an effort to identify mission critical occupations, which resulted in designating three positions as mission critical. As a result, FinCEN developed plans to address human capital challenges related to these occupations and regularly reports to Treasury’s Office of the Deputy Assistant Secretary for Human Resources and Chief Human Capital Officer on its progress. In addition, OIA has taken a variety of steps to address human capital challenges. For example, according to OIA officials, to address challenges in recruiting and retaining intelligence analysts, OIA cataloged the human capital flexibilities available to provide recruiting and retention incentives. 
As a result, OIA officials indicated that they have identified and are now able to utilize a variety of human capital flexibilities, such as student loan repayment to attract and retain staff and the Pat Roberts Intelligence Scholarship Program to pay for the continuing educational needs of its analysts. Nonetheless, TFI management has not yet conducted comprehensive activities to address the key principles of strategic workforce planning for TFI as a whole. TFI top management has not set the overall direction and goals of workforce planning or evaluated progress toward any human capital goals. The Under Secretary for TFI told us that since the creation of TFI, growing OIA’s human capital has been one workforce planning priority. He also stated that he has conducted additional targeted workforce planning in consultation with the heads of the largest TFI components, such as FinCEN. However, neither TFI officials nor Treasury human capital officials were aware of any explicit workforce planning goals set by TFI management. In addition, TFI officials were unaware of any formal reviews or reports that evaluated the contribution of human capital results to achieving programmatic goals. Moreover, TFI currently lacks an effective process for conducting comprehensive strategic workforce planning. According to the Under Secretary for TFI, most workforce planning takes place as a part of the annual budget process. TFI has not established a separate, comprehensive strategic workforce planning process led by TFI management. According to an official from Treasury’s Office of the Deputy Assistant Secretary for Human Resources and Chief Human Capital Officer, the office has provided targeted workforce planning assistance to OIA and, in spring 2009, began discussing how they could assist TFI in broader workforce planning efforts. In particular, they cited the need to conduct an overall workforce analysis and succession planning. 
According to TFI’s Senior Resource Manager, TFI’s workforce planning mainly occurs as a component of the annual budget preparation process. As a part of this process, individual components can request additional staff resources for priority initiatives they identify. TFI management then evaluates these individual proposals and determines what will be included in TFI’s budget request. Without the benefit of comprehensive strategic workforce planning to assist in identifying solutions, it is unclear whether TFI will be able to effectively address persistent workforce challenges. These include the following:

Lack of comprehensive training needs assessment. While some TFI components have assessed the training needs of their staff, there has been no similar TFI-wide effort. Without such an assessment, it is unclear whether TFI staff are being prepared to address the challenges posed by illicit financing in the future.

Obstacles to hiring intelligence analysts. According to officials from OIA and Treasury’s Office of the Deputy Assistant Secretary for Human Resources and Chief Human Capital Officer, OIA continues to be at a competitive disadvantage relative to other agencies in the Intelligence Community regarding recruiting. Specifically, according to Treasury officials, most other agencies in the Intelligence Community can hire intelligence analysts into the excepted service, thus bypassing the need for competitive selection of candidates. In addition, OIA lacks direct hire authority for its intelligence analysts. According to OIA officials, these challenges make OIA’s hiring process more complicated and lengthier than those of other agencies in the Intelligence Community.

TFI has not yet developed an appropriate set of performance measures, though it continues working to improve them. 
Since TFI was formed, its individual performance measures have varied substantially in number and the extent to which they address attributes of successful performance measures that GAO has identified. For fiscal year 2008, the performance measures of TFI’s components vary in the extent to which they address attributes of successful performance measures identified by GAO. TFI’s performance measures address many, but not all, of these attributes. According to Treasury officials, TFI recognizes the need to improve its performance measures and is developing a new set of measures to assess its performance. However, our review of a draft version of these revised measures suggests that some concerns would remain if they are implemented as proposed. As shown in figure 4, since its formation in 2004, TFI’s performance measures have varied over time. TFI reported on 11 total measures in fiscal year 2005, 9 measures in fiscal year 2006, 10 measures in fiscal year 2007, and 20 measures in fiscal year 2008. The number and content of performance measures have varied within components over time, as well. For example, FinCEN had 6 measures in fiscal year 2007 and 16 in fiscal year 2008. Components have frequently introduced new measures only to discontinue them in subsequent years. For instance, OFAC reported 4 measures in fiscal year 2005, and then discontinued 3 for fiscal year 2006. OIA, newly formed in 2004, reported 1 performance measure in fiscal year 2006 and none in the following years. The extent of inconsistency in TFI’s performance measures creates challenges for managers in using performance data to make management decisions. According to TFI officials, the sharp increase in the number of performance measures reported in fiscal year 2008 was a response to the evaluation and recommendations of the Office of Management and Budget’s (OMB) Program Assessment Rating Tool (PART) in 2005 and 2006. 
The PART process identified potential enhancements to FinCEN’s performance measures, leading to the inclusion of new measures for FinCEN. FinCEN officials said that Treasury performance officials asked that the newly developed measures be added to FinCEN’s contribution to the fiscal year 2008 performance and accountability report. According to officials in Treasury’s Office of Strategic Planning and Performance Management (OSPPM), the nature of FinCEN’s work is operational, making it easier to evaluate the bureau’s performance. TFI’s policy-making components, such as TFFC, have found it more difficult to develop meaningful performance metrics. The performance measures TFI currently has in place also vary in the degree to which they exhibit the attributes of successful performance measures. Prior GAO work has identified nine attributes of successful performance measures. Table 1 shows the nine attributes, their definitions, and the potentially adverse consequences of not having the attribute. TFI’s performance measures address many of these attributes of successful performance measures, but do not fully address other attributes. Figure 5 represents our assessment of TFI’s 20 performance measures versus the key attributes of successful performance measures. According to our analysis, TFI’s 20 measures have many of the attributes of successful performance measures, including the following:

Measurable target. All 20 of TFI’s measures have measurable, numerical targets in place. Numerical targets allow officials to more easily assess whether goals and objectives were achieved because comparisons can be made between projected performance and actual results.

Limited overlap. We found limited overlap among TFI’s 20 measures, that is, little or no unnecessary or duplicate information provided by the measures.

Objectivity. We found all of TFI’s measures to be objective, or reasonably free from significant bias.

Governmentwide priorities. 
We also determined that TFI’s 20 measures are linked to broader priorities such as cost-effectiveness, quality, and timeliness. However, the measures did not fully satisfy the following attributes:

Linkage. Six TFI measures are not clearly linked to Treasury goals. For example, TEOAF measures the proportion of its forfeitures that come from high-impact cases. However, it is unclear why high-impact cases in particular are measured as opposed to all cases. Our analysis could not link TEOAF’s measure to broader agencywide goals related to removing or reducing threats to national security.

Core program activities. Seven TFI measures do not sufficiently cover core program activities. For example, OFAC has three main responsibilities related to the administration of sanctions: (1) issuing licenses, (2) designation programs, and (3) enforcement through civil penalties. However, OFAC’s one performance measure only assesses cases involving civil penalties resulting from sanctions violations.

Balance. We found that TFI’s set of performance measures is not balanced. In fiscal year 2008, TFI reported on 20 measures, 16 of which related to FinCEN’s programs and activities, 1 that related to OFAC, 1 that related to TEOAF, 2 that related to TFFC, and none that related to OIA. As a result, a disproportionate number of measures (16) relate to administering and enforcing the BSA and none to the analysis of financial intelligence. An emphasis on one priority at the expense of others may skew the overall performance and preclude TFI’s managers from understanding the effectiveness of their programs in supporting Treasury’s overall mission and goals. In addition, the lack of balance exhibited by TFI’s measures may give the impression that administering the BSA is prioritized over other functions, such as the analysis of financial intelligence or administration of licensing and designations programs. 
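The imbalance described above is easy to see when the measure counts are tallied by component. The short sketch below simply computes each component’s share of the fiscal year 2008 set from the counts reported here; it is an illustration, not an official TFI calculation.

```python
# Share of TFI's 20 fiscal year 2008 performance measures held by each
# component, using the counts cited in this report (illustrative only).
fy2008_measures = {"FinCEN": 16, "TFFC": 2, "OFAC": 1, "TEOAF": 1, "OIA": 0}
total = sum(fy2008_measures.values())  # 20

for component, count in fy2008_measures.items():
    print(f"{component:>6}: {count:2d} of {total} measures ({count / total:.0%})")
```

FinCEN’s 16 measures alone account for 80 percent of the set, while OIA, the component responsible for analyzing financial intelligence, has none.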
Treasury officials acknowledge the limits of TFI’s current performance measurement and have been working to enhance its measures by replacing them with a single new TFI-wide measure. According to OSPPM officials, they began an initiative to overhaul TFI’s performance measurement in 2007. OSPPM officials stated that TFI’s performance measures did not effectively reflect the impact of TFI’s activities. After consultation with each TFI component, OSPPM decided to design a new composite measure that will provide a way to assess how TFI is performing overall as a unit. The new measure would outline the roles and functions of TFI’s components and evaluate the outcomes of their activities. However, the process of reforming TFI’s performance measurement has not been completed. The implementation of the new measure is still uncertain, although TFI management approved its use in May 2009 and components finalized the measures they will contribute. According to a Treasury official, OSPPM decided on the format of the new composite measure after researching other federal agencies’ approaches to performance measurement, as well as those of management consultancies in the private sector. The composite measure takes a similar form to the measure implemented for Treasury’s Office of Technical Assistance (OTA), first reported in Treasury’s fiscal year 2008 performance and accountability report. That measure aims to provide a more comprehensive snapshot of the outcome of OTA’s activities by measuring impact and traction. The composite measure for TFI will align the two Treasury outcomes that relate to TFI’s activities with TFI’s performance goals and focus areas, according to Treasury. Each focus area corresponds with a TFI component (OFAC, OIA, TFFC, and FinCEN). The components will track 3 to 6 performance measures and will assign a numeric score to the performance at the end of the year. Each component’s measures will be combined to reach an overall score for the component. 
In the end, an overall score for TFI will be determined by averaging the individual scores of the components. All TFI components except TEOAF have been involved in the process of developing the composite measure. Both OSPPM and TEOAF officials stated that TEOAF would not be included, since its work did not logically fit in one of the focus areas. OIA, TFFC, and OFAC have developed new measures to assess the impact of their activities. FinCEN will use 5 of its existing measures for its contribution to the composite measure.

TFI faces significant challenges in developing and implementing the new composite measure. There is an inherent difficulty in creating quantitative measures for policy organizations, whose activities may not be easily represented with numbers. Many TFI managers pointed to the difficulty of making qualitative information measurable for performance reporting. While the initiative to improve TFI’s performance measurement is a positive step, our preliminary analysis raises concerns regarding the extent to which the new TFI composite measure will allow full and accurate assessment of TFI’s performance. For example, we identified the following concerns:

Objectivity and reliability of survey-based measures. OIA has developed surveys to measure the timeliness, relevance, and accuracy of its intelligence support, all-source analysis, and security and counterintelligence. The survey respondents are internal customers of OIA’s products within Treasury such as the Deputy Secretary, Under Secretaries, Assistant Secretaries, Deputy Assistant Secretaries, and senior staff. The objectivity of the surveys is not clear given that respondents’ answers may be biased because they have a vested interest in the outcome, as it is a reflection on their performance. The reliability of the measures is also questionable, as only between 7 and 13 internal customers—rather than external customers in the Intelligence Community—will be asked to complete the survey. 
TFI believes that while there is no perfect method for evaluating OIA’s performance, the surveys are an effective means for Treasury policymakers to assess OIA’s performance. They also noted their plan to survey customers in other parts of the Intelligence Community in 2010.

Lack of validation for some components’ self-assessment-based measures. Some components’ performance measures rely exclusively on self-assessments by component managers and lack external verification. For example, TFFC has 4 measures for which management will compile supporting information and assign a high, medium, or low rating for TFFC’s performance in that area. Treasury and TFI acknowledge (but have not yet addressed) a lack of a process to independently verify TFFC’s self-assessment. OTA’s composite measure, which OSPPM officials cited as similar to TFI’s, also uses elements of self-assessment, but those results are independently validated by an external source and reviewed by Treasury.

Calculation of overall TFI score. According to TFI, to calculate the composite measure, individual components’ results will be averaged into a single TFI measure. Since the components are not all contributing the same number of measures to the overall composite measure, averaging components’ scores means components’ individual performance measures are not weighted equally in TFI’s overall measure.

Since its creation in 2004, TFI has undertaken a variety of activities to address a broad range of national security threats, such as enhancing the use of financial intelligence against terrorism and the proliferation of weapons of mass destruction. In addition, TFI and its components have taken some steps toward more effective management of TFI as an organization. For instance, TFI and some components have developed strategic plans and have performed workforce planning activities. 
Nonetheless, TFI has not fully utilized some management tools to create an integrated organization with a consistent, well-documented approach to planning and managing its operations. As a result, additional opportunities for improvement exist. First, despite the critical role interagency collaboration plays in many of TFI’s functions and general approval by key interagency partners, such collaboration may not be as effective as it could be in certain respects. TFI and some of its interagency partners had strikingly different perceptions about the quality of collaborative efforts involving multilateral forums. Lacking clearly documented policies and procedures for collaboration in this area, interagency partners were unsure how to resolve their differences. Without a mechanism to monitor and report on the results of such interagency collaboration, TFI officials were generally unaware that differences existed or what impact they might be having, and thus saw no need to take steps to understand or address them. Second, TFI management has not clearly aligned its resources with its priorities. Without clear, consistent objectives and an understanding of how resources are aligned with them, it may be unclear to Congress, TFI’s interagency partners, or even TFI staff what TFI’s priorities are and whether TFI has sufficient resources to address them. In addition, while some components have undertaken workforce planning activities, TFI management has yet to implement a comprehensive strategic workforce planning process for TFI as a whole. As a result, TFI may be at risk of not having the workforce required to address future national security threats. Finally, TFI’s performance reporting has been uneven. Though TFI has been working to improve its ability to effectively measure its performance as a unit, TFI has not yet developed a set of performance measures that embody the attributes of successful performance measures. 
Without a set of effective performance measures, it is difficult to judge how well TFI is achieving its mission. To help strengthen Treasury’s ability to achieve its strategic goal of preventing terrorism and promoting the nation’s security through strengthened international financial systems, we recommend that the Secretary of the Treasury direct the Under Secretary for Terrorism and Financial Intelligence to take the following four actions:

1. develop and implement, in consultation with interagency partners participating in international forums related to anti-money laundering and counterterrorist financing issues, (a) compatible policies, procedures, and other means to operate across agency boundaries and (b) a mechanism for monitoring, evaluating, and reporting on interagency collaboration;

2. develop and implement policies and procedures for aligning resources with TFI’s strategic priorities;

3. develop and implement a TFI-wide process, including written guidance, that addresses the key principles of strategic workforce planning; and

4. ensure that TFI’s performance measures exhibit the key attributes of successful performance measures.

We provided a draft copy of this report to the Departments of the Treasury, State, and Justice. Justice and State declined to provide comments. Treasury provided comments, which are reprinted in appendix IV. Treasury’s comments highlighted what it views as TFI’s significant contributions since 2005. Treasury said that TFI has helped reduce the threat of terrorist financing, stating that al Qaeda is in its worst financial position in at least 3 years. In addition, Treasury highlighted TFI’s efforts to counter the financing of proliferation, for example, using Executive Order 13382 to isolate banks, companies, and individuals tied to North Korean, Iranian, and Syrian proliferation.
Treasury’s comments also discussed ongoing or planned actions related to our four recommendations:

With regard to our recommendation that TFI develop and implement policies and procedures to operate across agency boundaries and develop a mechanism for monitoring, evaluating, and reporting on interagency collaboration, the Under Secretary for Terrorism and Financial Intelligence indicated that his counterparts in other agencies have never expressed concerns about process or substance to him regarding TFI’s collaboration. Nonetheless, Treasury stated that it would redouble its efforts to coordinate with other agencies, but did not identify specific steps it plans to take. As discussed in our report, we recommend that such steps include developing clear policies for conducting and monitoring the results of interagency collaboration.

In response to our recommendation to develop and implement policies and procedures for aligning resources with TFI’s strategic priorities, Treasury indicated that TFI is working to improve its processes in this area. While Treasury stated that its use of the annual budget process has worked well to match resources to strategic goals, we have concluded that the annual budget process does not necessarily reflect TFI’s strategic priorities, in part because it reflects priorities for only a given year and not longer-term priorities.

In relation to our recommendation to develop and implement a TFI-wide process to address the key principles of strategic workforce planning, Treasury commented that it is working with Johns Hopkins University’s Capstone Consulting to develop a workforce planning model for Treasury. As a part of this effort, TFI plans to develop and disseminate written guidance establishing a process to align resources with TFI and Treasury strategic goals in the next 12 months.
Finally, Treasury stated that it will work to implement our recommendation to ensure that TFI’s performance measures exhibit the key attributes of successful performance measures. At the same time, Treasury contends that TFI’s true performance will often be best conveyed through briefings to those who possess the appropriate security clearances. To ensure that such briefings provide systematic evidence regarding TFI’s performance, they should include assessments based on performance measures that exhibit the key attributes of successful performance measures discussed in this report. Further, we would note that using classified information to help assess TFI’s performance does not preclude TFI from developing unclassified performance measures or from producing an unclassified assessment of its performance. In fact, Treasury’s statements about the financial condition of al Qaeda referenced in its response to this report provide Treasury’s assessment of TFI’s impact on al Qaeda without disclosing classified information.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees as well as the Secretaries of the Treasury, State, and Justice. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4347 or YagerL@gao.gov. GAO staff who contributed to this report are included in appendix V.

To analyze the Office of Terrorism and Financial Intelligence’s (TFI) use of its tools to address national security threats, we reviewed Treasury reports and documents related to its efforts since 2004.
For example, we reviewed all of Treasury’s performance and accountability reports and FinCEN’s annual reports since TFI was formed. We also reviewed other documents discussing activities involving TFI, including the National Money Laundering Strategy and the National Strategy for Combating Terrorism. To identify practices for enhancing interagency collaboration, we reviewed prior GAO reports. We then interviewed officials from Treasury and its key interagency partners (the Departments of State and Justice) to understand TFI’s processes for interagency collaboration. To analyze TFI’s efforts to conduct strategic resource planning, we reviewed a variety of Treasury documents. To identify TFI’s priorities, we reviewed documents such as Treasury’s performance and accountability reports, congressional testimony by the Under Secretary for Terrorism and Financial Intelligence, and TFI’s Web site. In addition, we reviewed documentation from TFI and its components related to strategic planning, including the current strategic plans for TFI and each component. Further, we reviewed TFI data regarding the number of staff (full-time equivalents or FTE) in each TFI component for fiscal years 2005 through 2008. We then obtained data from TFI components to illustrate how their workload has changed over time. We determined that these data are sufficiently reliable for the purpose of this report. Additionally, we reviewed prior GAO work related to principles of effective strategic workforce planning. To determine the extent to which TFI’s practices reflect these principles, we interviewed TFI management, including the Under Secretary for Terrorism and Financial Intelligence and managers from TFI components. Further, we interviewed officials from Treasury’s Office of the Deputy Assistant Secretary for Human Resources and Chief Human Capital Officer. 
To analyze the extent to which TFI’s performance measures provide an effective assessment of TFI’s performance, we reviewed Treasury’s reporting on TFI’s performance. Specifically, we analyzed the performance measures contained in Treasury’s performance and accountability reports for fiscal years 2005 through 2008. We also evaluated TFI’s performance measures for fiscal year 2008 against key attributes of successful performance measures. To perform this evaluation, two analysts independently assessed each of the performance measures against the nine attributes, using the specifications for each attribute included in that report. Those analysts then met to discuss and resolve any differences in the results of their analyses. A supervisor then reviewed and approved the final results of the analysis. To obtain information on TFI’s process to improve its set of performance measures, we interviewed officials from each TFI component and Treasury’s Office of Strategic Planning and Performance Management. We also obtained a copy of draft TFI performance measures that will be presented to the Office of Management and Budget for its review. We then interviewed officials from each TFI component and Treasury’s Office of Strategic Planning and Performance Management regarding how the data for these draft performance measures would be obtained and how the overall TFI composite measure would be developed. We also present data on TFI staffing and budget for fiscal years 2005 through 2008. As these data are presented for background purposes, we did not assess their reliability.

Appendix II: Current U.S.
Sanctions Programs. The Office of Foreign Assets Control administers country-based sanctions programs and list-based sanctions programs, including the program on Liberia (former regime of Charles Taylor).

In addition to the individual named above, Jeff Phillips (Assistant Director), Jason Bair, Lisa Reijula, Katherine Brentzel, Martin de Alteriis, and Mary Moutsos made key contributions to this report. Elizabeth Curda, Karen Deans, Cardell Johnson, Barbara Keller, and Hugh Paquette also contributed to the report.
In 2004, Congress combined preexisting and newly created units to form the Office of Terrorism and Financial Intelligence (TFI) within the Department of the Treasury (Treasury). TFI's mission is to integrate intelligence and enforcement functions to (1) safeguard the financial system against illicit use and (2) combat rogue nations, terrorist facilitators, and other national security threats. In the 5 years since TFI's creation, questions have been raised about how TFI is managed and allocates its resources. As a result, GAO was asked to analyze how TFI (1) implements its functions, particularly in collaboration with interagency partners, (2) conducts strategic resource planning, and (3) measures its performance. To conduct this analysis, GAO reviewed Treasury and TFI planning documents, performance reports, and workforce data, and interviewed officials from Treasury and its key interagency partners.

TFI undertakes five functions, each implemented by a TFI component, in order to achieve its mission. TFI officials cite the analysis of financial intelligence as a critical part of TFI's efforts because it underlies TFI's ability to utilize many of its tools. They said that the creation of OIA was critical to Treasury's ability to effectively identify illicit financial networks. To achieve its mission, TFI's five components often work with each other, other U.S. government agencies, the private sector, or foreign governments. Officials from TFI and its interagency partners cited strong collaboration in many areas, such as effective information sharing between FinCEN and the Justice Department (Justice). Officials differed, however, about the quality of interagency collaboration involving international forums. Treasury officials who led this collaboration stated that it runs smoothly and that they were unaware of any significant concerns, while Justice and State officials reported declining collaboration and unclear mechanisms to enhance or sustain it.
While TFI and some of its components have conducted selected strategic resource planning activities, TFI as a unit has not fully adopted key practices that enhance such efforts. For example, TFI and its components have produced multiple strategic planning documents in recent years, but the objectives in some of these documents are not clearly aligned with resources needed to achieve them. As a result, it may be unclear whether TFI has sufficient resources to address its objectives. Also, though TFI has undertaken some workforce planning activities, it lacks a process for performing comprehensive strategic workforce planning. Thus, it is unclear whether TFI is able to effectively address persistent workforce challenges. Also, TFI has not yet developed appropriate performance measures, changing their number and substance each year. Though TFI's current measures fully address many attributes of effective performance measures, they do not cover all TFI core program activities. TFI officials acknowledge the need for improvement and have worked since 2007 to develop one overall performance measure to assess TFI. Yet questions remain about when TFI will implement its new measure and whether it will effectively gauge TFI's performance.
From fiscal year 2002 to fiscal year 2008, the U.S. government provided approximately $16.5 billion for the training and equipping of Afghan National Security Forces. State and Defense officials told us they will request over $5.7 billion to train and equip the Afghan army and police in fiscal year 2009. The goal of these efforts is to transfer responsibility for the security of Afghanistan from the international community to the Afghan government. As part of this effort, from June 2002 through June 2008, CSTC-A obtained about 380,000 small arms and light weapons from the United States and other countries for the Afghan army and police. The United States purchased over 240,000 of these weapons for about $120 million and shipped them to Afghanistan beginning in December 2004. Also, CSTC-A reported that it coordinated the donation of about 135,000 additional weapons from 21 countries, which valued their donations at about $103 million (see app. II). Figure 1 illustrates the number of weapons obtained for ANSF by USASAC, Navy IPO, and international donors since June 2002. The United States and international donors have provided rifles, pistols, machine guns, grenade launchers, shotguns, rocket-propelled grenade launchers, and other weapons. About 80 percent of the U.S.-procured weapons were “non-standard” weapons, which are not typically supplied by Defense. Many non-standard weapons, including about 79,000 AK-47 rifles, were received from former Warsaw Pact countries or were obtained from vendors in those countries. (See fig. 2 for details on U.S.-procured weapons shipped to Afghanistan for ANSF.) USASAC and Navy IPO procured most of the 242,000 weapons for ANSF through an adaptation of the Foreign Military Sales (FMS) program referred to by Defense as “pseudo-FMS.” As in traditional FMS, pseudo-FMS procurements are overseen by DSCA.
However, in contrast to traditional FMS procurements, for Afghanistan, Defense primarily used funds appropriated by the Congress for the Afghanistan Security Forces Fund to purchase weapons to train and equip ANSF. USASAC procured about 205,000 (85 percent) of these weapons, including about 135,000 non-standard weapons purchased from four U.S.-based contractors. Navy IPO provided the remaining 37,000 (15 percent) M-16 rifles for the Afghan National Army. After procuring weapons for ANSF, Defense or its contractors transported them to Afghanistan by air, and CSTC-A received the weapons at Kabul International Airport. The Afghan National Army transported the weapons from the airport to one of two central storage depots in Kabul— one for the Afghan National Army and another for the Afghan National Police. Due to the limited operational capacity of the Afghan army and police and the extremely hostile environment in which they operate, CSTC-A retains control and custody of the weapons provided by the United States and international donors during storage at the central depots until the weapons are issued to ANSF units. In addition to maintaining the security and control of weapons stored at the central depots, CSTC-A trains ANSF in inventory management and weapons accountability. To this end, the central depots are staffed by U.S. and coalition military personnel, U.S. contractors, contract Afghan staff, and ANSF personnel. According to DSCA officials, equipment provided to ANSF is subject to end use monitoring, which is meant to provide reasonable assurances that the ANSF is using the equipment for its intended purposes. CSTC-A serves as the security assistance organization (SAO) for Afghanistan, with responsibility for monitoring the end use of U.S.-procured weapons and other equipment provided to ANSF, among other security assistance duties. 
DSCA’s Security Assistance Management Manual provides guidance for end use monitoring, which is classified as either “routine” or “enhanced,” depending on the sensitivity of the equipment and other factors, as follows:

Routine end use monitoring. For non-sensitive equipment provided to a trusted partner, DSCA guidance calls for SAOs to conduct routine monitoring in conjunction with other required security assistance duties. As such, according to DSCA officials, DSCA expects SAOs to record relevant end use monitoring observations made during interactions with host country military and defense officials, such as visits to defense facilities, meetings or telephone conversations, military ceremonies, and dignitary visits.

Enhanced end use monitoring. For sensitive defense articles and technology transfers made within sensitive political situations, DSCA guidance calls for more intensive and formal monitoring. This includes providing DSCA with equipment delivery records with serial numbers, conducting routine physical inventories of the equipment by serial number, and quarterly reporting on inventory results.

Figure 3 illustrates the accountability process for weapons that CSTC-A provides to ANSF.

Defense did not establish clear guidance on what accountability procedures apply when it is handling, transporting, and storing weapons obtained for ANSF through U.S. procurements and international donations. As a result, our tests and analysis of inventory records show significant lapses in accountability for these weapons. Such accountability lapses occurred throughout the weapons supply process. First, when USASAC and CSTC-A initially obtained weapons for ANSF, they did not record all the corresponding serial numbers. Second, USASAC and CSTC-A did not maintain control or visibility over U.S.-procured weapons during transport to the two ANSF central storage depots in Kabul.
Third, CSTC-A did not maintain complete and accurate inventory records or perform physical inventories of weapons stored at the central depots. Finally, inadequate U.S. and ANSF staffing at the central depots along with poor security and persistent management challenges have contributed to the vulnerability of stored weapons to theft or misuse. These lapses have hampered CSTC-A’s ability to detect weapons theft or other losses. CSTC-A has recently taken steps to correct some of the deficiencies we identified, but CSTC-A has indicated that its continued implementation of the new accountability procedures is not certain, considering staffing constraints and other factors. Defense did not clearly establish what accountability procedures applied to the physical security of weapons intended for ANSF. As a result, the Defense organizations involved in providing weapons for ANSF, including DSCA, USASAC, Navy IPO, U.S. Central Command, and CSTC-A, did not have a common understanding of what accountability procedures to apply to these weapons while they were in U.S. control and custody. Defense guidance on weapons accountability lays out procedures for Defense organizations to follow when handling, storing, protecting, securing, and transporting Defense-owned weapons. These procedures include (1) serial number registration and reporting and (2) 100 percent physical inventories of weapons stored in depots by both quantity and serial number at least once annually. The objective of serial number registration and reporting procedures, according to Defense guidance, is to establish continuous visibility over weapons through the various stages of the supply process, including “from the contractor to depot; in storage.” However, Defense did not specifically direct U.S. 
personnel to apply these or any alternative weapons accountability procedures for the weapons in their control and custody intended for ANSF, and CSTC-A officials we spoke to were uncertain about the applicability of existing Defense guidance. In August 2008, the Under Secretary of Defense for Intelligence emphasized the importance of safeguarding weapons in accordance with existing accountability guidance until they are formally transferred to ANSF, stating that “the security of conventional [arms, ammunition, and explosives] is paramount, as the theft or misuse of this material would gravely jeopardize the safety and security of personnel and installations world-wide.” However, in October 2008, Defense’s Inspector General reported that U.S. Central Command had not clearly defined procedures for accountability, control, and physical security of U.S.-supplied weapons to ANSF, and as a result, misplacement, loss, and theft of weapons may not be prevented. The Inspector General recommended, among other things, that U.S. Central Command issue formal guidance directing the commands and forces in its area of responsibility, including CSTC-A, to apply existing Defense weapons accountability procedures. U.S. Central Command and the Office of the Under Secretary of Defense for Intelligence concurred with this recommendation. Nonetheless, U.S. Central Command officials we spoke to in December 2008 did not have a common understanding of when formal transfer of the weapons to ANSF is considered to have occurred, and hence up to what point to apply Defense accountability procedures, if at all. As of December 2008, U.S. Central Command had not decided what new guidance to issue. In July 2007, we made Defense and the Multinational Force-Iraq aware that they had not specified which accountability procedures applied for weapons provided to Iraq under the train-and-equip program in that country. 
To help ensure that U.S.-funded equipment reaches the Iraqi security forces as intended, we recommended that the Secretary of Defense determine which accountability procedures should apply to that program. In January 2008, the Congress passed legislation requiring that no defense articles may be provided to Iraq until the President certifies that a registration and monitoring system has been established and includes, among other things, the serial number registration of all small arms to be provided to Iraq, and a detailed record of the origin, shipping, and distribution of all defense articles transferred under the Iraq Security Forces Fund or any other security assistance program. On the basis of our data analysis and tests of weapons inventory records, we estimate that USASAC and CSTC-A did not maintain complete records for about 87,000 weapons—about 36 percent of over 242,000 weapons they procured for ANSF and shipped from December 2004 through June 2008. For about 46,000 weapons, USASAC did not maintain serial number records—information fundamental to weapons accountability—and for an estimated 41,000 weapons, CSTC-A did not maintain documentation on the location or disposition, based on our testing of a random sample of available serial number records. Weapons for which CSTC-A could not provide complete accountability records were not limited to any particular type of weapon or a specific shipment period. Records were missing for six of the seven types of weapons we tested and from shipments made during every year from 2004 to 2008. In addition, CSTC-A did not maintain complete or reliable records for the weapons it reported it had obtained from international donations from June 2002 through June 2008. According to CSTC-A, this totals about 135,000 weapons. USASAC and Navy IPO records indicate that they procured over 242,000 weapons and shipped them to Afghanistan from December 2004 through June 2008. 
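The 36 percent figure reported above combines the two estimates of incomplete records; a minimal check of the arithmetic, using the rounded figures from the text:

```python
# Rough check of the figures reported in the text (all approximate).
missing_serials = 46_000   # weapons with no USASAC serial number records
missing_location = 41_000  # estimated from testing a random sample of records
total_shipped = 242_000    # weapons procured and shipped, Dec. 2004-June 2008

incomplete = missing_serials + missing_location
share = incomplete / total_shipped
print(incomplete)          # 87000
print(round(share * 100))  # 36
```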
However, USASAC did not record and maintain the serial numbers for over 46,000 of the weapons it purchased. USASAC’s records were incomplete because it did not require contractors to submit serial numbers for non-standard weapons they provided—a standard practice in traditional Foreign Military Sales. In July 2008, USASAC indicated that it would begin recording serial numbers for all weapons it procures for ANSF. (See app. III for a timeline of key events relating to accountability for ANSF weapons and other sensitive equipment.) However, as of December 2008, USASAC had not yet included provisions in its procurement contracts requiring the vendors of non-standard weapons to provide these serial numbers. Furthermore, CSTC-A did not record the serial numbers for the weapons it received from international donors and stored in the central depots in Kabul for eventual distribution to ANSF. In a July 2007 memorandum, the Commanding General of CSTC-A noted that for international donations there was “no shipping paperwork to confirm receipt, and equipment was not inventoried at arrival for validation.” By not recording serial numbers for weapons upon receipt, USASAC and CSTC-A could not verify the delivery and subsequent control of weapons in Afghanistan. In July 2008, CSTC-A began to record serial numbers for all the weapons it received, including U.S. procurements and international donations. However, CSTC-A had indicated that its continued recording of serial numbers was not certain. In the standard operating procedures it established in July 2008, CSTC-A indicated that it would record these numbers “if conditions are favorable with enough time and manpower allotted to inventory.” In December 2008, CSTC-A officials told us that to date they were fully implementing these new procedures.

USASAC, Navy IPO, CSTC-A, Defense shippers, and contractors have been involved in arranging the transport of U.S.-procured weapons into Kabul by air.
However, these organizations did not communicate adequately to ensure that accountability was maintained over weapons during transport. In particular, according to CSTC-A officials:

USASAC and Navy IPO did not always provide CSTC-A with serial number records for weapons shipped to Afghanistan against which CSTC-A could verify receipt.

Defense shippers sometimes split weapons shipments among multiple flights, making it difficult for CSTC-A to reconcile partial shipments received at different times with the information the suppliers provided for the entire order.

Suppliers did not always label weapons shipments clearly, leading to confusion over their contents and intended destinations.

CSTC-A did not always send confirmation of its receipt of weapons to the supplying organizations.

Without detailed information about weapons shipments, it was difficult for USASAC, Navy IPO, and CSTC-A to detect discrepancies, if any, between what weapons suppliers reported as shipped and those CSTC-A received.

According to CSTC-A, when weapons arrived at the Kabul Afghanistan International Airport, CSTC-A personnel typically identified and counted incoming pallets of weapons but did not count individual weapons or record serial numbers. CSTC-A then temporarily gave physical custody of the weapons to the Afghan National Army for unescorted transport from the airport to the central depots in Kabul. Because CSTC-A did not conduct physical inventory checks on weapons arriving at the airport, due to security concerns at that facility, CSTC-A had limited ability to ensure that weapons were not lost or stolen in transit to the depots.

After the Afghan National Army transported weapons to the central depots, CSTC-A did not document the transfer of title for weapons to ANSF. Since no Afghan officers were present at the depots to take possession of the weapons, CSTC-A personnel received the weapons and processed them into inventory for storage.
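A minimal sketch of the kind of shipment reconciliation that complete serial-number records would have permitted; the serial numbers below are invented for illustration:

```python
# Reconciling a supplier's reported shipment against serials recorded on
# receipt flags both losses in transit and unexpected items.
# All serial numbers here are hypothetical.
shipped = {"AK4701", "AK4702", "AK4703", "AK4704"}   # supplier-reported manifest
received = {"AK4701", "AK4703", "AK4704", "AK4705"}  # recorded at the depot

missing_in_transit = shipped - received  # reported shipped, never logged in
unexpected = received - shipped          # logged in, not on the manifest

print(sorted(missing_in_transit))  # ['AK4702']
print(sorted(unexpected))          # ['AK4705']
```

Without serial numbers from the supplier (the left-hand set) or confirmation of receipt (the right-hand set), neither comparison is possible, which is the gap the report describes.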
Although Defense did not provide direction to CSTC-A on how and when to transfer title to ANSF, CSTC-A officials told us they considered title transfer to have occurred, without any formal documentation, when information about the weapons was typed into computer inventory systems at the central depots. In February 2008, a revision to DSCA’s Security Assistance Management Manual called for U.S. government officials delivering equipment to a foreign nation under pseudo-FMS to keep documentation showing when, where, and to whom delivery was made and report this information to the military organization responsible for procurement. CSTC-A officials told us that they were not certain whether this revised guidance applied to ANSF weapons and therefore have not provided any title transfer documentation to USASAC or Navy IPO, the procuring organizations. However, regardless of how and when title passed to ANSF and how title transfer was documented, because ANSF officials were not present at the depots to take possession, CSTC-A retained control and custody of the weapons at the depots.

CSTC-A did not maintain complete and accurate inventory records for weapons at the central storage depots and allowed poor security to persist. Until July 2008, CSTC-A did not track all weapons at the depots by serial number or conduct routine physical inventories. Moreover, CSTC-A could not identify and respond to incidents of actual or potential compromise, including suspected pilferage, due to poor security and unreliable data systems. Specific gaps in accountability controls include the following:

Incomplete serial number recording. For over 5 years, CSTC-A stored weapons in the central depots and distributed them to ANSF units without recording their serial numbers. In August 2007, nearly 10 months after CSTC-A’s Commanding General mandated serial number control, CSTC-A began registering weapons by serial number as they were issued to ANSF units.
While this established some accountability at the point of distribution, thousands of weapons under CSTC-A control had no uniquely identifiable inventory record. CSTC-A initiated comprehensive serial number tracking in July 2008, recording the serial numbers of all weapons in inventory at that time and beginning to register additional weapons upon receipt at the central depots. Nonetheless, CSTC-A officials told us that staff shortages made serial number recording challenging.

Lack of physical inventories. CSTC-A did not conduct its first full inventory of weapons in the central depots until June 2008. Without conducting regular physical inventories, it was difficult for CSTC-A to maintain accountability for weapons at the depots and detect weapons losses.

Poor security. CSTC-A officials have reported concerns about the trustworthiness of Afghan contract staff and guards at the central depot that serves the Afghan National Army. We also observed deficiencies in facility security at this depot, including Afghan guards sleeping on duty and missing from their posts. Demonstrating the importance of conducting physical inventories, in June 2008, within 1 month of completing its first full weapons inventory, CSTC-A officials identified the theft of 47 pistols from this depot. CSTC-A officials also told us that a persistent lack of CSTC-A and responsible ANSF personnel at the central depots had increased the vulnerability of inventories to pilferage.

Unreliable inventory information systems. The information systems CSTC-A uses for inventory management at the central depots are rudimentary and have introduced data reliability problems. CSTC-A officials told us that for items received before 2006, they had only “limited data” from manual records at the Afghan National Army central depot. In 2006, CSTC-A’s contractor installed a commercial-off-the-shelf inventory management database system.
However, the system permits users to enter duplicate serial numbers, allowing data entry mistakes to compromise critical data. Furthermore, due to a limited number of user licenses, multiple users enter information using the same account, resulting in a loss of control and accountability for key inventory management records. CSTC-A also established an Excel spreadsheet record-keeping system in 2006 for the central depot where Afghan National Police weapons are stored. However, training of Afghan National Police personnel at that depot has not yet begun, and training Afghan National Army personnel in the use of depot information systems has been problematic due to illiteracy and a lack of basic math skills. In a report about operations at the central depot that serves the Afghan National Army, CSTC-A’s logistics training contractor noted that only “one in four [Afghan National Army personnel] have the basic education to operate either the manual or automated systems.” According to CSTC-A, inadequate staffing of U.S. and Afghan personnel at the central storage depots and persistent management concerns have contributed to the vulnerability of stored weapons to theft or misuse. Although CSTC-A originally envisioned that ANSF would assume responsibility for the majority of central depot operations, ANSF has not asserted ownership of the central depots as planned, leaving U.S. personnel to continue exercising control and custody over the stored weapons. In addition, CSTC-A officials told us this resulted in ambiguities regarding roles and responsibilities and increased risk to stored weapons supplies. Specific challenges in this area include the following: Difficulty providing adequate U.S. staff to maintain full accountability. CSTC-A officials told us that the increasing volumes of equipment moving through the central depots had compounded the management challenges they faced, which included insufficient U.S. 
personnel on site to keep up with the implementation of equipment accountability procedures. They specifically cited staff shortages as having limited CSTC-A’s capacity to conduct full depot inventories, maintain security, and invest in the training of ANSF personnel. Lack of accountable Afghan officers and staff. CSTC-A accountability procedures call for ANSF officers to be on site at the central depots to take responsibility for ANSF property. However, according to CSTC-A, the Afghan ministries did not consider the central depots to be ANSF facilities, given the high level of CSTC-A control. Thus, ANSF was reluctant to participate in central depot operations and did not post any officers or sufficient Afghan staff to the depots. According to CSTC-A officials, these problems resulted in ambiguities regarding roles and responsibilities at the central depots and placed an increased burden on limited U.S. forces to fulfill mandatory accountability and security procedures. Difficulties raising the capacity of ANSF depot personnel. According to CSTC-A officials, efforts to develop the capabilities of ANSF personnel to manage the central depots have been hampered by the lack of basic education or skills among ANSF personnel and frequent turnover of Afghan staff. As of December 2008, no Afghan National Police personnel had been trained at the police depot. Contractors responsible for Afghan National Army equipment accountability training told us that their efforts have been hampered by the Afghans’ reluctance to attend training and by a lack of basic literacy and math skills needed to carry out depot operations. CSTC-A officials also told us that their embedded military trainers were frequently unable to focus on training and mentoring at the Afghan National Army depot, given their operational imperatives. CSTC-A and State have deployed hundreds of U.S. 
military trainers and contract mentors to help ANSF units, among other things, establish and implement equipment accountability procedures. Although CSTC-A has instituted a system for U.S. and coalition military trainers to assess the logistics capacity of ANSF units, they have not always assessed equipment accountability capabilities specifically. However, as part of their reporting to CSTC-A and State, contract mentors have documented significant weaknesses in the capacity of ANSF units to safeguard and account for weapons. As a result, the weapons CSTC-A has provided are at serious risk of theft or loss. Furthermore, CSTC-A did not begin monitoring the end use of sensitive night vision devices until about 15 months after issuing them to Afghan National Army units. CSTC-A has recognized the critical need to develop ANSF units’ capacity to account for weapons and other equipment issued to them. In February 2008, CSTC-A acknowledged that it was issuing equipment to Afghan National Police units before providing training on accountability practices and ensuring that effective controls were in place. In June 2008, Defense reported to the Congress that it was CSTC-A’s policy not to issue equipment to ANSF units unless appropriate supply and accountability procedures were verified. As of June 2008, CSTC-A employed over 250 U.S. military or coalition personnel and contractors to advise ANSF on logistics matters, including establishing and maintaining a system of accountability for weapons. CSTC-A has also helped the Afghan Ministries of Defense and Interior establish decrees, modeled after U.S. regulations, requiring ANSF units to adopt accountability procedures. These procedures include tracking weapons by serial number using a “property book” to record receipt and inventory information, and conducting routine physical inventories of weapons. 
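The core property-book controls described in these decrees, registering each weapon by serial number on receipt and reconciling routine physical inventories against the book, can be illustrated with a minimal sketch. The record layout, class name, and serial numbers below are hypothetical illustrations and do not represent any actual CSTC-A or ANSF system.

```python
# A minimal, hypothetical sketch of property-book controls: register each
# weapon by serial number on receipt, reject duplicates, and reconcile a
# physical inventory against the book to flag losses or unrecorded items.

class PropertyBook:
    def __init__(self):
        self.records = {}  # serial number -> receipt details

    def register_receipt(self, serial, weapon_type, received_on):
        # Each weapon must have a uniquely identifiable inventory record,
        # so duplicate serial numbers are rejected at data entry.
        if serial in self.records:
            raise ValueError(f"duplicate serial number: {serial}")
        self.records[serial] = {"type": weapon_type, "received_on": received_on}

    def reconcile(self, observed_serials):
        """Compare a physical inventory count against the property book."""
        observed = set(observed_serials)
        recorded = set(self.records)
        return {
            "missing": sorted(recorded - observed),     # on the book, not found
            "unrecorded": sorted(observed - recorded),  # found, not on the book
        }

book = PropertyBook()
book.register_receipt("AK-0001", "rifle", "2008-07-01")
book.register_receipt("AK-0002", "rifle", "2008-07-01")
book.register_receipt("PI-0047", "pistol", "2008-07-02")

# A routine physical inventory that fails to locate the pistol:
result = book.reconcile(["AK-0001", "AK-0002"])
print(result["missing"])  # ['PI-0047']
```

Without the routine reconciliation step, a record-keeping system of this kind can only show what was received, not whether the items are still on hand, which is the gap the decrees were meant to close.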
CSTC-A and State, with support from their respective contractors, MPRI and DynCorp, have conducted training for Afghan National Army and Afghan National Police personnel on the implementation of these decrees. CSTC-A has also assigned contract mentors and U.S. and coalition embedded trainers to work closely with property book officers and other logistics staff in ANSF units to improve accountability practices. In addition, State assigns contract mentors to monitor Afghan National Police units that have received accountability training. These mentors visit the units and evaluate, among other things, the implementation of basic accountability procedures and concepts, such as maintenance of property books and weapons storage rooms. We previously reported that Defense has cited significant shortfalls in the number of fielded embedded trainers and mentors as the primary impediment to advancing the capabilities of the Afghan Security Forces. According to information provided by CSTC-A officials, as of December 2008, CSTC-A had only 64 percent of the 6,675 personnel it required to perform its mission overall and only about half of the 4,159 mentors it required. While CSTC-A has established a system for assessing the logistics capacity of ANSF units, it has not consistently assessed or verified ANSF’s ability to properly account for weapons and other equipment. Afghan National Army. As Afghan National Army units achieve greater levels of capability, embedded U.S. and coalition military trainers are responsible for assessing and validating their progress. Trainers used various checklists in 2008 to assess and validate Afghan National Army units. One checklist we reviewed addressed seven dimensions of logistics capacity and performance, but did not specifically mention accountability for weapons or other equipment. 
The assessment category in the checklist most relevant to equipment accountability was a rating on whether a unit “understands the logistical process and utilizes it with reasonable effectiveness.” Another checklist we reviewed addressed 15 dimensions of “sustainment operations” and was used to assess units’ overall demonstration of logistics management capacity and “ability to effectively receive, store, and issue supplies.” However, the checklist did not address weapons or equipment accountability specifically. Furthermore, more detailed notes accompanying the completed checklists we reviewed provided virtually no information on equipment accountability as a factor in the logistics ratings the CSTC-A training team assigned to the unit. Afghan National Police. CSTC-A has also introduced a monthly assessment tool to be used by its mentors to evaluate Afghan National Police capability and identify strengths and weaknesses. Prior to June 2008, CSTC-A did not specifically evaluate the capacity of police units to account for weapons and other equipment. In June 2008, CSTC-A changed the format of its police assessment checklist to specifically address four dimensions of equipment accountability. According to the reformatted assessments we reviewed, as of September 30, 2008, some trained and equipped Afghan National Police units had not yet implemented accountability procedures required by the Afghan Ministry of Interior. These assessments indicated that of the first seven police districts to receive intensive training and weapons under CSTC-A’s Focused District Development Program, which began in November 2007, two districts were not maintaining property accountability, including property books, and one was not conducting audits and physical inventories periodically or when directed. Contract mentors employed by CSTC-A and State have reported extensively on weaknesses they observed in ANSF units’ capacity to safeguard and account for weapons and other equipment. 
Reports we reviewed, prepared by MPRI and DynCorp mentors between October 2007 and August 2008, indicated that ANSF units throughout Afghanistan had not implemented the basic property accountability procedures required by the Afghan Ministries of Defense and Interior. Although these reports did not address accountability capacities in a consistent manner that would allow a systematic or comprehensive assessment of all units, they did highlight common problems relating to weapons accountability, including a lack of functioning property book operations and poor physical security. Lack of functioning property book operations. Mentors reported that many Afghan army and police units did not properly maintain property books, which are fundamental tools used to establish equipment accountability and are required by Afghan ministerial decrees. In a report dated March 2008, an MPRI mentor to the property book officer for one Afghan National Army unit stated, “for 3 years, the unit property books have not been established properly” and that “a lack of functionality existed in every property book operation.” Another report, from March 2008, concluded, “equipment accountability and equipment maintainability is a big concern; equipment is often lost, damaged, or stolen, and the proper procedures are not followed to properly document and/or account for equipment.” In a 2008 MPRI quarterly progress report on Afghan National Police in Kandahar, a mentor noted that property book items were issued but not posted to any records, because personnel did not know their duties and responsibilities. The report further states that “at present the property managers are not tracking any classes of supplies at all levels” and that “ANSF is very basic in its day to day function,” exhibiting no consideration for property accountability. Poor security. 
MPRI reports also indicated that some Afghan National Police units did not have facilities adequate to ensure the physical security of weapons and protect them against theft in a high-risk environment. For example, a March 2008 MPRI report on Afghan National Police in one northern province stated that the arms room of the police district office was behind a wooden door and had only a miniature padlock, and that this represented “basically the same austere conditions as in the other districts.” Defense and State contractor reports identified various causes of ANSF accountability weaknesses, including illiteracy, corruption, desertion, and unclear guidance from Afghan ministries. Illiteracy. Mentors reported that widespread illiteracy among Afghan army and police personnel had substantially impaired equipment accountability. For instance, a March 2008 MPRI report on an Afghan National Army unit noted that illiteracy was directly interfering with the ability of supply section personnel to implement property accountability processes and procedures, despite repeated training efforts. In July 2008, a police mentor in the Zari district of Balkh province stated that, “a lack of personnel [at the district headquarters] who can read and write is hampering efficient operations,” and added that there is currently one literate person being mentored to take charge of logistics. In addition, an August 2008 DynCorp report on the Afghan National Police noted that in Kandahar, “concerns [were] expressed over maintaining control over the storage facility keys. He cannot read or write, does not record anything that is being given out or have a request form for supplies filled out. [He] is the same individual that was handing out automatic weapons to civilians the previous week.” Corruption. Reports of alleged theft and unauthorized resale of weapons are common. 
During 2008, DynCorp mentors reported multiple instances of Afghan National Police personnel, including an Afghan Border Police battalion commander in Khost province, allegedly selling weapons to anti-coalition forces. In a March 2008 report, mentors noted that despite repeated requests, the Afghan National Police Chief Logistical Officer for Paktika province would not produce a list of serial numbers for weapons on hand. The DynCorp mentors suggested this reluctance to share information could be part of an attempt to conceal inventory discrepancies. In addition, a May 2008 DynCorp report on police cited corruption in Helmand as that province’s most significant problem, noting that the logistics officer had been named in all allegations of theft, extortion, and deceit reported to mentors by their Afghan National Police contacts. Desertion. DynCorp mentors also reported cases of desertion in the Afghan National Police, which resulted in the loss of weapons. For instance, in July 2008, mentors reported that when Afghan Border Police officers at a Faryab province checkpoint deserted to ally themselves with anti-coalition forces, they took all their weapons and two vehicles with them. Another DynCorp mentor team training police in Ghazni province reported in July 2008 that 65 Afghan National Police personnel had deserted and would not be coming to the base to be processed. The police officers that did arrive came without their issued weapons. Unclear guidance. MPRI mentors reported that Afghan ministry logistics policies were not always clear to Afghan army and police property managers. An MPRI report dated April 2008 stated that approved Ministry of Interior policies outlining material accountability procedures were not widely disseminated and many logistics officers did not recognize any of the logistical policies as rule. 
Additionally, an MPRI mentor to the Afghan National Army told us that despite the new decrees, Afghan National Army logistics officers often carried out property accountability functions using Soviet-style accounting methods and that the Ministry of Defense was still auditing army accounts against those defunct standards. Senior Afghan Ministry of Defense officials we met with also described similar accountability weaknesses. In a written statement provided in response to our questions about Afghan National Army weapons accountability, the ministry officials indicated that soldiers deserting with their weapons had a negative effect on the Afghan National Army and reduced supplies on hand in units. They also indicated that Afghan National Army units in the provinces of Helmand, Kandahar, and Paktika have been particularly vulnerable to equipment theft. According to DSCA officials, U.S.-procured weapons and sensitive equipment provided to ANSF are subject to end use monitoring, which is meant to provide reasonable assurances that ANSF is using the equipment for intended purposes. Under DSCA guidance, weapons are subject to routine end use monitoring, which, according to DSCA officials, entails making and recording observations on weapons usage in conjunction with other duties and during interactions with local defense officials. For specified sensitive defense items, such as night vision devices, DSCA guidance calls for additional controls and enhanced end use monitoring. This includes providing equipment delivery records with serial numbers to DSCA, conducting routine physical inventories, and reporting on quarterly inventory results. For night vision devices, this also includes the establishment of a physical security and accountability control plan. In July 2007, CSTC-A began issuing 2,410 night vision devices to Afghan National Army units without establishing the appropriate controls or conducting enhanced end use monitoring. According to U.S. 
Central Command, these devices pose a special danger to the public and U.S. forces if in the wrong hands. DSCA did not ensure that CSTC-A followed the end use monitoring guidance because CSTC-A purchased these devices directly and without the knowledge or involvement of DSCA officials. To address this, DSCA and CSTC-A established procedures in April 2008 to prohibit CSTC-A’s procurement of weapons and sensitive equipment in- country without DSCA involvement. In May 2008, CSTC-A first developed an end use monitoring plan that established both routine and enhanced monitoring procedures. The plan calls for the use of U.S. trainers and mentors embedded in ANSF units to provide reasonable assurances that the recipients are complying with U.S. requirements on the use, transfer, and security of the items. CSTC-A informed us that it began implementing the plan in July 2008, but noted it did not have sufficient staff or mentors to conduct the monitoring envisioned. CSTC-A officials told us they started to conduct and document routine end use monitoring for weapons provided to the Afghan police in 31 of Afghanistan’s 365 police districts. CSTC-A had not been able to undertake any monitoring in the remaining 334 police districts due to security constraints. During the course of our review, CSTC-A began following DSCA’s enhanced end use monitoring guidance for the night vision devices it had issued. CSTC-A started conducting inventories of these devices in October 2008, about 15 months after it began issuing them, and plans to conduct full physical inventories by serial number quarterly. As of December 2008, CSTC-A had accounted for all but 10 of the devices it had issued. DSCA and CSTC-A attributed this limited end use monitoring to a shortage of security assistance staff and expertise at CSTC-A, exacerbated by frequent CSTC-A staff rotations. 
Defense’s Inspector General similarly reported in October 2008 that CSTC-A did not have sufficient personnel with the necessary security assistance skills and experience and that short tours of duty and different rotation policies among the military services hindered the execution of security assistance activities. We also noted these problems in 2004, when we reported that the Office of Military Cooperation-Afghanistan, CSTC-A’s predecessor, did not have adequate personnel trained in security assistance procedures to support its efforts and that frequent personnel rotations were limiting Defense’s efforts to train key personnel in defense security assistance procedures and preserve institutional knowledge. CSTC-A officials told us that the addition of a USASAC liaison to the CSTC-A staff in Kabul had helped to offset some of these challenges, as the liaison was knowledgeable in security assistance procedures and had been able to provide some basic training for CSTC-A staff. Oversight and accountability for weapons is critical in high-threat environments, especially in Afghanistan, where potential theft and misuse of lethal equipment pose a significant danger to U.S. and coalition forces involved in security, stabilization, and reconstruction efforts. Because Defense organizations throughout the weapons supply chain have not had a common understanding of what procedures are necessary to safeguard and account for weapons, inventory records, including serial numbers, are not complete and accurate. As a result, Defense cannot be certain that weapons intended for ANSF have reached those forces. Further, weapons stored in poorly secured central depots are significantly vulnerable, and the United States has limited ability to detect the loss of these weapons without conducting routine inventories. 
Although CSTC-A established new weapons accountability procedures during the course of our review, it is not yet clear that, without a mandate from Defense and sufficient resources, CSTC-A will consistently implement these procedures. Because Afghan army and police units face significant challenges in controlling and accounting for weapons, it is essential that Defense enhance its efforts in working with ANSF units in this area. Systematically assessing ANSF’s ability to implement required weapons accountability procedures is particularly important for gaining reasonable assurances that ANSF units are prepared to receive and safeguard weapons as well as for evaluating overall progress in developing ANSF’s accountability capacity. Moreover, adequately monitoring night vision devices and other sensitive equipment after it is transferred to ANSF will help to ensure that such equipment is used for its intended purposes. As development of the Afghan security forces continues, it is vital that clear oversight and accountability mechanisms are in place to account for weapons and other sensitive equipment. To help ensure that the United States can account for weapons that it procures or receives from international donors for ANSF, we recommend that the Secretary of Defense establish clear accountability procedures for weapons in the control and custody of the United States, and direct USASAC, CSTC-A, and other military organizations involved in providing these weapons to (1) track all weapons by serial number and (2) conduct routine physical inventories. 
To help ensure that ANSF units can safeguard and account for weapons and other sensitive equipment they receive from the United States and international donors, we recommend that the Secretary of Defense direct CSTC-A to (1) specifically and systematically assess the ability of each ANSF unit to safeguard and account for weapons in accordance with Afghan ministerial decrees and (2) explicitly verify that adequate safeguards and accountability procedures are in place, prior to providing weapons to ANSF units, unless a specific waiver or exception is granted based on due consideration of practicality, cost, and mission performance. We also recommend that the Secretary of Defense devote the necessary resources to address the staffing shortages that hamper CSTC-A’s efforts to train, mentor, and assess ANSF in equipment accountability matters. Defense provided written comments on a draft of this report (see app. IV). Defense concurred with our recommendations and provided additional information on its efforts to help ensure accountability for weapons intended for the ANSF. Defense also provided technical corrections, which we incorporated into the report as appropriate. State did not provide comments. Defense concurred with our recommendation to establish clear accountability procedures for weapons intended for ANSF. It noted that Defense requirements and procedures exist for small arms tracking by serial number. However, Defense went on to state that DSCA, in conjunction with U.S. Central Command, has been directed to implement in Afghanistan congressionally mandated controls that Defense is implementing in Iraq. These include (a) the registration of serial numbers of all small arms, (b) an end-use monitoring program for all lethal assistance, and (c) the maintenance of detailed records for all defense articles transferred to Afghanistan. 
As we indicated in our report, Defense organizations did not have a common understanding of whether existing accountability procedures applied to weapons obtained for ANSF, underscoring the importance of these controls. Defense did not state when these measures would be implemented; however, if Defense follows through on these actions and, in addition, clearly requires routine inventories of weapons in U.S. custody and control, our concerns will be largely addressed. Defense also concurred with our recommendation to systematically assess each ANSF unit’s capacity to account for and safeguard weapons and to ensure that adequate procedures are in place prior to providing weapons. Defense indicated that embedded mentors and trainers are assessing ANSF units’ accountability capacity. It also stated that for the Afghan National Army, weapons are only issued with coalition mentors present to provide oversight at all levels of command; and for the Afghan National Police, most weapons are currently being issued to selected units that have received focused training, including instruction on equipment accountability. We note that at the time of our review, ANSF unit assessments did not systematically address the units’ capacity to safeguard and account for weapons in their possession. We also note that Defense has cited significant shortfalls in the number of personnel required to train and mentor ANSF units. Unless these matters are addressed, we are not confident the shortcomings we reported will be adequately resolved. Finally, Defense also concurred with our recommendation that it address the staffing shortfalls that hamper CSTC-A’s efforts to train, mentor, and assess ANSF in weapons accountability matters. Defense commented that it is looking into ways to address the shortages, but did not state how or when additional staffing would be provided. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of Defense and State and interested congressional committees. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7331 or johnsoncm@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To determine whether Defense and the Combined Security Transition Command-Afghanistan (CSTC-A) could account for weapons obtained, transported, stored, and distributed to Afghan National Security Forces (ANSF), we conducted the following work. We sought to determine which Defense accountability procedures are generally applicable to Defense equipment by reviewing documents and meeting with officials from U.S. Central Command in Tampa, Florida; Defense Security Cooperation Agency (DSCA) and Defense’s Office of Inspector General in Arlington, Virginia; and CSTC-A in Kabul, Afghanistan. We also reviewed relevant Defense regulations, instructions, and manuals. We compiled detailed information on 242,203 weapons the United States procured for ANSF and shipped to Afghanistan from December 2004 through June 2008. We identified the types, quantity, shipment dates, and cost of these weapons by reviewing and analyzing pseudo-FMS case documentation provided by DSCA and data provided by the U.S. Army Security Assistance Command (USASAC) in New Cumberland, Pennsylvania, and Navy IPO in Arlington, Virginia. 
To ensure we had a complete record of all weapons ordered and shipped during this time period, we checked USASAC and Navy shipment details against line-item details in Letters of Offer and Acceptance provided to us by DSCA. For each shipment of weapons we isolated in the USASAC and Navy International Programs Office (IPO) files, we compiled lists of serial numbers or determined the total number of weapons for which no serial number records were available. We identified 195,671 weapons for which USASAC and Navy IPO could provide serial numbers and 46,532 for which they could not. In some cases, quantities of weapons required by the Letter of Offer and Acceptance differed from those recorded as shipped; we followed up on these discrepancies with officials at USASAC, who explained that such differences were due to changes in market pricing between the time of the request and the time of purchase. We determined that the data were sufficiently reliable for the purposes of this report. To assess Defense’s ability to account for the location or disposition of weapons, we selected a stratified random probability sample of 245 weapons from the population of 195,671 U.S.-procured weapons for which Defense could provide serial numbers. The sample population of weapons included all years in which U.S.-procured weapons had been shipped to ANSF and seven specific categories of weapons obtained. Our random sample did not include certain miscellaneous weapon types, which we categorized as “other.” Each weapon in the population had a known probability of being included in our probability sample. We divided the weapons into two strata, based on the format of the weapons lists we obtained. About half of the serial numbers were available to us in electronic databases, allowing us to select a simple random sample of 96 weapons from those records. The remaining 98,462 serial numbers were provided to us in paper lists or electronic scans of paper files. 
From those records we selected a random systematic sample of 149 weapons by choosing a random start and selecting every subsequent 679th serial number. Each weapon selected in the sample was weighted in the analysis to account statistically for all the weapons in the population, including those that were not selected. In Afghanistan, we attempted either to physically locate each weapon in our sample or obtain documentation confirming that CSTC-A had recorded its issuance to ANSF or otherwise disposed of it. We used the results of our work to generalize to the universe of weapons from which we drew our sample and derive an estimated number of weapons for which CSTC-A cannot provide information on location or disposition. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our results as a 95 percent confidence interval (e.g. plus or minus 5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals we have reported will include the true values in the study. We compiled detailed information on the approximately 135,000 weapons CSTC-A obtained for ANSF from international donors. We identified the estimated dollar values, types, quantities, and sources of these weapons by analyzing records from the office of CSTC-A’s Deputy Commanding General for International Security Cooperation. We assessed the reliability of these data by interviewing CSTC-A officials knowledgeable about the data and by analyzing the records they provided to identify problems with completeness or accuracy. 
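The systematic-sampling step described above, a random start followed by every subsequent 679th serial number, and the estimation of a proportion with a 95 percent confidence interval can be sketched as follows. The serial numbers and the count of unaccounted-for weapons in this sketch are invented for illustration, and the interval shown uses a simple normal approximation rather than the weighted, stratified estimators used in our actual analysis.

```python
import math
import random

def systematic_sample(records, interval, seed=None):
    """Pick a random start in [0, interval), then take every interval-th record."""
    rng = random.Random(seed)
    start = rng.randrange(interval)
    return records[start::interval]

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation 95 percent confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Illustrative population of 98,462 serial numbers (the paper-record stratum),
# sampled at the interval of 679 described in the methodology.
population = [f"SN-{i:06d}" for i in range(98_462)]
sample = systematic_sample(population, interval=679, seed=42)

# Suppose 30 of the sampled weapons could not be located or documented.
unaccounted = 30
p, low, high = proportion_ci(unaccounted, len(sample))
print(f"sample size {len(sample)}, estimate {p:.1%} ({low:.1%} to {high:.1%})")
```

Because every record has the same chance of selection under a random start, the sample estimate can be projected to the full population, which is the logic behind the generalization and confidence statements in this appendix.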
CSTC-A officials told us the dollar amounts they track for the value of weapons donations had been provided by the donors and were of questionable accuracy, as they had not been independently verified by CSTC-A. We also reviewed CSTC-A’s records on the types, quantities, and sources of weapons donations. CSTC-A officials told us that due to a long-standing lack of accountability procedures for handling weapons donations received at the central storage depots, they had been unable to independently verify the quantities reported by donors. After our visit to Kabul, we continued to work closely with CSTC-A officials to identify additional data concerns. When we found discrepancies, such as data entry errors, we brought them to CSTC-A’s attention and worked with its officials to correct the discrepancies, to the extent that we could, before conducting our descriptive analyses. While CSTC-A’s procedures for ensuring the accuracy of these data have improved during the past year, documentation on procedures was lacking prior to March 2007, which made it impossible for us to independently assess the data’s accuracy. Because we still have concerns about the reliability of these data, we are only reporting them as background information and in an appendix to provide a sense of who donated the weapons and when. We documented weapons accountability practices and procedures by examining records and meeting with officials from DSCA, USASAC, Navy IPO, and CSTC-A—the organizations directly involved with obtaining, transporting, storing, and distributing weapons for ANSF. In Afghanistan, we observed weapons accountability practices at the Kabul Afghanistan International Airport and the two ANSF central storage depots in Kabul where weapons intended for the Afghan National Army and the Afghan National Police are stored before distribution to ANSF units. 
While at the central depots, we discussed weapons management with CSTC-A officials and mentors employed by MPRI, Defense’s ANSF development contractor. We also observed depot operations, including security procedures, storage conditions, and inventory information systems. In addition, we examined weapons inventory records at CSTC-A headquarters in Kabul. We met with officials from Defense’s Inspector General to discuss two audits it conducted during 2008 relating to weapons accountability in Afghanistan and reviewed a related report it issued in October 2008. To assess the extent to which CSTC-A has ensured that ANSF can properly safeguard and account for weapons and other sensitive equipment issued to it, we conducted the following work. We obtained information on ANSF weapons accountability practices by meeting with cognizant officials from the Afghan Ministries of Defense and Interior. The Ministry of Defense also provided written responses to our questions on this subject. In addition, we reviewed ministerial decrees documenting equipment accountability requirements applicable to the Afghan National Army and Afghan National Police and discussed the development and the implementation status of those decrees with CSTC-A and MPRI. We obtained information on CSTC-A’s efforts to train, mentor, and assess ANSF units on accountability for weapons and other equipment by reviewing documents and meeting with officials from CSTC-A and MPRI in Kabul. We reviewed all available weekly, monthly, quarterly, and ad hoc reports submitted by MPRI logistics mentors from October 2007 to August 2008 that included observations regarding ANSF equipment accountability practices. We also reviewed all available reports, including checklists and assessment tools, prepared by CSTC-A’s embedded military trainers to assess the logistics capabilities of Afghan army and police units. 
To gain a better understanding of ANSF weapons accountability practices and challenges, we visited an Afghan National Army commando unit near Kabul that had received weapons and night vision devices from CSTC-A and met with the unit’s property book officer and MPRI mentors assigned to that unit. Due to travel restrictions imposed by CSTC-A based on heightened security threats during our visit to Afghanistan, we were unable to travel outside of the Kabul area, as planned, to visit other ANSF units in the country that had also received weapons from CSTC-A. We obtained information on State’s efforts to ensure accountability for weapons provided to the Afghan National Police by reviewing documents and meeting with officials from the U.S. Embassy in Kabul and State’s Bureau of International Narcotics and Law Enforcement Affairs in Washington, D.C. We also reviewed all weekly reports submitted to State between January and August 2008 by State’s Afghan National Police development contractor, DynCorp, that included observations regarding police equipment accountability practices. We determined the end use monitoring procedures generally applicable to weapons transferred to foreign countries under Foreign Military Sales by reviewing DSCA’s Security Assistance Management Manual. We sought clarification on this guidance and views on its applicability to U.S. procured weapons and internationally donated weapons in Afghanistan from officials at DSCA and U.S. Central Command. We determined end use monitoring policies and practices in Afghanistan by reviewing documents and meeting with officials from U.S. Central Command, CSTC-A, DSCA, and State. In Afghanistan, we met with officials in CSTC-A’s Security Assistance Office, including the USASAC liaison to CSTC-A. We reviewed all available documentation of the end use monitoring CSTC-A had conducted as of December 2008 for weapons and other sensitive equipment, including night vision devices, provided to ANSF. 
Since June 2002, CSTC-A’s office of the Deputy Commanding General for International Security Cooperation has vetted, tracked, and coordinated the delivery of weapons donated to ANSF by the international community. CSTC-A officials reported to us that they had obtained about 135,000 weapons for ANSF in this manner, though we were unable to independently confirm that figure. CSTC-A officials also told us they had not evaluated the reliability of the dollar values assigned by donors for these weapons and noted that some quantities may be overstated, as many of the donated weapons were damaged or unusable. (See table 1 for a summary of the data CSTC-A reported to us on weapons provided by international donors.) While CSTC-A’s procedures for ensuring the accuracy of these data have improved during the past year, documentation was lacking prior to March 2007, which made it impossible for us to independently assess the data’s accuracy. Because we have concerns about the reliability of these data, we are only reporting them here to provide a sense of who donated the weapons. Included in CSTC-A’s records were details indicating that weapons donations have included rifles, pistols, light and heavy machine guns, grenade launchers, rocket-propelled grenade launchers, and mortars. According to this information, about 79 percent of the weapons donated were AK-47 assault rifles. Since international donors began providing weapons for ANSF in June 2002, CSTC-A and others have taken a variety of steps to improve accountability. Many of these steps occurred during the course of our review. Figure 4 provides a timeline of key events relating to accountability for ANSF weapons and other sensitive equipment.

Charles Michael Johnson, Jr., (202) 512-7331 or johnsoncm@gao.gov.

Key contributors to this report include Albert H. Huntington III, Assistant Director; James B. Michels; Emily Rachman; Mattias Fenton; James Ashley; Mary Moutsos; Joseph Carney; Etana Finkler; Jena Sinkfield; and Richard Brown.
The Department of Defense (Defense), through its Combined Security Transition Command-Afghanistan (CSTC-A) and with the Department of State (State), directs international efforts to train and equip Afghan National Security Forces (ANSF). As part of these efforts, the U.S. Army Security Assistance Command (USASAC) and the Navy spent about $120 million to procure small arms and light weapons for ANSF. International donors also provided weapons. GAO analyzed whether Defense can account for these weapons and ensure ANSF can safeguard and account for them. GAO reviewed Defense and State documents on accountability procedures, reviewed contractor reports on ANSF training, met with U.S. and Afghan officials, observed accountability practices, analyzed inventory records, and attempted to locate a random sample of weapons. Defense did not establish clear guidance for U.S. personnel to follow when obtaining, transporting, and storing weapons for the Afghan National Security Forces, resulting in significant lapses in accountability. While Defense has accountability requirements for its own weapons, including serial number tracking and routine inventories, it did not clearly specify whether they applied to ANSF weapons under U.S. control. GAO estimates USASAC and CSTC-A did not maintain complete records for about 87,000, or 36 percent, of the 242,000 U.S.-procured weapons shipped to Afghanistan. For about 46,000 weapons, USASAC could not provide serial numbers, and GAO estimates CSTC-A did not maintain records on the location or disposition of about 41,000 weapons with recorded serial numbers. CSTC-A also did not maintain reliable records for about 135,000 weapons it obtained for ANSF from 21 other countries. Accountability lapses occurred throughout the supply chain and were primarily due to a lack of clear direction and staffing shortages. 
During our review, CSTC-A began correcting some shortcomings, but indicated that its continuation of these efforts depends on staffing and other factors. Despite CSTC-A's training efforts, ANSF units cannot fully safeguard and account for weapons and sensitive equipment. Defense and State have deployed hundreds of trainers and mentors to help ANSF establish accountability practices. CSTC-A's policy is not to issue equipment without verifying that appropriate supply and accountability procedures are in place. Although CSTC-A has not consistently assessed ANSF units' ability to account for weapons, mentors have reported major accountability weaknesses, which CSTC-A officials and mentors attribute to a variety of cultural and institutional problems, including illiteracy, corruption, and unclear guidance. Further, CSTC-A did not begin monitoring the end use of sensitive night vision devices until 15 months after issuing them to Afghan National Army units.
The FAR requires that contracting officers provide for full and open competition in soliciting proposals and awarding government contracts. However, the FAR also recognizes that full and open competition is not always feasible, and authorizes contracting without full and open competition under certain conditions. Situations for which the FAR provides exceptions include only one responsible source and no other supplies or services will satisfy agency requirements; unusual and compelling urgency; industrial mobilization; engineering, developmental, or research capability, or expert services; international agreement; authorized or required by statute; national security; and public interest. The national security exception allows agencies to limit competition for a contract when the disclosure of the agency’s needs would compromise national security—not merely because the acquisition is classified or because access to classified materials is necessary. Further, the national security exception requires that agencies request offers from as many potential sources as practicable, although sole-source awards are permitted. DOD is the largest user of the national security exception, and a variety of entities within the department use the exception. In September 2010, DOD launched its Better Buying Power initiative, which among other goals, aims to promote effective competition in government contracting. As a result, promoting competition is a focus at DOD, according to an official from Defense Procurement and Acquisition Policy (DPAP), the office within OSD responsible for tracking DOD-wide procurement and competition metrics. As part of these efforts, DPAP holds quarterly meetings with competition advocates, who are officials designated to promote competition within DOD components. 
Generally, noncompetitive contracts must be supported by written justification and approval documents that contain sufficient facts and rationale to justify the use of the specific exception to full and open competition that is being applied to the procurement. These justifications must include, at a minimum, 12 elements specified by the FAR, as shown in table 1. The level of the official who must approve a justification is determined by the estimated total dollar value of the contract or contracts to which it will apply, as outlined in the FAR. The approval levels range from the local contracting officer for relatively small contract actions up to the agencywide senior procurement executive for contracts worth more than $85.5 million. The justifications can be made on an individual or class basis; a class justification generally covers programs or sets of programs and has a dollar limit and time period for all actions taken under the authority. The approval levels for the class justification are the same as those for an individual justification and are determined by the total estimated value of the class. Approval of individual contract actions under a class justification requires the contracting officer to ensure that each action taken under it is within the scope of the class justification. Based on data from FPDS-NG, DOD dollar obligations under the national security exception during fiscal years 2007 through 2010 were small relative to other exceptions to full and open competition. Out of the nearly $1.5 trillion that DOD obligated for all contracts during this period, 41 percent ($606.3 billion) was based on other than full and open competition, primarily through the seven FAR exceptions. However, only about $13 billion—or about 2 percent of DOD’s other than full and open competition obligations—was obligated under the national security exception. 
As figure 1 shows, the most common FAR exception used by DOD is “only one responsible source,” while other exceptions are used much less frequently. The three military departments were the largest users of the national security exception during fiscal years 2007 through 2010, according to the data reported in FPDS-NG, obligating about $12.7 billion. The Air Force made up 73.5 percent of all of DOD’s obligations under the exception, despite only accounting for about 18 percent of DOD’s total contract obligations during the same time period, as figure 2 illustrates. By contrast, non-military-department components accounted for about 4 percent of DOD’s use under the exception. During the same 4-year period, over 40 percent of DOD’s total obligations under the national security exception were for services, 37 percent for supplies and equipment, and about 22 percent for research and development, as shown in figure 3. Based on our analysis of FPDS-NG data, the military departments’ use of the exception varied both in the extent of use and the types of goods and services acquired. During fiscal years 2007 through 2010, the Air Force obligated $9.7 billion using the national security exception, nearly all by the Air Force Materiel Command. About half of the Air Force’s obligations under the national security exception were for services, such as logistical support and professional services, and the other half was primarily for supplies and equipment, such as communication equipment and aircraft components. The second largest user, the Army, obligated $2.5 billion, mostly by the Army Materiel Command and the Space and Missile Defense Command. More than 80 percent of the Army’s obligations under the exception were for research and development, mainly in space and missile systems and electronics and communication equipment. 
Finally, the Navy obligated almost $0.5 billion over the 4 fiscal years under the exception, mostly under Space and Naval Warfare Systems Command contracts. More than half of the Navy’s obligations under the exception were to procure services, such as transportation and repair services. Figure 4 shows the percent of obligations represented by each category of procurement within the military departments. DOD intelligence agencies often use the national security exception when contracting for supplies and services, but generally do not report contracting data to the OSD or to FPDS-NG. Two of the four DOD intelligence agencies—the National Reconnaissance Office (NRO) and the National Security Agency (NSA)—report using the exception for all their contracting activities. The other two intelligence agencies—the National Geospatial-Intelligence Agency (NGA) and the Defense Intelligence Agency (DIA)—reported using the exception for less than 10 percent of their total contracted obligations. Three of the intelligence agencies, NGA, DIA, and NSA, are exempt from reporting to FPDS-NG based on a memorandum from OSD. NRO is not covered by the memorandum, but also does not appear in FPDS-NG data. However, some of these agencies report overall competition statistics to OSD and participate in DOD-wide competition advocate meetings. In addition to the intelligence agencies, DOD Special Access Programs (SAP) use the national security exception, but generally do not report data to FPDS-NG. These are specially classified programs within the military departments and other DOD components that limit information to individuals with an explicit need to know. These programs impose safeguarding and access measures beyond those typically taken for information with the same classification level, such as secret and top secret. Most officials told us that, in general, these programs do not report data to FPDS-NG. 
Therefore, determining the extent to which these entities use the national security exception is not feasible due to the limited access these programs allow. However, like the DOD intelligence agencies, officials at one military department told us that they report overall competition statistics for SAP contracts to DOD. Specifically, Army Contracting Command officials who oversee SAP programs reported that they use the national security exception for nearly all contracting activity and they provide overall obligation totals and competition data to OSD. Classified data on contracts, agreements, and orders are excluded from being reported in FPDS-NG. However, DOD does not have a clear policy for excluding sensitive contracting data from being reported in FPDS-NG. While the memorandum from OSD exempts three of DOD’s intelligence agencies (NGA, DIA, and NSA) from reporting procurement data to FPDS-NG because of the sensitive nature of their procurement data, OSD and military department officials were not aware of a specific policy basis for excluding sensitive programs outside of the intelligence agencies. In addition to the exclusion of SAP procurement data, some DOD officials told us that contracts outside of SAP do not appear in FPDS-NG due to security concerns. Nevertheless, based on the contracts in our review, it appears that the information in FPDS-NG on contracting activities using the national security exception generally comes from programs that are sensitive but not fully classified. Some DOD officials, including at the OSD level, were unaware that some individual contracts could be excluded from FPDS-NG. By contrast, other officials expected all contracts using the national security exception to be excluded from FPDS-NG due to the sensitive nature of the procurements. As a result, it is unclear to what extent contracting information on SAPs and other highly sensitive contracting activities in DOD is included in FPDS-NG. 
Based on our review, it appears that most information on such programs is excluded. Further, according to DOD officials, decisions are made on a case-by-case basis to exclude individual contracts from FPDS-NG, but they were unsure of the policy basis for these exclusions. For most contracts we reviewed, DOD entities used a single justification and approval document that applies to multiple contracts—referred to in the FAR as a class justification—for national security exception contract actions. Of the 27 contracts we reviewed at the military departments, all 18 Air Force contracts cited class justifications, as did 4 of the 6 Army contracts. The 2 remaining Army contracts and all 3 Navy contracts we reviewed cited individual justifications. Among the contracts we reviewed, $3.3 billion in obligations during the period of fiscal years 2007 through 2010 used class justifications, while less than $0.1 billion was obligated during that period under individual justifications. Figure 5 shows the relationship between the individual contract files we reviewed and the type of justification used to support the national security exception, as well as the obligation amounts associated with each during this period. The Air Force Materiel Command (AFMC) comprises the majority of the Air Force’s use of the national security exception—about 72 percent of DOD’s total contract obligations under the exception as reported in FPDS-NG compared to 73.5 percent for the Air Force overall. Officials at the two AFMC centers that make up the majority of the command’s contracting under this exception reported that they cite class justifications for the vast majority of their national security exception contracting. The Air Force justifications we reviewed confirmed this, each covering contracts related to multiple systems within a program office. For example, one Air Force class justification had an obligation ceiling of about $8.7 billion for a 7-year period. 
The Army’s class justifications also covered multiple contracts, but were more focused on an individual system within the program office, and two of the three we reviewed had much lower obligation ceilings. Some of the intelligence agencies also use class justifications for the national security exception. NSA and NRO have class justifications that cover all of their contracting activity. NGA and DIA, by contrast, reported using individual justifications for contracts where they cite the national security exception. Class justifications reduce the steps required to proceed with individual contract actions that are not fully competitive. Each justification, individual or class, must be approved through the same process, with levels of approval specified by the FAR based on dollar value. However, once a class justification has been approved, the process for individual contract actions changes—an individual contract within the scope of the class justification can generally be approved for limited competition or sole-source award by the local procuring activity, as long as the amount is within the obligation ceiling of the justification. For instance, the Air Force obligated $915 million under an indefinite delivery/indefinite quantity contract for support and modification services on an existing aircraft and its related systems. Because this procurement was within the scope of a national security exception class justification, under the processes established in the FAR, the program office did not have to obtain approval for this noncompetitive acquisition from the Air Force’s senior procurement executive. According to contracting officials at an Air Force program office that has a class justification in place under the national security exception, the increased flexibility of their national security exception class justification helps them meet mission needs. 
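The approval routing just described can be sketched as follows. This is a deliberately simplified illustration, not a full model of the FAR 6.304 approval chain: the function, the dictionary shape, and the tier labels are hypothetical, and only the $85.5 million senior-procurement-executive threshold and the class-justification shortcut are taken from the text above.

```python
SPE_THRESHOLD = 85_500_000  # senior procurement executive review threshold

def approval_route(estimated_value, class_justification=None):
    """Sketch of who must approve a noncompetitive contract action.

    The contracting officer must still verify that the action falls
    within the *scope* of any class justification; that substantive
    check is not modeled here, only the dollar-ceiling check.
    """
    cj = class_justification
    if cj and cj["approved"] and estimated_value <= cj["remaining_ceiling"]:
        # Within an approved class justification: no new senior-level
        # justification approval is required for this action.
        return "local procuring activity (under approved class justification)"
    if estimated_value >= SPE_THRESHOLD:
        return "senior procurement executive"
    return "lower-level approval tier per dollar value"

# Notional class justification with a multi-billion-dollar ceiling.
cj = {"approved": True, "remaining_ceiling": 8_700_000_000}
route_with_cj = approval_route(915_000_000, cj)   # handled locally
route_without = approval_route(915_000_000)       # senior executive review
```

The contrast between the last two calls mirrors the $915 million example in the text: the same dollar value is approved locally when a class justification covers it, but would otherwise require the senior procurement executive.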
In the absence of a class justification, approval of an individual justification for a noncompetitive contract award takes time; officials with one program office cited an instance of an individual justification under a different FAR exception that was not yet approved 7 months after it was initiated. Figure 6 illustrates the review process for contract awards of $85.5 million or more under class and individual justifications. In some cases, the class justifications we reviewed included a list of firms authorized to participate, as well as anticipated obligation amounts for each firm over the applicable time period. For instance, one of the Air Force class justifications we reviewed listed about 40 firms, each with anticipated contract obligations of several million to several billion dollars during the 7-year time frame of the class justification. Despite the number of firms listed in the class justification, competition among them for a given contract award was rare—the contracts we reviewed under this justification typically stated that only one of the firms was capable of meeting the government’s requirements. Officials at one Air Force center said that amending their existing class justification to add new firms had proved difficult in the past, and noted that this can reduce competition by limiting ability to work with new entrants to the market. Some Air Force officials also noted that concerns about the level of review of individual contracts that are awarded without full and open competition under class justifications have led to efforts to revise the review process for activity under class justifications. 
The Air Force revised its process in a recently approved national security class justification for an intelligence, surveillance, and reconnaissance program office, requiring that individual contract actions over $85.5 million be submitted to the Air Force senior procurement executive for expedited review. According to an Air Force General Counsel official, the Air Force has not yet determined what type of documentation will be required as part of that review, but it believes the increased review may identify additional opportunities for competition. This is the first Air Force class justification to include this new process, and officials were not aware of any similar processes at other DOD entities. According to Air Force officials, the new class justification also includes a mechanism for adding new firms after the initial approval of the justification. Officials in the affected program office said that they anticipate an increase in competition rates as a result of this new flexibility. Regardless of whether the military departments used class or individual justifications, all those we reviewed met FAR standards. We reviewed justification and approval documents for the use of the exception for 27 different contracts awarded by the Army, Navy, and Air Force, and all met the standards established in the Federal Acquisition Regulation for approving the justification. In addition, we reviewed the justifications and approval documents for one national security exception contract each at NGA, NRO, and DIA, and two such contracts at NSA, and all generally met the requirements of the FAR. According to officials from all DOD components we met with, the national security exception should be used in limited circumstances where full and open competition would compromise national security. These officials were not aware of other authorities that could be used in its place, nor were they aware of any such proposed authorities. 
In some justifications and approval documents, DOD components may cite other exceptions in addition to the national security exception. For example, the entities that reported using the national security exception for all or nearly all contracting—NSA, NRO, and some Air Force SAPs—reported citing additional exceptions when making sole-source contract awards. According to policy documents and officials with these organizations, it is standard practice to list more than one exception when applicable. For example, one NSA contract for computer security equipment that we reviewed cited the “only one responsible source and no other supplies or services will satisfy agency requirements” FAR exception alongside the national security exception, because contracting officials had determined that only one firm was capable of meeting the government’s requirements. Likewise, in awarding a satellite contract, NRO used the “only one responsible source” exception in addition to the national security exception. The military departments generally do not cite additional exceptions when using the national security exception. According to federal procurement data, the military departments typically did not achieve competition on national security exception contracts. Of the more than 11,300 DOD military department contract actions citing the national security exception from fiscal years 2007 through 2010, DOD received only one proposal for $10.6 billion of its obligations—about 84 percent of the total $12.7 billion in obligations under this exception. About 4 percent of contract actions, which account for 16 percent of the military departments’ obligations, received two or more proposals, as shown in figure 7 below. By department, nearly 100 percent of Air Force and 95 percent of Navy contract obligations received only one proposal, whereas about 80 percent of Army obligations were made under contracts that received more than one proposal. 
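The proposal-count breakdown above amounts to grouping obligations by the number of offers each contract action received. A minimal sketch of that aggregation follows; the individual records are hypothetical, chosen only so the totals mirror the $10.6 billion of $12.7 billion (about 84 percent) figure cited above.

```python
from collections import defaultdict

# Hypothetical FPDS-style records: (number_of_offers, obligated_dollars).
actions = [
    (1, 4_000_000_000),
    (1, 6_600_000_000),
    (2, 1_500_000_000),
    (3, 600_000_000),
]

def obligation_share_by_offers(actions):
    """Share of total obligations on actions that drew one proposal
    versus more than one proposal."""
    totals = defaultdict(int)
    for offers, dollars in actions:
        key = "one proposal" if offers <= 1 else "multiple proposals"
        totals[key] += dollars
    grand_total = sum(totals.values())
    return {key: dollars / grand_total for key, dollars in totals.items()}

shares = obligation_share_by_offers(actions)  # one-proposal share ≈ 0.835
```

Grouping by obligation dollars rather than by action count is what makes the one-proposal share so large here: a handful of very large single-offer awards can dominate the total even when most actions are small.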
DOD’s Better Buying Power Initiative includes a goal of decreasing instances where only one proposal is received, which DOD has noted fails to provide the full benefits of competition. We have previously reported that about 13 percent of all contract obligations governmentwide were made on contracts awarded with competitive procedures that only received one proposal. Contracts receiving only one proposal are considered competitively awarded if the solicitation was open to multiple potential offerors, so contracts reported in FPDS-NG that received only one proposal may have been awarded using competitive procedures. While data on the extent to which national security exception contracts were awarded competitively were not sufficiently reliable, the available data confirmed that competition is infrequent—indicating that less than a quarter of military department obligations under this exception were competitively awarded. Furthermore, our data reliability assessment indicated that the errors in these data tend to overstate the level of competition, so the actual level may be lower. Likewise, few of the military department national security exception contracts we reviewed achieved competition. Of the 27 contracts we reviewed for the Air Force, Army, and Navy, only one received multiple proposals. For the remaining 26 contracts, only one proposal for each was received. Military department officials said that they make efforts to provide competition to the greatest extent practicable, as required by the FAR. However, they reported three obstacles to obtaining more competition in contract awards: the existence of a small number of firms able to meet the security requirements for the goods and services being procured; constraints on soliciting new vendors, including proprietary data and reliance on incumbent contractor expertise; and the lack of tools to conduct market research and solicit vendors in a secure environment. 
For example, Air Force contracting officials reported that restrictions on time and expertise make it difficult for many new vendors to meet requirements. A senior Air Force contracting official told us that not having access to technical data—such as engineering drawings and other information needed to have another vendor meet the eligibility requirements—is a major barrier to competition. According to this official, one vendor often controls the data as proprietary information, and buying or recreating it would be cost-prohibitive for potential new vendors. The military departments generally continue to use the same exception for follow-on contract actions to national security exception contracts, as well as the same vendor, based on our analysis of the contracts in our sample. Contracting officials noted that these contracts must go through the same approval process as the initial contract, requiring justification for the national security exception. Of the 27 contracts in our sample, we identified 14 follow-on contracts, 12 of which were awarded to the incumbent contractor. Contracting officials confirmed that follow-on contracts typically are not competed and are usually awarded to the same vendor due to proprietary data rights and expertise of the incumbent contractors, as well as the time required to initiate work with a new vendor. We have previously reported that incumbent contractors have important advantages in follow-on contract awards. Contracting officials told us that the tools that are used to solicit competition generally cannot be used in a security-sensitive contracting environment. The FAR requires contracting officers to synopsize proposed contract actions expected to exceed $25,000 in the Governmentwide Point of Entry (GPE), FedBizOpps.com, which may be accessed via the Internet at https://www.fbo.gov/ (FAR §§ 5.101(a)(1), 5.201(d)). FedBizOpps.com is the military departments’ primary tool for soliciting potential offerors. The site allows agencies to upload unclassified solicitations for goods and services, but it cannot accept classified material. Even though national security exception contract documents are often unclassified, synopsizing the requirements may pose a security risk. Instead, contracting officials identify potential sources based on market research and provide the solicitations to those firms directly. 
For the 27 contracts in our sample, the market research reflected in the contract files frequently did not have adequate documentation on how it was used to identify potential offerors. Specifically, no evidence of market research was present in 12 of the 27 contract files we reviewed; it was present in the other 15. The market research in those 15 contracts often broadly outlined the means by which the contracting office conducted it, but in some cases did not include details and evidence to document the research. In some cases, the contracting officials relied upon their own collective experience with, and knowledge of, vendors capable of delivering goods and services in accordance with sensitive contract requirements. Nevertheless, even in cases in which market research identified multiple firms that could meet requirements, it did not always result in multiple proposals on a given contract. NSA and NRO, which reported that they use the national security exception for all or nearly all contracting, showed high levels of competition compared to the DOD military departments. As illustrated in figure 8 below, according to data provided by the agencies, annual competition rates ranged from 27 percent to nearly 70 percent of total obligations at NSA and NRO. Because data on contracting at intelligence agencies are typically classified at highly restrictive levels, we did not have sufficient access to independently validate the data provided. 
NRO and NSA have both developed tools to help increase competition in procuring sensitive goods and services and have made these tools available to other intelligence agencies. These tools bring together a large number of potential offerors and help the agencies solicit and evaluate vendors and award contracts competitively, while taking measures to limit the risk to national security. The NRO Acquisition Research Center, developed for intelligence community procurements, limits potential contractors to about 1,200 registered firms that are already cleared to perform in a secure environment and have a workforce with security clearances. An NRO senior procurement official described this system as a proprietary, classified version of FedBizOpps. NSA's Acquisition Resource Center is the agency's business registry database, which provides industry with a central source for acquisition information. The system also serves as a market research tool for NSA personnel, as well as a means of distributing acquisition documents to the agency's industry partners. All companies that wish to do business with NSA must be registered in the system; as of October 2010, the database included about 9,300 companies. An NSA Inspector General report found that this system improved the agency's ability to conduct market research and solicit competition by making the process more systematic. The other two DOD intelligence agencies, DIA and NGA, have made arrangements to use one or both of the NSA and NRO systems. For example, our review of a DIA contract under the national security exception showed that the agency solicited 11 companies and received five proposals by using NSA's Acquisition Resource Center. Additionally, NGA has a memorandum of agreement with NRO to use its Acquisition Research Center. None of the 27 military department contracts we reviewed used the NSA or NRO systems to conduct market research.
However, contracting officials at one Air Force center said that they were aware of NRO's system and, although they do not currently have access, would like the opportunity to use it for their procurements.

DOD's use of the national security exception is necessary in certain situations when disclosing the government's needs in a full and open competition would reveal information that would harm national security. The exception requires that agencies pursue limited competition by requesting proposals from as many potential sources as is feasible. The military departments may not have a complete understanding of the extent of competition, given that DOD lacks clear policy on when sensitive contract actions should be excluded from FPDS-NG, the database it uses to track this information. However, the available data show that the military departments have achieved relatively little competition in their national security exception procurements. Obtaining competition on new procurements is especially important because our findings and previous reports have shown that once a contractor receives an award, that contractor has historically been likely to receive any follow-on contract. There are obstacles to competition in sensitive procurements, including a limited number of firms that can meet security requirements. Because of these obstacles, program offices may find it easier to forgo competition when a class justification is already in place. However, more competition is possible. The recent changes the Air Force made to its process, which introduced a new high-level review of contract actions under a class justification, may help increase the extent of competition.
Further, while the military departments face challenges in conducting market research for sensitive contracts, the DOD intelligence agencies, which face similar challenges, have created tools to increase their ability to identify multiple potential sources and obtain competition when using the national security exception. The use of such tools could enhance the ability of the military departments to obtain competition on their national security exception procurements.

We recommend that the Secretary of Defense take the following three actions:

Issue guidance establishing the circumstances under which security sensitive contracting data are required to be reported to OSD and in FPDS-NG, including the decision authority for excluding a given program or contract from the database.

Evaluate the effect of the Air Force's new review process on competition and management oversight of national security exception actions under a class justification; if the changes are found to be beneficial, consider implementing similar changes across DOD.

Assess the feasibility of providing contracting officials in military department programs that routinely use the national security exception with access to tools that facilitate market research and competitive solicitation in a secure environment, either through development of new tools or access to existing intelligence community systems.

We provided a draft of this report to DOD. In written comments, DOD concurred with the report's last two recommendations and partially concurred with the first recommendation. DOD also provided technical comments, which we incorporated as appropriate. DOD's comments are reprinted in appendix II. In commenting on the draft report, DOD agreed to evaluate the Air Force's new review process for national security exception actions under class justifications and to implement a similar process across the department if it found the process beneficial.
DOD also agreed to explore deploying existing intelligence community market research and solicitation tools to organizations in the military departments that frequently use the national security exception. DOD partially concurred with our recommendation to clarify guidance on the exclusion of data from FPDS-NG, citing a pending revision to the FAR that will clarify that classified data should not be reported to FPDS-NG. We did not encounter any ambiguity on this point—contracting officials we met with were clear that classified data should not be entered into the system. However, we found that DOD policy was not clear on whether and when sensitive, but unclassified, contract data should be excluded from FPDS-NG. We continue to believe that additional guidance is needed to clarify whether and when any such data should be excluded (outside the existing intelligence agency waiver) and, if so, to outline the criteria and decision authority for doing so.

We are sending copies of this report to interested congressional committees and the Secretary of Defense. This report will also be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or martinb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix III.
Our mandate required us to review (1) the pattern of usage of the national security exception by acquisition organizations within the Department of Defense to determine which organizations commonly use the exception and the frequency of such usage; (2) the range of items or services being acquired through the use of such exception; (3) the process for reviewing and approving justifications involving such exception; (4) whether the justifications for use of such exception typically meet the requirements of the Federal Acquisition Regulation applicable to the use of such exception; (5) issues associated with follow-on procurements for items or services acquired using such exception; and (6) potential additional instances where such exception could be applied and any authorities available to DOD other than such exception that could be applied in such instances. To respond to these objectives, this report (1) identified the pattern of DOD's use of the national security exception, including the range of goods and services acquired; (2) assessed DOD's process for using this exception; and (3) determined the extent to which DOD obtained competition on selected contracts using the national security exception. To conduct our work, we met with DOD officials at the Office of the Secretary of Defense (OSD), the three military departments, and the DOD intelligence agencies. Within OSD we met with Defense Procurement and Acquisition Policy officials, including a Federal Procurement Data System-Next Generation (FPDS-NG) subject-matter expert. We also met with FPDS-NG experts in the three military departments.
In addition, across DOD we met with officials from the following offices:

Air Force: Office of the Deputy Assistant Secretary for Contracting and Policy; Air Force Materiel Command, Special Programs Division; Air Force Materiel Command, Implementation Branch; and General Counsel.

Army: Office of the Assistant Secretary of the Army for Acquisition, Logistics and Technology; Army Materiel Command; Army Contracting Command; Army Space and Missile Defense Command; and General Counsel.

Navy: Office of the Assistant Secretary of the Navy for Research, Development and Acquisition; Naval Sea Systems Command; Space and Naval Warfare Systems Command; and General Counsel.

DOD intelligence agencies: at each of the four agencies, the contracting or acquisition office (variously titled the Office of Contracting, the Acquisition and Contracts Office, or the Acquisition Organization); the Office of the Inspector General; and the General Counsel.

Based on discussions with FPDS-NG subject-matter experts at OSD and the three military departments, we determined that the data available prior to fiscal year 2006 were not sufficiently reliable for our purposes. Therefore, our review focused on the most current reliable data from FPDS-NG, covering fiscal years 2007 through 2010. We conducted legal research and interviewed DOD officials to identify other uses of the exception and alternative authorities. To identify the DOD components to include in our review, we used FPDS-NG data to determine those with the most obligations under the national security exception during fiscal years 2007 through 2010. These included the three military departments—the Air Force, Army, and Navy. Within the departments, we identified the commands with the highest obligations under the exception—the Air Force Materiel Command (AFMC), Army Materiel Command/Army Contracting Command (AMC/ACC), Army Space and Missile Defense Command (SMDC), and the Navy's Space and Naval Warfare Systems Command (SPAWAR).
For entities that do not report data to FPDS-NG, we relied on knowledgeable DOD officials to identify frequent users of the national security exception. These included the four DOD intelligence agencies—the Defense Intelligence Agency (DIA), National Geospatial-Intelligence Agency (NGA), National Security Agency (NSA), and National Reconnaissance Office (NRO), as well as Special Access Programs within the DOD military departments. Due to the security limitations at the intelligence agencies, we employed different methodological approaches to assess the uses and processes at the intelligence agencies and the military departments, as described below. To assess the pattern of use of the exception and the range of items or services being acquired at the DOD military departments, we obtained data from FPDS-NG. We included contracts and orders coded as using the national security exception under the field "reason not competed" for fiscal years 2007 through 2010. We analyzed obligations data and the types of goods and services based on product code fields. To compare use of the national security exception with other FAR exceptions, we analyzed the other values listed under the "reason not competed" field. Because the Air Force accounts for 73.5 percent of all obligations under the national security exception, we selected 18 contracts from the Air Force, 6 from the Army, and 3 from the Navy. We selected the individual contracts based on several criteria. First, we selected high-dollar contracts. Based on our analysis of commonly procured goods and services from FPDS-NG data, we selected contracts with a mix of these types of purchases. FPDS-NG data do not indicate whether a contract is a follow-on procurement; therefore, we selected both older and newer contracts. DOD officials also identified contracts that would capture follow-on activities; however, those contracts had already been included based on our other selection criteria.
The 27 contracts we reviewed represented about $3.4 billion—about 27 percent—of the $12.7 billion in obligations under the national security exception across the military departments in fiscal years 2007 through 2010. We analyzed the justification and authorization documents for these selected contracts and determined whether they met the requirements of FAR sections 6.302-6 and 6.303-2. In addition, we reviewed pre-award documentation to determine the extent to which the military departments obtained competition under the exception and to review market research documents. Further, we reviewed the contract files to determine whether the contract was a follow-on contract. We met with officials to discuss the efforts the military departments make to obtain competition when using the national security exception. We assessed both the completeness and the reliability of the FPDS-NG data. To assess how complete the FPDS-NG data are, we interviewed agency officials at OSD and the three military departments to identify instances when individual contracts or entire programs are excluded from FPDS-NG to protect classified or security sensitive information. OSD officials provided us with the directive from the Director of National Intelligence that exempts all DOD intelligence agencies from FPDS-NG reporting. We met with officials who oversee Special Access Programs in the Army and Air Force to discuss any policies and procedures related to the inclusion or exclusion of contract information from FPDS-NG. Our assessment of the reliability of FPDS-NG data involved several stages. First, we interviewed FPDS-NG subject-matter experts at OSD and the three military departments. We discussed issues with miscoding and the results of any anomaly reports.
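The sample coverage figure above follows directly from the dollar amounts cited in the text; a minimal arithmetic check (a sketch, using only the figures stated above):

```python
# Verify the reported coverage of the 27-contract sample relative to all
# national security exception obligations, FY 2007-2010 (figures from the text).
sample_obligations = 3.4e9   # dollars obligated under the 27 contracts reviewed
total_obligations = 12.7e9   # total military department obligations under the exception

coverage = sample_obligations / total_obligations
print(f"{coverage:.0%}")  # about 27 percent, as reported
```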
After identifying the sample for our file review, we asked officials at the contracting offices to verify whether the contracts in fact used the national security exception as coded in the "Reason not Competed" field in FPDS-NG. After identifying coding errors in that field for five of the contracts, we compared the "Extent Competed" and "Number of Bids" (proposals) fields with the documentation in the contract files for the 27 contracts in our review. We found four errors in the "Extent Competed" field and one error in the number of proposals. We also drew upon prior GAO findings regarding FPDS-NG data reliability. Based on this initial data reliability assessment, we selected a second random, non-generalizable stratified sample of 36 contracts to assess the same three fields in FPDS-NG. We stratified based on the military department (Air Force, Army, and Navy); whether the contract was identified as an indefinite delivery, indefinite quantity contract in FPDS-NG; and whether it was listed as not competed or competed after exclusion of sources in FPDS-NG. We asked DOD officials to review contract files to determine (1) whether the contract cited the national security exception, (2) whether the contract was competed, and (3) how many proposals the contract received. In addition, in these discussions, Navy officials identified contracts that were incorrectly coded as using the national security exception. After three Air Force contracts fell out of our sample due to nonresponse, we found errors in the "Extent Competed" field for about a third of the contracts. However, we found errors in the "Reason not Competed" field for only two of the contracts (6 percent) and an error in the number of proposals for only one contract (3 percent). These data reliability assessments indicate that the "Reason not Competed" and "Number of Bids" fields in FPDS-NG are sufficiently reliable for our analyses.
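The error rates reported for the second reliability sample can be reproduced from the counts given above; a minimal sketch (the 33-contract denominator reflects the 36 sampled contracts less the 3 Air Force nonresponses):

```python
# Reproduce the error rates from the second FPDS-NG data reliability sample
# (counts from the text).
sampled = 36
nonresponse = 3
reviewed = sampled - nonresponse  # 33 contracts actually assessed

errors_reason_not_competed = 2
errors_number_of_proposals = 1

print(f"{errors_reason_not_competed / reviewed:.0%}")  # about 6 percent
print(f"{errors_number_of_proposals / reviewed:.0%}")  # about 3 percent
```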
To assess the extent of DOD intelligence agencies’ use of the national security exception, we obtained data from the four agencies, as these agencies do not report data to FPDS-NG. Specifically, we obtained data on the percentage of total obligations under the national security exception and the percentage of total obligations competed at the four agencies. We reviewed five contract files at four DOD intelligence agencies. We analyzed the justification and authorization documents for these selected contracts and determined whether they met the requirements of the FAR Sections 6.302-6 and 6.303-2. Because we did not have a list of contract numbers from which to choose, we relied on the agencies to select the contracts for review. In addition, we reviewed pre-award documentation to determine the extent to which the agencies obtained competition under the exception and to review market research documents. Further, we reviewed the contract files to determine whether the contract was a follow-on contract. We met with officials to discuss efforts the intelligence agencies make to obtain competition when using the national security exception to limit competition. DOD entities for which little or no use of the exception appeared in federal procurement data were not included in our file review. To assess the use of the exception at these entities, we met with officials at OSD, as well as officials knowledgeable about Special Access Programs at the Army, Air Force, and Navy. We obtained information from an Air Force official on the extent of use and competition within the Air Force Materiel Command’s Special Programs Division. To assess the reliability of data received from the DOD intelligence agencies, we solicited information from officials on the data. 
Specifically, we asked cognizant officials about the type of database systems used to track contracting activity; how these systems are used; what procedures are in place to ensure consistency and accuracy; whether there have been issues with the systems that may compromise the data; what limitations exist in tracking Competition in Contracting Act (CICA) exceptions; and what data reliability assessments have been conducted on these systems. We conducted this performance audit from March 2011 to January 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, John Neumann, Assistant Director; Laura Greifner; Julia M. Kennon; John A. Krump; Caryn E. Kuebler; Teague Lyons; Jean McSween; Kenneth Patton; Roxanna T. Sun; Sonya Vartivarian; and C. Patrick Washington made key contributions to this report.
Competition is a critical tool for achieving the best return on the government's investment. Federal agencies are generally required to award contracts competitively, but they are permitted to use other than full and open competition in certain situations, such as when open competition would reveal information that would harm national security. GAO examined DOD's use of this provision, known as the national security exception, which still requires the use of competition to the greatest extent practicable. GAO assessed (1) the pattern of DOD's use of the national security exception; (2) DOD's processes for using the exception; and (3) the extent to which DOD achieved competition under the exception. GAO analyzed federal procurement data; reviewed a selection of 27 contract files and justifications citing the exception from the Army, Navy, and Air Force, selected based on largest obligations, frequent users, and a range of procurement types, as well as five contracts from DOD intelligence agencies; and interviewed DOD contracting and program officials. DOD's use of the national security exception is small—about 2 percent of the dollar value of its total use of exceptions to full and open competition—but gaps in federal procurement data limit GAO's ability to determine the full extent of DOD's use. DOD procures a range of goods and services under this exception, and according to federal procurement data, the Air Force accounted for about 74 percent of DOD's use during fiscal years 2007 through 2010. However, DOD intelligence agencies and special access programs frequently use the exception but are generally excluded from reporting procurement data. While an Office of the Secretary of Defense memorandum exempts three of the intelligence agencies from reporting such data, DOD policy on reporting sensitive procurements for other military department programs is not clear.
For most national security exception contract actions GAO reviewed, DOD used a single justification and approval document that applies to multiple contracts—known as a class justification. Among the contracts reviewed, $3.3 billion of $3.4 billion was obligated under contracts that used class justifications, which reduce the steps required to proceed with individual contract actions that do not use full and open competition. According to contracting officials, the increased flexibility of national security exception class justifications helps meet mission needs. However, in the Air Force, concerns about the reduced management review of these contracts have led to changes in the process for approving individual contract actions using class justifications. Nevertheless, all of the justifications GAO reviewed met Federal Acquisition Regulation requirements. GAO's analysis of federal procurement data on about 11,300 contract actions found that, from fiscal years 2007 through 2010, only 16 percent of the military departments' obligations under the national security exception were for contract actions that received more than one proposal. Contract files and contracting officials cited a limited pool of companies with the right capabilities, the difficulty of changing from an established vendor, and limited tools for soliciting competitive bids as reasons for the inability to obtain more competition. Twelve of the 27 military department contract files GAO reviewed did not include a record of market research, and others included few details on the results. The two intelligence agencies that reported using the national security exception for all or nearly all contracting reported achieving comparatively high levels of competition. Both have systems that catalogue firms, capabilities, and solicitations and that are used to facilitate security sensitive market research.
GAO recommends that DOD issue guidance clarifying when security sensitive contracting data must be reported, monitor the impact of new Air Force class justification processes, and consider using tools that facilitate market research in a secure environment. DOD concurred with two recommendations and partially concurred with the recommendation on clarifying guidance, citing pending revisions to regulations. GAO continues to believe that clarifying guidance is needed.
The F/A-18E/F program is the successor to prior unsuccessful attempts to modernize the Navy's tactical aviation fleet. The Navy's initial focus was on replacing its high-end A-6 attack aircraft. The programs that were initiated in that regard—the A-12 and then the A/F-X—were eventually canceled. The Navy also initiated studies to upgrade its multirole F/A-18 low-end tactical aircraft. The upgraded F/A-18 effort was designated the F/A-18E/F. At a projected total program cost of $63.09 billion (fiscal year 1996 dollars)/$89.15 billion (then-year dollars), the F/A-18E/F program is one of the Department of Defense's (DOD) most costly aviation programs. In January 1988, the Navy awarded a fixed-price incentive contract to McDonnell Douglas Aerospace and General Dynamics Corporation to develop the Advanced Tactical Aircraft, later designated the A-12. In June 1988, the Navy and McDonnell Douglas also completed a study, known as Hornet 2000, of options for upgrading the F/A-18, given the long development cycle of planned future fighter aircraft. The A-12 was to begin replacing A-6Es in the mid-1990s. The Air Force was also considering a version of the A-12 to replace its high-end F-15E and F-111 strike aircraft. On January 7, 1991, after making almost $2.7 billion (then-year dollars) in progress payments, the Navy terminated the A-12 program for technical and cost reasons. Almost immediately after terminating the A-12 program, the Navy requested funding to modernize the F/A-18. A new joint Air Force and Navy program—designated A-X and later A/F-X—was also initiated to replace the services' high-end attack/strike aircraft with more advanced stealthy aircraft. The A/F-X was to begin fielding a more affordable Navy A-6E replacement aircraft around 2008. The A/F-X program office estimated it would cost $22.8 billion (then-year dollars) to develop the A/F-X and $50 million to $100 million to procure each aircraft.
In 1993, DOD’s Bottom-Up Review concluded that DOD had too many new aircraft programs and that future defense budgets would not support both the F/A-18E/F and the A/F-X program. Therefore, in accordance with the review’s recommendations, the Secretary of Defense announced that the A/F-X advanced tactical aviation program would be canceled, the F/A-18E/F program would continue, and the services’ efforts to field a next generation joint strike fighter aircraft would be pursued through a Joint Advanced Strike Technology (JAST) program. The family of three common aircraft that is to ultimately result from the JAST effort is called the Joint Strike Fighter (JSF). The three JSF variants are intended to be (1) a first-day-of-the-war, survivable strike fighter aircraft to complement the F/A-18E/F for the Navy, (2) an advanced short-takeoff and vertical-landing aircraft to replace the AV-8B and F/A-18 for the Marine Corps, and (3) a multirole aircraft (primary air-to-ground) to replace the Air Force F-16 and A-10 aircraft. In May 1992, the Under Secretary of Defense for Acquisition approved the Navy’s Milestone IV, Major Modification F/A-18E/F. A $5.783 billion (fiscal year 1996 dollars)/$5.803 billion (then-year dollars) F/A-18E/F development estimate was based on the combined cost to develop the airframe and the engine and to pay other government costs. The airframe development contract was awarded to McDonnell Douglas Aerospace, with Northrop Grumman Corporation as the prime subcontractor. McDonnell Douglas makes the forward fuselage, the wings, and the aft wing/horizontal stabilizers. Northrop Grumman makes the forward center fuselage, the aft center and aft fuselage sections, and the aft fuselage vertical tail sections. The Navy has contracted with General Electric Corporation to develop the F/A-18E/F’s engine. The engine will be provided to McDonnell Douglas Aerospace as a government-furnished item. 
Most of the avionics development costs for the F/A-18E/F are not included in the E/F's development cost estimate. As of December 31, 1995, the Navy had spent about $3.75 billion on the development phase of the F/A-18E/F program. Initial operational capability of the F/A-18E/F is scheduled for 2000, and fielding of the first operational carrier-based squadron is scheduled for 2003. Procurement of 1,000 aircraft for the Navy and the Marine Corps is planned through 2015. We initiated this review because of the magnitude of funds involved in the F/A-18E/F program. We included the F/A-18C/D, F/A-18E/F, and JSF in our review to determine whether continued development of the F/A-18E/F is the most cost-effective approach to modernizing the Navy's tactical aircraft fleet. In conducting our work, we evaluated data used to justify the F/A-18E/F program. We reviewed various documents, including the Hornet 2000 study; Navy documents such as acquisition reports; the Operational Requirements Document; and related cost, engineering, and test data supporting the decision to develop the F/A-18E/F. These data showed that the F/A-18E/F was approved to correct deficiencies in current F/A-18s that the Navy said existed or were projected to materialize. The F/A-18 deficiencies cited were in range, carrier recovery payload, and survivability. The Navy also cited improvements in F/A-18E/F growth space and payload over the F/A-18C/D in seeking E/F approval. Our specific objectives were to determine whether the operational deficiencies in the F/A-18C/D that the Navy cited in justifying the E/F program have materialized and, if they have, the extent to which the F/A-18E/F would correct them; to ascertain whether the F/A-18E/F will provide an appreciable increase in operational capability over the F/A-18C/D; and to review the reliability of the cost estimates for the F/A-18E/F and compare those estimates with the costs of potential alternatives to the E/F program.
To accomplish these objectives, we acquired data on the current operational capabilities of the F/A-18s and the status of the F/A-18E/F development effort from the Naval Air Systems Command (NAVAIR) and the builders of the F/A-18s: McDonnell Douglas Aerospace, Northrop Grumman Corporation, and General Electric Corporation. We obtained various studies, test results, and performance data reports, and we interviewed Navy and contractor officials. Using these data, we conducted various analyses and calculations, which are explained in the appropriate sections of our report, to verify the deficiencies in range, carrier recovery payload, and survivability predicted for the C/D and to ascertain the probability that the E/F would correct those deficiencies. To ascertain whether the F/A-18E/F will provide an appreciable increase in operational capability over the F/A-18C/D, we focused on payload capacity and growth potential, areas also cited by the Navy in justifying the E/F program. We interviewed Navy and contractor officials and reviewed data from contractor studies, system specifications, and Navy reports. We evaluated the Navy's projections indicating that the C/D would have no growth potential to accommodate future avionics requirements. We also compared the weapons capacity of the C/D with the potential capacity of the E/F. Additional information concerning F/A-18C/D operational deficiencies and the need for the E/F was obtained from documents and interviews with officials from the Center for Naval Analyses and the Defense Intelligence Agency. To evaluate the validity of the F/A-18E/F procurement cost estimates, we examined the assumptions on which the estimates were based in terms of the number of aircraft to be procured and the number of aircraft to be produced each year.
We made these analyses because the Congress and DOD have expressed concerns in the past that the Navy's assumptions were not realistic, given the probable limited availability of annual funding. To make this evaluation, we acquired data and interviewed officials in the Naval Warfare's Aviation Requirements and Aviation Inventory directorates and the Office of the Deputy Chief of Staff for Aviation within the Marine Corps. We obtained procurement cost data provided to the Congress in the annual Selected Acquisition Report and aircraft inventory data used by the Navy to calculate the E/F's projected procurement cost, which is based on a combined Navy and Marine Corps buy of 1,000 aircraft. From these data, we developed and then compared F/A-18C/D and E/F recurring flyaway cost projections. We also compared E/F operational and cost projections with those of the JAST JSF. This information was acquired from the JAST program office; the Advanced Research Projects Agency, whose Marine Corps Short-Takeoff Vertical Landing Strike Fighter effort was combined with JAST; and the contractor teams working on the JSF effort. The contractors are a consortium of McDonnell Douglas Aerospace, Northrop Grumman Corporation, and British Aerospace; Boeing Corporation; and Lockheed Martin Corporation. We obtained the contractors' and the JAST program office's estimates for the future JSF and calculated the cost of continuing procurement of the F/A-18C/D in lieu of proceeding with the F/A-18E/F program. Our methodology for calculating comparative costs for the C/D and E/F programs is explained in detail in appendix I, where we present those cost comparisons. DOD provided written comments on a draft of this report. The comments are presented and evaluated in their entirety in appendix III. We conducted our review from December 1994 through December 1995 in accordance with generally accepted government auditing standards.
The F/A-18E/F is intended to replace current F/A-18C/D aircraft and to perform Navy and Marine Corps fighter escort, strike, fleet air defense, and close air support missions. The current F/A-18C/Ds have proven their value to the battle commander by providing the capability to perform diverse missions and excellent payload flexibility under dynamic wartime conditions. However, the Navy stated that in order to maintain a superior level of combat performance into the 21st century, the F/A-18 will require increased range, increased carrier recovery payload, and improved survivability. Our review determined the following:

- The Navy’s F/A-18 strike range requirements can be met by either the F/A-18E/F or F/A-18C/Ds. The increased range of the E/F is achieved at the expense of aerial combat performance, and even with increased range, each aircraft will still require aerial refueling for low-altitude missions against most targets.
- The F/A-18C carrier recovery payload deficiency has not occurred as the Navy predicted. F/A-18Cs operating in support of Bosnian operations routinely return to the carrier with operational loads that exceed the Navy’s stated carrier recovery payload capability.
- Although survivability improvements are planned for the F/A-18E/F, the aircraft was not justified to counter threats that could not be countered with existing or improved F/A-18C/Ds. Also, the effectiveness of a survivability improvement planned for the E/F is questionable and might better be attained at less cost with the next generation JSF.

The Navy is reporting that F/A-18E/F strike ranges are significantly greater than the specifications require. Those E/F strike range projections are based on a high-altitude mission, which results in increased fuel efficiency and range, whereas the E/F contract stipulates specifications for a low-altitude strike mission. 
McDonnell Douglas Aerospace data show that the F/A-18C/D can also achieve the E/F’s low-altitude strike range specification if it carries the larger external fuel tanks that are planned to be used on the E/F. Navy data also shows that the C/D, without the larger external tanks, could exceed the target distances stipulated in the E/F system specifications by flying the same high-altitude mission as the E/F. Also, we found that the design changes needed to achieve the F/A-18E/F’s range improvements will adversely affect its aerial combat performance relative to the F/A-18C/D. Should the Navy not be able to fly the more fuel-efficient, high-altitude mission profiles, both the E/F and the C/D will need aerial refueling to reach a majority of targets in many of the likely wartime scenarios in which either aircraft would be employed. In justifying the F/A-18E/F, the Navy cited, among other factors, the F/A-18C/D’s inability to perform long-range unrefueled missions against deep, high-value targets. The Navy incorporated major airframe modifications to the F/A-18E/F to increase its long-range strike capability. However, we found that the F/A-18C/D can achieve greater ranges without making modifications to its airframe. These ranges will exceed the F/A-18E/F’s low-altitude range specifications. F/A-18E/F specifications call for the aircraft to have a range of 390 nautical miles while performing low-altitude bombing with four 1,000-pound gravity bombs and using two 480-gallon external fuel tanks. This strike range is 65nm longer than the reported 325nm low-altitude strike range of the F/A-18C/D using two smaller 330-gallon external fuel tanks and carrying four 1,000-pound gravity bombs. The F/A-18E/F will achieve its greater strike range primarily from its greater internal fuel capacity, larger wings, and larger 480-gallon external fuel tanks. In total, F/A-18E/Fs will carry 980 gallons more fuel (450 gallons external, 530 gallons internal) than F/A-18C/Ds. 
The 480-gallon tank planned to be used on the F/A-18E/F uses new filament-winding technology and a toughened resin system to produce a lightweight external fuel tank. It carries 45 percent more fuel than the 330-gallon tank, but its diameter is only 3.1 inches greater and it has the same empty weight as the 330-gallon tank. F/A-18E/F program officials informed us that the 480-gallon tanks planned for the E/F cannot be carried by the C/D. Furthermore, current Navy operational documents will not allow 480-gallon external tanks on the C/Ds. However, we have identified McDonnell Douglas and Navy studies that state that the larger 480-gallon external fuel tanks can be used on existing F/A-18C/D aircraft. The 1988 Hornet 2000 study, prepared by a team led by the Naval Air Systems Command with the Center for Naval Analyses and McDonnell Douglas assisting, addressed the issue of carrying larger 480-gallon external fuel tanks on existing F/A-18C/Ds. The study reports that “Range/radius improvements can be achieved with larger external fuel tanks. The 480 gallon fuel tank rather than the 330 gallon can be accommodated on inboard wing stations of all configurations, including the baseline.” “The 480-gallon fuel tank was initially designed for carrier use, but the production version has been modified for use on the Canadian CF-18. Additional testing must be completed to requalify the fuel tank for carrier use and the aft pylon attach point will require strengthening for the carrier environment. The modifications appear to be low risk.” A 1991 McDonnell Douglas report, “480 Gallon External Fuel Tank,” concluded that the 480-gallon external fuel tank can be carried on the F/A-18C/D inboard wing stations for carrier operations. According to the report, use of the 480-gallon tank on the C/D does not require any structural changes to the aircraft, and the 480-gallon tank can be used with all weapons qualified for the F/A-18C/D. 
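As a quick cross-check of the tank capacities quoted above, the "45 percent more fuel" figure follows directly from the two capacities. The sketch below is illustrative arithmetic in Python, not part of the report's methodology; only the two capacities come from the source.

```python
# Capacities cited in the report (gallons); the percentage is simple arithmetic.
SMALL_TANK_GAL = 330   # external tank currently used on the F/A-18C/D
LARGE_TANK_GAL = 480   # filament-wound tank planned for the F/A-18E/F

extra_fuel_pct = (LARGE_TANK_GAL - SMALL_TANK_GAL) / SMALL_TANK_GAL * 100
print(f"The 480-gallon tank carries {extra_fuel_pct:.0f}% more fuel than the 330-gallon tank")
```

The result, roughly 45 percent, matches the figure cited in the report.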
The report also stated that the new 480-gallon tank increases the multimission capability and flexibility of the F/A-18 fighter. As shown in figure 2.1, the 480-gallon fuel tank extends the C/D strike interdiction range flying low-altitude missions with two external tanks from 325nm to 393nm. This increased range exceeds the 390nm specification range for the F/A-18E/F flying the low-altitude strike mission profile. Additionally, the McDonnell Douglas report stated that the 480-gallon tanks increase the deck cycle time of the F/A-18C/Ds configured for a fighter escort mission to over 3 hours. Also, the report noted that two 480-gallon tanks on the C/D effectively replace three 330-gallon tanks. This gives the mission planner the option to have the C/Ds carry additional weapons, sensors, or fuel on the centerline station. Recent Navy range predictions show that the F/A-18E/F is expected to have a 683nm strike range, carrying two 2,000-pound precision-guided bombs. The Navy plans to achieve this significant range, a range that approaches that planned for the canceled A/F-X program and the Navy’s JAST variant, by flying F/A-18E/F strike missions with the larger 480-gallon tank and using a more fuel-efficient, survivable, and lethal high-altitude mission profile rather than the specified low-altitude profile. However, as shown in figure 2.2, the same Navy predictions show that the F/A-18C/D’s strike ranges also increase significantly when flying at high altitudes because of increased fuel efficiency at higher altitudes. According to Navy data, the F/A-18C/D flying at high altitudes with its normal configuration of three 330-gallon external fuel tanks has a range of 566nm—176nm more than the F/A-18E/F’s strike range specification. According to Navy and contractor documents, key factors in determining combat performance of an aircraft are thrust, turn rate, and acceleration. 
The Navy stated that to maintain the combat performance of the larger and heavier F/A-18E/F relative to the F/A-18C/D, it would develop and incorporate new higher thrust engines. However, program data shows that the range improvements sought by the larger and heavier F/A-18E/F will be achieved at the expense of the aircraft’s combat performance and that the F/A-18E/F’s aerial combat performance in key areas will be inferior to current F/A-18C/Ds. The F/A-18E/F’s larger fuel capacity, due to its larger size, allows the aircraft to achieve greater range than the F/A-18C/Ds. The F/A-18E’s empty weight without fuel and ordnance is about 6,100 pounds heavier than the C’s. The E is 4.3 feet longer than the C, and its wing area is 25 percent greater. The F/A-18E can carry about 6,600 more pounds of fuel than the F/A-18C. The F414-GE-400 engine being developed for the E/F by General Electric is designed to provide added thrust to compensate for the added weight of the aircraft and fuel. (See fig. 2.3.) According to program documents, the F414-GE-400 engine generates about 22,000 pounds of uninstalled thrust, a 37.5-percent increase over the F404-GE-400 engine used in the F/A-18A/B and some early F/A-18C/D aircraft. However, technical manuals show that the F/A-18E/F’s F414-GE-400 engine develops only 20,727 pounds of uninstalled thrust. Furthermore, the latest F/A-18C/Ds are equipped with an enhanced version of the F404 engine, known as the F404-GE-402 Enhanced Performance Engine. This new engine, which was developed to meet foreign buyers’ requirements for better combat performance, has been adopted for Navy use. The enhanced engine increased the uninstalled thrust from 16,000 to 17,754 pounds. Consequently, as shown in table 2.1, the F/A-18E/F has about a 17-percent improvement in uninstalled thrust over the C/Ds fitted with the F404-GE-402 Enhanced Performance Engine, rather than the 37.5 percent reported in program documents. 
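The two competing thrust percentages above can be reproduced from the thrust figures cited in the report. The following is an illustrative arithmetic sketch, not the report's own calculation; all four thrust values are taken from the text.

```python
# Uninstalled thrust figures (pounds) cited in the report.
F404_GE_400 = 16_000    # baseline engine, F/A-18A/B and early C/Ds
F404_GE_402 = 17_754    # Enhanced Performance Engine on the latest C/Ds
F414_PROGRAM = 22_000   # F414-GE-400 figure cited in program documents
F414_MANUAL = 20_727    # F414-GE-400 figure shown in technical manuals

# The 37.5-percent claim compares the program-document F414 figure
# with the baseline F404-GE-400.
program_claim = (F414_PROGRAM - F404_GE_400) / F404_GE_400 * 100

# The roughly 17-percent figure compares the technical-manual F414
# thrust with the Enhanced Performance Engine.
actual_vs_epe = (F414_MANUAL - F404_GE_402) / F404_GE_402 * 100

print(f"Program-document claim vs. F404-GE-400: {program_claim:.1f}%")
print(f"Technical-manual thrust vs. F404-GE-402: {actual_vs_epe:.1f}%")
```

The first ratio works out to exactly 37.5 percent, and the second to about 16.7 percent, consistent with the "about a 17-percent improvement" shown in table 2.1.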
This limited improvement in uninstalled thrust, coupled with a much heavier operationally loaded F/A-18E/F, means that the E/F will have less air-to-air combat capability in its sustained turn rate, maneuvering, and acceleration than F/A-18C/Ds with the enhanced performance engines. Sustained turn rate, maneuvering, and acceleration contribute to an aircraft’s combat performance and survivability by increasing its ability to maneuver in either offensive or defensive modes. Navy data comparing the F/A-18C to the F/A-18E shows the following:

- At sea level, the F/A-18C’s sustained turn rate is 19.2 degrees per second, while the F/A-18E’s sustained rate is 18 degrees per second. The instantaneous bleed rate of the F/A-18C is 54 knots per second, whereas the F/A-18E will lose 65 knots per second in a turn.
- At 15,000 feet, the F/A-18C’s sustained turn rate is 12.3 degrees per second, while the F/A-18E’s sustained rate is 11.6 degrees per second. The instantaneous bleed rate of the F/A-18C is 62 knots per second, whereas the F/A-18E will lose 76 knots per second in a turn.

Aircraft acceleration affects an aircraft’s combat performance in a number of ways, ranging from how quickly the aircraft can reach its area of operation to its ability to close the gap in air-to-air engagements or to evade air-to-ground missiles. Navy data shows the following:

- At 5,000 feet at maximum thrust, the F/A-18C accelerates from 0.8 Mach to 1.08 Mach in 21 seconds, whereas the F/A-18E will take 52.8 seconds.
- At 20,000 feet at maximum thrust, the F/A-18C accelerates from 0.8 Mach to 1.2 Mach in 34.6 seconds, whereas the F/A-18E takes 50.3 seconds.
- At 35,000 feet at maximum thrust, the F/A-18C accelerates from 0.8 Mach to 1.2 Mach in 55.8 seconds, whereas the F/A-18E takes 64.85 seconds.
- The F/A-18C accelerates from 0.8 Mach to 1.6 Mach in 2 minutes 12 seconds, whereas the F/A-18E takes 3 minutes and 4 seconds. 
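The relative size of the performance gaps in the Navy data above can be tabulated as percentage differences. The sketch below is our arithmetic for illustration only; the raw C and E values come from the report, while the percentage deltas are computed here.

```python
# Navy performance data cited in the report: metric -> (F/A-18C, F/A-18E).
metrics = {
    "sustained turn at sea level (deg/s)":   (19.2, 18.0),
    "sustained turn at 15,000 ft (deg/s)":   (12.3, 11.6),
    "accel 0.8-1.08 Mach at 5,000 ft (s)":   (21.0, 52.8),
    "accel 0.8-1.2 Mach at 20,000 ft (s)":   (34.6, 50.3),
    "accel 0.8-1.2 Mach at 35,000 ft (s)":   (55.8, 64.85),
}

for name, (c_value, e_value) in metrics.items():
    # Positive delta means the E's number is larger (worse for the
    # acceleration times, where lower is better).
    delta_pct = (e_value - c_value) / c_value * 100
    print(f"{name}: C={c_value}, E={e_value} ({delta_pct:+.0f}%)")
```

For example, the low-altitude acceleration gap (21 seconds versus 52.8 seconds) is roughly a 150-percent increase in time to accelerate, the largest relative difference in the data quoted.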
In justifying the low-altitude 390nm strike range specification for the F/A-18E/F, the Navy cited the F/A-18C/D’s shorter strike range (325nm flying the low-altitude mission profiles) and its inability to perform long-range unrefueled missions. Current Navy modeling projects that the F/A-18E/F will have a strike range of 465nm when flying the specified low-altitude mission profile, or 75nm greater than the 390nm development specification. However, the Center for Naval Analyses reported that with these ranges, the F/A-18E/F and F/A-18C/D will both need aerial refueling to reach most targets in two of the most likely wartime scenarios if high-altitude mission profiles are not flown. A 1993 Center for Naval Analyses report indicates that the E/F, even with its range improvement over the F/A-18C/D, would require in-flight refueling to reach a majority of targets in many of the likely wartime scenarios in which the E/F would be employed. The Center’s 1993 report was consistent with its 1989 report, which concluded that an upgrade to the F/A-18C/D (now identified as the F/A-18E/F) would probably retain its need for in-flight refueling. Therefore, according to the 1989 report, the desire for additional internal fuel should not be the driving force in the design of the F/A-18E/F. The Navy cited an anticipated deficiency in F/A-18C carrier recovery payload capacity as one of the primary reasons for developing the F/A-18E/F. In 1992, when seeking approval for the F/A-18E/F, the Navy stated that F/A-18Cs procured in fiscal year 1988 had a total carrier recovery payload capacity of 6,300 pounds. However, it projected that F/A-18C enhancements planned through the fiscal year 1993 procurement (delivery in fiscal year 1995) (Lot XVII) would increase the aircraft’s operating weight and decrease its total carrier recovery capacity to 5,785 pounds. 
It said this condition would constrain the ability of the carrier’s air wing to fulfill its full spectrum of training requirements—especially under the worst-case scenario of conducting night training and carrying greater amounts of reserve fuel needed for a divert field landing. As shown in table 2.2, the F/A-18C carrier recovery payload capacity is substantially greater than the Navy projected it would be and, in fact, is greater than when the F/A-18C was introduced into the fleet in late 1987. As indicated in table 2.2, current F/A-18Cs have 7,013 pounds of carrier recovery payload capacity, rather than the 5,785 pounds the Navy predicted. The higher carrier recovery payload capacity is the result of (1) the Navy’s 1994 increase in the F/A-18C’s maximum allowable carrier landing weight from 33,000 to 34,000 pounds, which added 1,000 pounds to the payload; (2) replacement of the canceled Advanced Self Protection Jammer with a lighter system, the ALQ-126; and (3) a prior overestimate of weight needed for contingencies. The F/A-18C’s better than projected carrier recovery payload is being demonstrated during actual flight experience of the F/A-18Cs flying military operations in Bosnia. (See fig. 2.4.) According to data provided by the F/A-18 program office, as shown in table 2.3, F/A-18Cs routinely bring back 7,156 pounds of recovery payload. The Navy achieved this recovery payload by increasing the F/A-18C’s maximum landing weight to 34,000 pounds and decreasing the reserve fuel level from 5,000 to 3,500 pounds. The Navy has stated that although it is currently able to bring back a full operational load of existing weapons, it will not be able to bring back the heavier, more expensive precision-guided munitions planned for the future. 
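The gap between the Navy's 1992 projection and the observed recovery payloads above is simple subtraction. The following is an illustrative check of those figures; the three payload values come from the report (tables 2.2 and 2.3), and the differences are computed here.

```python
# Carrier recovery payload figures (pounds) cited in the report.
predicted_capacity = 5_785   # Navy's 1992 projection for Lot XVII F/A-18Cs
current_capacity = 7_013     # actual capacity shown in table 2.2
bosnia_bringback = 7_156     # routine bring-back over Bosnia, table 2.3

shortfall_that_never_appeared = current_capacity - predicted_capacity
bosnia_margin = bosnia_bringback - current_capacity

print(f"Actual capacity exceeds the 1992 prediction by {shortfall_that_never_appeared:,} lb")
print(f"Bosnia bring-back exceeds the stated capacity by {bosnia_margin:,} lb")
```

The actual capacity thus exceeds the prediction by 1,228 pounds, of which 1,000 pounds is attributable to the 1994 increase in maximum allowable carrier landing weight described above.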
Because the Navy has demonstrated the ability to manage the recovery payload of the F/A-18C by increasing the aircraft’s maximum landing weight by 1,000 pounds for Bosnian operations, we attempted to determine whether the maximum landing weight could be further increased to compensate for future munitions. Navy program officials did not know whether the maximum landing weight could be increased further; however, the Hornet 2000 Technical Report states that the carrier landing design gross weight of the F/A-18C can be increased to 37,000 pounds with landing gear and other changes, thereby providing an additional 3,000 pounds of recovery payload. Adding this weight to the total carrier recovery payload shown in table 2.2 would result in a total recovery payload of 10,013 pounds for the F/A-18C. That amount of carrier recovery payload for the F/A-18C is greater than the 9,000 pounds of payload sought for the F/A-18E/F. The Navy is seeking to improve F/A-18E/F survivability compared to the current F/A-18C/D by reducing its detectability and the probability of its being destroyed. Although survivability improvements for the F/A-18E/F are planned, the F/A-18E/F was not justified to counter a particular military threat that could not be met with current F/A-18C/Ds or F/A-18C/Ds that will be enhanced by additional planned survivability features. In addition, the effectiveness of an F/A-18E/F survivability improvement is questionable. Moreover, the JSF represents an alternative, affordable next generation aircraft that is projected to surpass the survivability of the F/A-18E/F at less cost. In August 1993, we reported that the F/A-18E/F was not justified to counter a particular military threat that could not be met with current capabilities. In responding to our report, the Under Secretary of Defense for Acquisition disagreed with our conclusion that the F/A-18E/F decision was not threat based. 
He referred to the April 1993 “Report to Congress on Fixed-Wing Tactical Aviation Modernization,” which he stated included intelligence data on projected threats in the post-year 2000 period that require improvements in the survivability of tactical fixed-wing aircraft. He stated that these improvements were part of the process for approving the modification of the F/A-18C/D to the F/A-18E/F. We reviewed this report and found that although the study discussed future threats, it did so in terms of system-to-system engagements, not as part of a force package in which other assets are used to increase aircraft survivability. According to Navy officials, the F/A-18E/F will be operated as part of a force package—just as the F/A-18C/D currently operates. These aircraft will not operate alone as the stealthy F-22 and the Navy’s JSF are planned to do. (Chapter 4 discusses the JSF and its planned survivability features.) The absence of a threat-based justification for the E/F is also supported by a March 24, 1992, memorandum from the Vice Chairman of the Joint Chiefs of Staff to the Under Secretary of Defense for Acquisition. It said that the main consideration in the timing of buying the F/A-18E/F was not an emerging threat. This is consistent with statements contained in the May 1992 F/A-18E/F Cost and Operational Effectiveness Analysis Summary. According to the summary, the Navy’s current F/A-18 warfighting capability was expected to be adequate in dealing with the projected threat beyond the turn of the century. Further, the key components of potential threats had stabilized in response to East European political and economic shifts. Also, the Commonwealth of Independent States’ emphasis on development and deployment of advanced air, ground, and naval weapons had greatly declined, particularly the anti-air warfare threat. 
According to the May 1992 F/A-18E/F Acquisition Plan, the aircraft’s weapon system architecture was to be essentially the same as the F/A-18C/D Night Attack aircraft’s. An October 1995 F/A-18 program brief and a more recent Naval Intelligence study on strike warfare state that the F/A-18C is survivable against all current air-to-air threats. The October brief further states that the F/A-18C Night Strike Hornet (compared with previous F/A-18s) increased the exchange rate against the MiG-29 by a factor of 4, increased survivability against surface threats, and is 23 percent more effective in strike warfare. Additional improvements have subsequently been made or are planned for the F/A-18C/D to enhance its survivability. For example, according to Navy program documents, improvements were made to reduce its radar detectability. Although these improvements are classified and cannot be discussed in this report, Navy and contractor officials agreed that the radar detectability has been reduced. Other improvements to the F/A-18C/D include the following:

- The F404-GE-402 Enhanced Performance Engine provides increased combat performance and, therefore, increased survivability.
- The ALR-67(V)3 Advanced Special Warning Receiver and the ALE-47 Countermeasures Dispensing System (chaff and flares) will be installed on new F/A-18C/Ds to alert the aircrew of potential threats and automatically deploy countermeasures, thereby decreasing the probability of the aircraft being hit should it be fired on.
- Standoff weapons, such as the Joint Standoff Weapon (JSOW), the Standoff Land Attack Missile-Expanded Response, the improved Advanced Medium Range Air-to-Air Missile (AMRAAM), and the AIM-9X, to be installed on the F/A-18C/D will improve its standoff range from the threat and thus further improve its survivability.

The Navy listed reduced aircraft radar signature as an objective and key measure of aircraft survivability when discussing F/A-18E/F survivability improvements. 
Navy and McDonnell Douglas officials said they have significantly reduced the F/A-18E/F’s frontal radar signature compared to the C/D model. The specifics of how radar signature reduction is achieved are classified. However, according to Center for Naval Analyses and Navy officials, the F/A-18E/F’s reduced radar signature only helps it penetrate slightly deeper than the F/A-18C/D into an integrated defensive system before being detected. When Navy officials refer to the F/A-18E/F’s reduced frontal radar signature, they cite low observability improvements made to the aircraft structure. However, because the F/A-18E/F will carry weapons and fuel externally, the radar signature reduction derived from the structural design of the aircraft will be diminished. The need to carry weapons and fuel internally to maintain an aircraft’s low observability is consistent with low observability or stealthy aircraft designs, such as the F-117, the A-12, the A/F-X, the F-22, and the B-2, all designed to carry fuel and weapons internally. “While very beneficial in a one-on-one engagement, nose-on to the threat, treatments to enhance the survivability of a conventional aircraft by reducing the forward aspect observable level is not sufficient to successfully penetrate a typical threat environment. The long detection and engagement range of modern threat systems against the side sector of an Enhanced Conventional Aircraft will significantly decrease the likelihood of a successful mission.” “Further, the addition of external stores to enable an Enhanced Conventional Aircraft to accomplish a military objective, may well eliminate much of what is gained in reduced threat capability, even in the nose region.” This is further validated by the current JAST program commitment to designing its JSF to carry its weapons internally because carrying weapons externally does not meet the Navy’s reduced signature needs for first day survivability. 
The JAST office concluded that the treatment of external equipment, to limit its negative effect on radar signature reduction, would be expensive and would have a negative effect on aircraft performance, supportability, and deployability. In summary, the JAST office has concluded that if low observability is required, the most cost-effective and operationally beneficial solution appears to be carrying weapons and other equipment internally. In December 1995, the F/A-18E/F program office asked McDonnell Douglas to define the work necessary to develop simple, affordable, low-observable treatments for certain equipment that will be carried externally on the E/F aircraft. The program office stated that the E/F program has produced a low-observable aircraft, but that low-observable externally carried equipment and weapons were outside the scope of the E/F program. The program office stated that this equipment, when installed on the E/F with low-observable compatible weapons, would be necessary to yield a low-observable weapon system. In addition to the operational capability improvements discussed in the preceding chapter, the Navy also stated that the E/F (1) was needed to provide critically needed space for avionics growth and (2) with its two additional weapons stations, would be more lethal. However, our review indicates that the decline in avionics growth space has not occurred as predicted, and that weight limitations, problems when weapons are released from the aircraft, and the limited increase in weapons payload associated with the new weapons stations raise concerns about how much increased lethality the E/F will have. In justifying the need for the F/A-18E/F, the Navy stated that the additional space to be provided by the F/A-18E/F was critically needed because, by the mid-1990s, the F/A-18C/Ds would not have space to accommodate some additional new weapons and systems under development without removing an existing capability. 
However, as previously discussed, an increased threat is not driving decisions to add new systems. Furthermore, the growth space deficiency anticipated for the F/A-18C/D has not occurred as predicted. According to 1992 Navy predictions, by fiscal year 1996, the ongoing program to upgrade the F/A-18C/D’s avionics would result in an aircraft with only 0.2 cubic feet of space available for future growth. However, in 1995, McDonnell Douglas representatives indicated that the F/A-18C had at least 5.3 cubic feet of space available for system growth. This additional space is available from the following two sources:

- Replacing the F/A-18C/D’s ammunition drum with a linear linkless feed system would provide 4 cubic feet of additional space in the gun bay.
- The right leading edge extension on the F/A-18C, which is an extension of the frontal aspect of the wing, has 1.3 cubic feet of space available for growth.

Furthermore, indications are that technological advancements will result in additional avionics growth space. The effects of these advancements, which include such things as miniaturization, modularity, and consolidation, are evident in some upgraded avionics systems employed on the F/A-18C/D. We reviewed the changes scheduled for the F/A-18C/D between fiscal years 1992 and 1996 and identified seven upgrade replacement systems that would be used in the latest versions of the F/A-18C/D and the F/A-18E/F. We found that because of the reduced size of modern avionics systems, the new systems, in total, provided 3 cubic feet of additional space and reduced the total avionics systems’ weight by about 114 pounds. Table 3.1 shows the details of this calculation. The Navy also contends that further growth on the F/A-18C/D is not possible due to the lack of sufficient power and cooling capability. However, according to McDonnell Douglas engineering representatives, the F/A-18C/D’s power and cooling needs have not been validated through an actual test. 
Rather, the statements that the C/D has no more growth capability are based on analyses using estimated and outdated data. Additionally, the Hornet 2000 study suggested options to increase power and cooling capacity within the current space/volume of the baseline F/A-18 aircraft. To increase the aircraft’s power capacity, the report suggested a new generator system with more than a 30-percent increase in power and a monitored bus system capable of shedding selected loads when one generator becomes inoperative. To increase the F/A-18C/D’s cooling capacity, the Hornet 2000 report stated that the air cooling system could be modified to increase capacity by 47 percent. The F/A-18E/F is designed to have more payload capacity than current F/A-18C/Ds as a result of adding two new wing weapon stations—referred to as the outboard weapons stations. However, unless the current problems when weapons are released from the aircraft are resolved, the types and amounts of external weapons that the E/F can carry may be restricted. Also, while the E/F will provide a marginal increase in air-to-air capability, it will not increase its ability to carry the heavier air-to-ground weapons that are capable of hitting fixed targets and mobile hard targets and the heavier standoff weapons that will be used to increase aircraft survivability. As illustrated in figures 3.1 and 3.2, airframe modifications, such as larger geometrically shaped engine inlets and additional weapon stations, have reduced the critical distance between several F/A-18E/F weapon stations. A NAVAIR representative stated that it has been estimated that the distance between the inboard weapon stations and the engine inlet stations on the E/F has been reduced by about 5 inches compared to the C/D. 
The distance between the new outboard stations (stations 2 and 10) and the mid-board stations (stations 3 and 9) is smaller than that between the mid-board stations (stations 3 and 9) and the inboard stations (stations 4 and 8), 35 inches versus 46 inches, respectively. The space reduction adversely affects the E/F’s capabilities. For example, wind tunnel tests show that an external 480-gallon fuel tank or a MK-84 2,000-pound bomb, carried on the inboard station, will hit the side of the aircraft’s fuselage or make contact with other weapons when released. Additionally, according to the representative, the limited distance between the new outboard and mid-board stations, coupled with outboard pylons that are shorter and closer to the wing, will cause problems when releasing large, finned weapons, such as the High-Speed Anti-Radiation Missile (HARM). F/A-18E/F airframe changes have also increased adverse airflows that exacerbate these problems. Wind tunnel testing shows that the F/A-18E/F is experiencing increased yaw and pitch motion of its external equipment. The increased yaw motion is the result of increased air outflow at the nose of a weapon and increased inflow at the tail of a weapon, causing the tail of the weapon to make contact with the aircraft. Similarly, the increased pitching results from the air sweeping over the nose of a store in a downward direction while an upward airflow causes the tail of the store to make contact with the aircraft. The Navy and McDonnell Douglas are studying a number of airframe fixes to correct the airflow problem. They are also studying options that place tactical restrictions on weapon deployments. These options include reducing the number of weapons the E/F carries and reducing the speed at which the aircraft is flying when the weapons are released. Our analysis showed that the F/A-18E/F will provide a limited increase in payload over the C/D model. In the air-to-air role, as shown in table 3.2, the F/A-18E/F will have a two-missile advantage over the F/A-18C/D. 
The F/A-18E/F’s new outboard stations are limited to carrying weapons weighing no more than 1,150 pounds per station. In the air-to-ground role, this precludes the F/A-18E/F from carrying a number of heavy precision-guided munitions, such as the Harpoon, the Standoff Land Attack Missile, the Laser Guided MK-84, the Guided Bomb Unit-24, and the WALLEYE II, that weigh more than the weapon station weight limit. Consequently, because of these limitations, the F/A-18E/F will carry the same number of these heavier precision-guided munitions as the F/A-18C/D. The JAST program office is developing technology for a family of affordable next generation JSF aircraft for the Air Force, Marine Corps, and Navy. (See app. II for a discussion of JAST program objectives and approach.) The Navy plans to procure 300 JSFs and use them as a stand-alone, first-day survivable (stealthy) complement to the F/A-18E/F. The first Navy JSF aircraft is scheduled to be delivered in 2007. On the basis of contractor trade studies and a recent Naval Intelligence assessment, the JSF is projected to have an overall combat effectiveness greater than the F/A-18E/F’s. The JSF is also projected to have a lower unit flyaway cost than the E/F. Concept exploration and development trade studies from three major potential aircraft production contractors—Boeing Corporation; Lockheed Martin Corporation; and a consortium of McDonnell Douglas Aerospace, Northrop Grumman, and British Aerospace Corporations—indicated that an affordable family of stealthy strike aircraft could be built on a single production line with a high degree of parts and cost commonality. (See fig. 4.1 for the JAST concept.) According to the JAST Joint Initial Requirements Document, the recurring flyaway cost of the Navy variant will range from $33 million to $40 million (in fiscal year 1996 dollars), depending on which contractor design is chosen. 
The JAST office projects that the Navy’s JSF variant will have operational capabilities, especially range and survivability, that will be superior to the F/A-18E/F. It is too soon to determine the extent to which the JSF cost and performance goals will be achieved. The driving focus of JAST is affordability. Contractor studies indicate that JAST has the potential to reduce total life-cycle cost by approximately 40 percent. Life-cycle cost is made up of research and development costs, production costs, and operations and support costs. According to a McDonnell Douglas study, its JAST proposal would have a flyaway cost 14 percent lower than the F/A-18E/F. To achieve these goals, the contractor studies concluded that the family of aircraft would have to contain such features as a single, common engine; use of advanced avionics and exploitation of off-board sensors; advanced diagnostics to reduce supportability costs; maximum commonality, including a common fuselage for all service variants that could be built on a common production line; and affordable requirements.

According to the participating contractors and the JAST program office, tri-service commonality is the key factor in achieving JSF affordability goals, and if this commonality is to occur, the services must compromise on operational needs. The Navy’s JSF variant is expected to be the most costly of the three service variants, due in part to carrier suitability features and the greater operational capability in range and internal payload proposed for the Navy’s variant. Current unit recurring flyaway cost objectives for the Navy variant range between $33 million and $40 million (fiscal year 1996 dollars), based on a total buy of 2,816 aircraft for the three services. This compares to $53 million per unit recurring flyaway (fiscal year 1996 dollars) for the F/A-18E/F, based on total procurement of 660 E/Fs at 36 per year.
According to the JAST office’s Joint Initial Requirements Document, the JSF cost objectives are based on projected budget constraints and service needs. The JAST program office projects that significant life-cycle savings for JSF are achievable through implementation of new acquisition processes, technologies, manufacturing processes, and maintenance processes being developed as part of the JAST program. Depending on the degree of commonality between the service variants and the ability to implement other cost-saving measures, the JAST office projects the total life-cycle cost could be as much as 55-percent less than if it used traditional acquisition and production processes. The participating contractors presented the results of their concept development studies to the JAST office and the Under Secretary of Defense (Acquisition and Technology) in August 1995. The presentations outlined the latest design capabilities and projected costs for each of the services’ JSF designs.

The JSF is expected to have an overall combat effectiveness greater than any projected threat and greater than the F/A-18E/F. The Navy’s JSF variant is also expected to have longer ranges than the F/A-18E/F to attack high-value targets, such as command and control bunkers, without using external tanks or tanking. Unlike the F/A-18E/F, which will carry all of its weapons externally, the Navy’s JSF variant will carry at least two air-to-ground and two air-to-air weapons internally. By carrying its weapons internally, the JSF will maximize its stealthiness and thus increase its survivability in the high-threat early stages of a conflict. The Navy expects that its JSF variant will have the capability to go into high-threat environments without accompanying electronic warfare support aircraft in the first day or early phase of a conflict and be survivable.
For example, the JSF would have the capability to attack these high-threat targets without jamming support from EA-6B aircraft, which the F/A-18E/F would need to be survivable against integrated air defense systems and sophisticated aircraft that would still be operating during the early stages of a conflict. Combat range improvement was a primary objective of the F/A-18E/F program. JAST program contractor studies indicated that the Navy variant would have significantly greater range than the F/A-18E/F using internal fuel only and even greater range after the enemy threat is reduced and the aircraft can use external fuel tanks.

The potential cost of the F/A-18E/F aircraft has been a source of debate among the Congress, DOD, and the Navy for many years, starting before the program was formally approved. Our review indicated that the Navy’s cost estimates to procure the F/A-18E/F are still questionable. The $43.6 million (fiscal year 1996 dollars) unit recurring flyaway cost estimate for the F/A-18E/F is understated. The estimate is based on a 1,000-aircraft total buy that is overstated by at least one-third because the Marine Corps does not plan to buy the E/F and on an annual production rate that the Congress has stated is probably not possible due to funding limitations. Reducing the total buy and annual production rate will increase the unit recurring flyaway cost of the F/A-18E/F from $43.6 million to $53.2 million (fiscal year 1996 dollars).
In May 1992, the Office of the Secretary of Defense approved the Navy’s request that the F/A-18E/F be approved as a Milestone IV, Major Modification program, even though some Defense Acquisition Board participants had the following concerns about the program: E/F development cost projections had increased from $4.5 billion to $5.8 billion (then-year dollars); the unit cost of the E/F was estimated to be 65 percent greater than the F/A-18C/D unit cost; the projected development cost of $5.8 billion (then-year dollars) was underfunded by as much as $1 billion; the cost of E/F pre-planned product improvements was not included in either development or production estimates; and the E/F was considered an upgrade to the F/A-18C/D rather than a new start, even though the E/F airframe was projected to be only 15-percent common to the C/D.

In evaluating the fiscal year 1993 DOD budget request, the Congress addressed its F/A-18E/F concerns and established a number of fiscal limits on the program. The $5.783 billion (fiscal year 1996 dollars)/$5.803 billion (then-year dollars) F/A-18E/F development estimate, presented to the Defense Acquisition Board, was established as a funding ceiling for development costs. Also, the Congress stated that F/A-18E/F unit flyaway costs should be no greater than 125 percent of the F/A-18C/D’s unit flyaway cost. Congressional concern about E/F unit cost projections was based in part on the high annual production rate that the Navy used in arriving at its per unit procurement estimates. The Navy projected that beginning in 2007, and continuing through 2015, it would procure 72 F/A-18E/Fs per year. The Congress believed this was unrealistic and directed that DOD calculate a range of unit costs based on production rates of 18, 36, and 54 aircraft per year.
According to program officials, they are not required to report revised cost estimates based on the change to production rates until an early operational assessment is completed in the spring of 1996. DOD’s F/A-18E/F unit recurring flyaway cost estimate is $43.6 million (fiscal year 1996 dollars). This cost is understated because the total F/A-18E/F procurement levels and annual production rates that are essential for predicting acquisition unit costs are overstated and contract estimates for initial production aircraft are higher than projected.

In calculating the F/A-18E/F unit acquisition costs, the Navy assumed it would procure 1,000 aircraft from 1997 through 2015—approximately 660 for the Navy and 340 for the Marine Corps—at a high annual production rate of 72 aircraft. However, the Marine Corps does not plan to purchase any F/A-18E/Fs, and indications are that once the Navy’s JSF variant becomes available, fewer F/A-18E/Fs will be procured annually. The Marine Corps Aviation Plan and the Marine Corps Deputy Chief of Staff for Aviation, in a 1994 memorandum and in 1995 testimony before the Congress, stated that the Corps plans to “neck down” to one aircraft in the future. It plans to replace all of its current F/A-18C/D and AV-8B aircraft with the Advanced Short-Takeoff and Vertical-Landing aircraft now under management of the JAST program. Because the Marine Corps does not plan to procure any F/A-18E/Fs (data from a Navy program cost analysis report and discussions with NAVAIR cost officials, confirmed by the Marine Corps, identify 340 aircraft as the programmed Marine Corps buy), the total F/A-18E/F buy would be reduced from 1,000 to 660 aircraft. Fewer F/A-18E/Fs are also likely to be procured annually once the JSF, projected to be more capable and less costly than the E/F, becomes available around 2007. Additionally, the E/F unit cost is affected by a lower-than-projected annual production rate.
The Navy’s unit cost calculations assumed an annual peak production rate of 72 aircraft for 8 years, representing over half the production run. The Congress, in its fiscal year 1993 Authorization Conference Report, questioned whether an annual production rate of 72 aircraft was realistic and directed the Navy to provide cost estimates for smaller production quantities (18, 36, and 54) with the results of the F/A-18E/F’s initial operational assessment, which is scheduled for the spring of 1996. However, data show that the E/F production rate is expected to be lowered to only 36 F/A-18E/Fs annually rather than 72. Historically, reductions in annual production rates have increased the per unit procurement cost of aircraft.

The Navy has not provided us the increased unit cost based on reduced annual production rates. Therefore, we approximated what the unit cost increase would be based on a total procurement of 660 rather than 1,000 aircraft and an annual production rate of 36 rather than 72 aircraft. Using the A/F-X cost model to predict the effect of total buy and annual production rate changes on recurring flyaway cost, we calculated that the F/A-18E/F unit recurring flyaway cost would be $53.2 million (fiscal year 1996 dollars) rather than the $43.6 million (fiscal year 1996 dollars) estimated by DOD. The $53.2 million unit recurring flyaway cost indicates that the E/F would have a unit recurring flyaway cost that is 189 percent of the F/A-18C/D’s ($53 million compared to $28 million). As shown in appendix I, this cost difference in unit recurring flyaway would result in savings of almost $17 billion (fiscal year 1996 dollars), or over $24 billion when expressed in then-year dollars, if the Navy were to procure 660 F/A-18C/Ds rather than 660 F/A-18E/Fs.
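The cost comparison above can be restated directly from the report's own figures. The sketch below is a minimal illustration of that arithmetic only; it is not the A/F-X cost model, and the dollar amounts (fiscal year 1996 dollars) are taken from the report rather than derived here.

```python
# Illustrative restatement of the report's unit-cost comparison and savings
# figure. Dollar amounts come from the report (fiscal year 1996 dollars);
# this is simple arithmetic, NOT the A/F-X cost model used in the analysis.

EF_UNIT_COST = 53.2   # $M per E/F at 660 aircraft and 36 per year (recalculated estimate)
CD_UNIT_COST = 28.0   # $M per C/D unit recurring flyaway
TOTAL_BUY = 660       # aircraft procured under either alternative

def savings_billions(costly: float, cheaper: float, quantity: int) -> float:
    """Recurring flyaway savings, in billions, from buying the cheaper aircraft."""
    return (costly - cheaper) * quantity / 1000.0

# E/F unit cost as a share of C/D unit cost (report: 189 percent, $53M vs. $28M)
print(f"E/F share of C/D unit cost: {53.0 / CD_UNIT_COST:.0%}")

# Savings from 660 C/Ds instead of 660 E/Fs (report: almost $17 billion)
print(f"Savings: ${savings_billions(EF_UNIT_COST, CD_UNIT_COST, TOTAL_BUY):.1f} billion")
```

Running the sketch reproduces the two figures the report cites: a unit cost ratio of 189 percent and recurring flyaway savings of about $16.6 billion, i.e., "almost $17 billion."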
Our estimated savings do not include the cost of C/D upgrades, such as the larger 480-gallon external fuel tanks for improved range or the strengthened landing gear to increase carrier recovery payload. However, our estimated savings are conservative because they also do not include planned E/F upgrades and are based on recurring flyaway costs, which do not include the other items that make up total procurement costs. (See app. I for a discussion of how unit costs are computed.) Additionally, our estimated savings do not include savings that would accrue from having fewer F/A-18 type models in the inventory. These cost benefits would result from having common aircraft spare parts, simplified technical specifications, and reduced support equipment variations, as well as reductions in aircrew and maintenance training requirements. Also, there are other indications that F/A-18E/F procurement costs could increase further. According to contractor estimates, the cost of low-rate initial production (LRIP) for the E/F is currently projected to be 8.5-percent greater than estimates provided to the Congress.

DOD faces funding challenges as it attempts to modernize its tactical aircraft fleet through the Air Force’s F-22 program, the Navy’s F/A-18E/F program, and the tri-service JSF program. Various DOD officials have recognized that funding for each of these programs may not be forthcoming. In that event, DOD will be forced to make some funding trade-offs among these three competing aircraft programs. In prior reports, we offered alternative procurement strategies for the Air Force’s F-22 program. Regarding the Navy’s F/A-18E/F program, DOD’s next major decision is whether to proceed into production. The Navy has spent about $3.75 billion (then-year dollars) on the E/F engineering and manufacturing development effort and plans to spend $57.31 billion (fiscal year 1996 dollars)/$83.35 billion (then-year dollars) to procure 1,000 aircraft.
This report demonstrates that the justification for the E/F is not as evident as perhaps it was when the program was approved in 1992 because the E/F was justified, in large part, on projected operational deficiencies in the C/D aircraft that have not materialized. This report also demonstrates that proceeding with the E/F program is not the most cost-effective approach to modernizing the Navy’s tactical aircraft fleet. Therefore, the information provided in this report should be fully considered before a production decision is made on the E/F. Such consideration should take into account the following.

Operational deficiencies in the F/A-18C/D cited by the Navy in justifying the need for the F/A-18E/F—range, carrier recovery payload, survivability, and system growth—either have not materialized as projected or can be corrected with nonstructural changes to the F/A-18C/D. Furthermore, E/F operational capabilities will be only marginally improved over the C/D model. The E/F’s increased range is achieved at the expense of combat effectiveness, and increased F/A-18E/F payload capability has created weapons release problems that, if not resolved, will reduce the F/A-18E/F’s payload capability compared to the F/A-18C/D.

A more cost-effective approach to modernizing the Navy’s tactical aircraft fleet exists. In the short term, the Navy could continue to procure the F/A-18C/D aircraft. In the mid-term, upgrades could be made to further improve the C/D’s operational capabilities. These upgrades could include using the larger 480-gallon external fuel tanks to achieve more range; modifying landing gear to increase carrier recovery payload; using advanced avionics that require less space, cooling, and power; and incorporating add-on survivability features. For the long term, the Navy is considering JSF as a complement to the F/A-18E/F.
DOD is predicting that the next generation strike fighter will provide more operational capability at less cost than the E/F. Therefore, the next generation fighter should be considered as an alternative to the F/A-18E/F. The F/A-18E/F will cost more to procure than DOD currently projects. The $43.6 million (fiscal year 1996 dollars) unit recurring flyaway cost estimate is based on a total buy of 1,000 aircraft—660 for the Navy and 340 for the Marine Corps—at a high annual production rate of 72 aircraft per year. However, the Marine Corps does not plan to buy the F/A-18E/F aircraft, and the Congress has stated that an annual production rate of 72 aircraft is not realistic. Reducing the number of aircraft to be procured and the annual production rate to more realistic levels would reduce the total program cost but would increase the unit recurring flyaway cost of the aircraft to about $53 million (fiscal year 1996 dollars).

In a related report on the F/A-18E/F, we stated that the Navy’s plan to procure the E/F appears to contradict the national military strategy, which cautions against making major new investments unless there is “substantial payoff.” We pointed out that Navy data show both the C/D and E/F are expected to hit the same ground targets with the same weapons. Pursuing other alternatives, rather than proceeding with the F/A-18E/F program, would save billions of dollars. The Navy could continue procuring its less expensive F/A-18C/D aircraft (the fiscal year 1996 unit recurring flyaway cost of the F/A-18C/D is $28 million, compared to $53 million for the F/A-18E/F) only to the level needed to sustain inventories until the next generation strike fighter becomes available. Furthermore, reliance on the more affordable next generation strike fighter as the Navy’s primary tactical aircraft would help keep that aircraft affordable by increasing the total buy.
Given the cost and the marginal improvements in operational capabilities that the F/A-18E/F would provide, we recommend that the Secretary of Defense reconsider the decision to produce the F/A-18E/F aircraft and, instead, consider procuring additional F/A-18C/Ds. The number of F/A-18C/Ds that the Navy would ultimately need to procure would depend upon when the next generation strike fighter achieves operational capability and the number of those aircraft the Navy decides to buy.

In its comments on a draft of this report, DOD said that it is convinced that the fundamental reasons for developing the F/A-18E/F remain valid. Since DOD provided no data or information that we had not acquired and analyzed during our review, we have not changed our position that procuring the E/F is not the most cost-effective approach to modernizing the Navy’s tactical aircraft fleet. We recognize that the E/F will provide some improvements over the C/D. However, the C/D’s current capabilities are adequate to accomplish its assigned missions. Given the marginal nature of those improvements and the E/F’s projected cost compared to the alternatives discussed in this report, we believe our recommendation represents sound fiscal planning: DOD should reconsider its decision to produce the F/A-18E/F and, instead, consider procuring additional C/D aircraft until the next generation strike fighter becomes operationally available. We formulated our position within the context of current budget constraints, the decreased military threat environment, and statements by DOD officials, such as the Chairman of the Joint Chiefs of Staff, that DOD’s current plans to upgrade its tactical aircraft fleet will not be affordable. Additionally, as we pointed out in our report, the national military strategy directs that major new investments should have substantial payoff. We do not believe that procuring the F/A-18E/F would meet this test.
DOD’s entire comments and our evaluation are included in appendix III. DOD requested funding in its fiscal year 1997 budget request to begin procurement of the F/A-18E/F. The Congress may wish to direct that no funds be obligated for procurement of the F/A-18E/F until it has fully examined the alternatives to the E/F program. In that regard, the House National Defense Authorization Act for Fiscal Year 1997 (H.R. 3230, sec. 220) directed such an examination, and a DOD deep strike study is expected to be completed by the end of 1996. Delaying the authority to begin procuring the E/F would allow DOD to complete its study and give the Congress time to assess the results of the DOD study and the information in this report as it decides whether DOD should be provided funding to proceed with the F/A-18E/F program.
GAO reviewed the Navy's plan to procure the F/A-18E/F aircraft, focusing on: (1) whether operational deficiencies in the F/A-18C/D cited by the Navy to justify the need for the F/A-18E/F have materialized and, if they have, the extent to which the F/A-18E/F would correct them; (2) whether the F/A-18E/F will provide an appreciable increase in operational capability over the F/A-18C/D; and (3) the reliability of the cost estimates for the F/A-18E/F and a comparison of those estimates with the costs of potential alternatives. GAO found that: (1) the F/A-18C/D could achieve strike ranges greater than required by the F/A-18E/F system specifications; (2) F/A-18C/D aircraft in service in Bosnian operations have achieved a carrier recovery payload capacity greater than the Navy's predicted carrier recovery payload capacity; (3) while the F/A-18E/F is predicted to have improved survivability over the F/A-18C/D, the F/A-18E/F was not justified on the basis that it was needed to counter a particular military threat that could not be met with current capabilities, and planned F/A-18E/F survivability might be better attained at less cost with the next-generation strike fighter; (4) despite the Navy's prediction, the F/A-18C/D has the additional space required for new avionics systems; (5) increased F/A-18E/F payload capability may not be realized until airflow problems are corrected; (6) the next-generation Joint Strike Fighter is projected to cost less per aircraft and be more capable than the F/A-18E/F; (7) reducing the total number of F/A-18E/F aircraft to be bought and the annual production rate to levels that are more realistic than the Navy estimated will result in the F/A-18E/F costing about $9.6 million more per aircraft than originally estimated; and (8) the Navy would save $17 billion in recurring flyaway costs if it procured F/A-18C/D aircraft rather than F/A-18E/F aircraft.
According to IRS records from its master files on individual and business returns, IRS annually abated, on average, about 10 million tax, penalty, and interest assessments, totaling nearly $30 billion, in fiscal years 1995 through 1998. In fiscal year 1998, IRS abated 11.3 million assessments—3.1 million tax, 4.7 million penalty, and 3.5 million interest assessments. These abatements totaled about $22.2 billion in tax, $4.8 billion in penalty, and $2.1 billion in interest assessments.

An assessment is a formal bookkeeping entry that IRS makes to record the amount of tax, penalty, or interest charged to a taxpayer’s account. An assessment establishes not only the taxpayer’s liability for amounts due and unpaid, but also IRS’ right to collect. Taxpayers essentially assess their taxes owed when they file their tax returns. IRS may assess additional taxes owed, as well as penalty and interest amounts, through an enforcement program. Taxpayers also may file an amended return or otherwise notify IRS of an error, which may lead to additional assessments.

Section 6404 of the Internal Revenue Code (as well as various other sections) authorizes IRS to abate an assessment under certain conditions. For example, IRS abates an erroneous assessment, which can be caused by either IRS or the taxpayer. A taxpayer may make an error on the original tax return, such as not claiming deductions. Or, IRS may assess incorrect tax amounts as a result of errors made when matching income reported by taxpayers with income reported by third parties (such as employers) on payments made to the taxpayers. IRS abates a tax assessment of the correct amount for various reasons. For example, as a result of an audit, IRS may correctly assess an additional tax amount for one tax year, but a taxpayer can carry back net operating losses incurred in other tax years to reduce the additional tax liability in that prior tax year.
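The fiscal year 1998 components cited above can be summed as a quick consistency check. The component figures are the report's own; the totals below are simply our arithmetic:

```python
# Consistency check of the report's fiscal year 1998 abatement figures.
# Component numbers come from the report; the totals are plain arithmetic.

counts_millions = {"tax": 3.1, "penalty": 4.7, "interest": 3.5}
amounts_billions = {"tax": 22.2, "penalty": 4.8, "interest": 2.1}

total_count = round(sum(counts_millions.values()), 1)    # assessments, millions
total_amount = round(sum(amounts_billions.values()), 1)  # dollars, billions

print(f"FY 1998 abatements: {total_count} million assessments, ${total_amount} billion")
```

The components sum to the 11.3 million assessments the report states for fiscal year 1998 and to about $29.1 billion, consistent with the "nearly $30 billion" annual average.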
Also, IRS abates an unpaid assessment that involved the correct amount of tax owed but that was made after the time period allowed for assessing additional tax liabilities. Further, IRS abates multiple assessments of the correct amount after that amount is paid. In these cases, IRS assesses multiple taxpayers (such as partners or officers in a business) for the amount owed by the business, knowing that the extra assessments are to be abated after full payment is received. Whenever IRS abates a tax assessment, any related penalty or interest assessments are also abated. Further, IRS abates certain penalties when a taxpayer provides a reasonable cause for failing to meet a responsibility, such as not paying a tax assessment or not filing a return on time. Reasonable causes can include illness or other nonfinancial hardships and reliance on incorrect written advice from IRS on technical issues. IRS also abates certain interest assessments that accrue because of an unreasonable error or delay by IRS in, for example, assigning staff to the case, sending notices to taxpayers, transferring a case to another IRS office, or performing other procedural or managerial actions. In addition, IRS abates interest that accrues because of delays in filing income tax returns and paying taxes caused by a natural disaster, for taxpayers living in a presidentially declared disaster area.

IRS’ interactions with taxpayers on abatements and other types of adjustments to taxpayers’ accounts usually occur through IRS’ 10 service centers or 33 district offices. Major activities at the service centers include processing tax returns and related tax schedules from taxpayers and information returns from third parties that report payments, such as wages and interest. In addition, service centers provide services to taxpayers, such as responding to taxpayer inquiries, as well as perform enforcement functions, such as audits and collection, usually through correspondence or the telephone.
District offices also offer services to taxpayers as well as perform enforcement functions. IRS staff at the district offices are more likely to interact with taxpayers through face-to-face meetings because the issues tend to be more complex or otherwise require more explanation.

To describe IRS’ process for making abatements from initiation through final review in selected IRS locations, we talked to IRS National Office work groups that could make abatements and the IRS office that administers penalties. We reviewed the Internal Revenue Manual sections and other IRS documents that govern the abatement process. Our visits to two service centers and two district offices, as discussed below, also helped us describe the process. Specifically, we visited an IRS service center and district office in the Kansas-Missouri area and a service center and district office in California. We chose these four locations in consultation with your office. We were able to do more work in some of the four locations, depending on which ones we visited first, the availability of data, and the responsiveness of IRS staff within the time frame of our review. Given time constraints, we did not visit more service centers or district offices. As a result, we do not know how similar or different the process was at the other 8 service centers and 31 district offices. Nor did we attempt to determine whether (1) any differences in the abatement process at the two service centers and two district offices produced significant effects, particularly for similarly situated taxpayers; (2) IRS staff followed the process and related criteria in making abatement decisions; (3) the process was appropriate and adequately controlled; or (4) IRS should be collecting more quantitative data on its abatement process. During each visit, we interviewed officials from IRS work groups that made abatements. We identified these groups from information provided by IRS’ National Office and the four locations we visited.
To confirm our understanding of the abatement process, we wrote summaries for each location and received comments on these summaries from officials at each location as well as IRS’ National Office. The summaries incorporating IRS’ comments can be found in appendixes I-IV. To describe IRS’ efforts to improve the abatement process, we interviewed officials in IRS’ National Office as well as selected staff in field offices who had worked on a task force or study that addressed abatements in some way. We also collected any available studies. As with our work on the abatement process, we asked the responsible IRS officials to comment on our summary of these efforts.

We performed our audit at IRS headquarters offices in Washington, D.C.; the Kansas City Service Center in Kansas City, MO; the Fresno Service Center in Fresno, CA; the Kansas-Missouri District Office in St. Louis, MO; and the Northern California District Office in Oakland, CA. Our work was done between February and June 1999 in accordance with generally accepted government auditing standards. On June 22, 1999, we met with representatives of the IRS Commissioner from the Customer Service and Examination Divisions as well as the Taxpayer Advocate and Legislative Affairs Offices to obtain their comments on a draft of this report, which are discussed near the end of this letter.

At the two service centers and two district offices we visited, IRS’ abatement process depended on the type, complexity, and source of the assessment being abated. The process varied in terms of (1) how abatements are initiated, (2) which IRS work groups make abatement decisions, (3) what level of staff makes the decisions, (4) what tools guide the decisions, and (5) how IRS reviews quality. We did not attempt to evaluate whether any variations in the process affected abatement decisions. Nor do we know how similar or different the process is at the other 8 service centers and 31 district offices.
IRS did not have quantitative data on the details of the abatement process, even though IRS’ master files have data on the overall number and amount of abatements. An abatement is just one type of adjustment made to taxpayer accounts. IRS staff also adjust accounts to reflect the dates and amounts of various types of assessments, payments, and refunds. IRS cannot extract quantitative data about the abatement process from data on all types of adjustments. For example, IRS did not have data on the number of abatements made by type of IRS group and staff or reviewed by IRS supervisors. Determining the costs and benefits of collecting such data on the abatement process was beyond the scope of this report.

The two service centers and two district offices we visited did not track how abatements were initiated. IRS officials, however, identified three basic ways of initiating abatements. First, taxpayers can initiate a request (also known as a claim) for an abatement by filing an amended tax return (e.g., Form 1040-X for individuals and Form 1120-X for corporations); by filing an IRS Form 843, Claim for Refund and Request for Abatement; by writing a letter; or by making a phone call. Taxpayers would generally initiate these requests to correct errors on the original return they filed or to seek relief from penalties when they have reasonable cause for failing to comply with tax requirements. Second, taxpayers can request an abatement of an additional assessment generated by an IRS enforcement program. In these cases, the taxpayer would be responding to a notice about the assessment, such as a collection notice. These taxes might have been assessed because the taxpayer had not responded adequately to notices about the recommended assessment of additional tax amounts. For example, an audit may recommend additional assessments that the taxpayer did not challenge when the audit closed. The taxpayer might later decide to challenge the assessment and ask IRS for an abatement.
Similarly, after receiving a notice of additional assessments, taxpayers may claim a reasonable cause for not timely paying the correct tax amount and ask IRS to abate certain types of penalties. Third, in some cases, IRS staff can initiate the abatement. For example, an IRS auditor might find evidence that a taxpayer overstated the tax liability on the original tax return. This evidence could lead to an abatement, depending on the results from the rest of the audit. Various IRS work groups at the two service centers and two district offices we visited had the authority to abate assessments. According to IRS officials, the type, complexity, and source of the assessment being abated determine which IRS work group makes the abatement decision. Using these factors as well as IRS routing criteria, a work group that initially receives a requested abatement is to determine whether it should make the abatement decision. If the determination is to route the abatement to another work group, the abatement process to be followed is to be guided by the general IRS criteria for the type of abatement being reviewed and the specific review process of that group. Each work group can abate different types of tax, penalty, and interest issues. Table 1 shows examples of the most common types or sources of abatements made by work groups at the two service centers, as identified by IRS officials. As table 1 shows, each service center organized itself somewhat differently. As a result, each service center had branches that the other center did not have. For example, Fresno had a Joint Compliance Branch that did not exist in Kansas City. This branch dealt with, among other things, abatement requests (known as category A claims) that were complex, sensitive, or involved types of taxpayers prone to noncompliance. Kansas City usually handled category A requests in its Examination Branch. 
Also, Kansas City allowed clerical staff in its Document Perfection Branch to abate simple requests made on amended returns. Fresno usually handled this type of request in its Adjustments Branch. The two district offices generally relied on three divisions to abate assessments. Table 2 shows examples of the types or sources of abatements made by the divisions, as identified by officials at the district offices. As table 2 shows, each of the three divisions at both districts made similar types of abatements. Although none of the divisions tracked the number of abatements, IRS officials indicated that the Collection and Customer Service Divisions made the most abatement decisions, many of which involved penalties. The officials said that in the Examination Division, the abatements usually involved tax assessments related to a prior audit, which could also involve penalties, or some form of taxpayer claim. In addition, at the two service centers and two district offices, the Office of the Taxpayer Advocate had authority to make certain types of abatements. However, officials at most locations we visited said that the Advocate’s Office usually referred requests for abatements to other work groups (such as the Collection Division) and then tracked actions taken to help ensure the proper and timely resolution of a taxpayer’s problem. Because these work groups usually made any related abatements, the Office of the Taxpayer Advocate would make few abatements compared to the other work groups. Various types and grades of IRS staff, ranging from clerks to mid-level staff, can make abatement decisions, depending on the type, complexity, and source of the assessment being abated. Along with other duties, these IRS staff handled many types of adjustments to taxpayers’ accounts other than abatements. None of the work groups tracked how many of these adjustments involved abatements. Both service centers had a wide range of graded staff who made abatement decisions. 
The staff ranged from federal pay grades GS-3 to GS-9 in one service center and GS-4 to GS-12 in the other. According to officials at the two service centers, staff known as customer service representatives and tax examiner assistants made most abatement decisions. In one service center, grades of customer service representatives ranged from GS-5 to GS-8 and tax examiner assistants ranged from GS-4 to GS-8. In the other service center, the grades were GS-6 to GS-9 and GS-6 to GS-7, respectively. The two district offices also had various staff make abatement decisions in the three divisions. These staff ranged from federal pay grades GS-5 to GS-13. According to officials at the two district offices, staff known as customer service representatives (GS-5 to GS-9) and revenue officers (GS-7 to GS-12) made the majority of the abatements. The two districts we visited did not differ very much in who made abatement decisions. According to information provided by officials at both service centers and both district offices, staff making abatement decisions are to be guided by such tools as training, information provided by the taxpayer or IRS' computers, criteria in IRS manuals, and supervisory involvement. Descriptions of these four tools follow. IRS is to provide training on the various duties, including abatements, assigned to staff. The training can include such things as basic tax laws, IRS forms, adjustments in general, amended returns, and other generic tax issues. Training also can cover specific types of adjustments, including abatements to be made to taxpayer accounts. The training is to include classroom and on-the-job instruction. Staff can use various types of information provided by taxpayers or through IRS computers to help make abatement decisions. For example, taxpayers can provide additional information that they did not provide during an audit when they ask IRS to abate assessments made in that audit. 
Or, staff can use information in taxpayers’ accounts to confirm taxpayer actions, such as the date a return was filed or a payment was made. To the extent that the computerized information is sufficient, IRS staff could use it to verify oral justifications from taxpayers for making the abatement. If the information is not sufficient, IRS may need to request additional support from the taxpayer. Criteria for making abatement decisions vary with the type and amount of the abatement and complexity of the issues. For example, the criteria for decisions about whether to abate a tax assessment would depend on the tax laws and regulations governing the specific tax issue (such as a dependency exemption) and the justification provided by the taxpayer in requesting the abatement. Also, for each type of penalty, IRS has various dollar tolerances that dictate the type of justification needed from the taxpayer to grant the abatement and whether that justification needs to be documented by the taxpayer rather than be provided orally. Because IRS labels the criteria and dollar tolerances as “official use only,” which means the information is sensitive, we cannot disclose the specific criteria or tolerances. Dollar tolerances also can affect whether supervisors are to review proposed decisions by staff to abate assessments. IRS did not have data on how many of the 11.3 million abatements in fiscal year 1998 had been reviewed by supervisors before the decision was finalized. However, supervisors are only required to review a few types of abatement decisions before they are finalized, such as those on certain types of penalty and interest abatements as well as on large-dollar abatements. Also, officials at some of the locations we visited said that few abatement decisions would be reviewed by supervisors before being finalized because of the large number of adjustments, including abatements, to be processed. 
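The tolerance-driven handling described above can be sketched in code. Because IRS labels the actual criteria and dollar tolerances "official use only," every threshold and category below is an invented placeholder, not a real IRS figure; the sketch only illustrates the general pattern in which the size of a requested penalty abatement determines the justification and review required.

```python
# Hypothetical sketch of tolerance-driven penalty abatement handling.
# The real IRS tolerances are "official use only," so the dollar
# thresholds below are invented placeholders, not actual criteria.
ORAL_TESTIMONY_LIMIT = 500       # hypothetical: oral justification suffices at or below this
SUPERVISOR_REVIEW_LIMIT = 5_000  # hypothetical: supervisory review required above this

def abatement_requirements(penalty_amount: float) -> dict:
    """Return what a (hypothetical) abatement request of this size would require."""
    return {
        "written_documentation": penalty_amount > ORAL_TESTIMONY_LIMIT,
        "supervisory_review": penalty_amount > SUPERVISOR_REVIEW_LIMIT,
    }

print(abatement_requirements(200))    # small request: oral testimony, no review
print(abatement_requirements(8_000))  # large request: documentation and review
```

The point of the sketch is only that the decision rules are mechanical once the tolerances are known, which is why the same request can be handled very differently depending on its dollar amount.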
The two service centers and two district offices each have a separate staff to review the quality of decisions, including those on abatements, after they are made. None of the locations we visited tracked the number of abatement decisions reviewed for quality as either a percentage of all adjustments reviewed or of all abatement decisions made. The officials we interviewed at all locations could not say what percentage of abatements might be subjected to such quality reviews. IRS had multiple review programs because each work group has a different set of quality standards. Each program is to use the quality standards established for the types of adjustments made in a specific work group. For example, standards for a collection work group would differ from standards for an examination group. As a result, in both service centers, the Service Center Collection Quality System is to be used to review the quality of the work in the Collection Branch. Across the two district offices, IRS also had a review program for each division (such as the Examination Division). Officials at both service centers and both district offices also told us that these programs usually are to select cases for review on the basis of random sampling, and the results are to be used to identify systemic quality problems in an IRS work group. Due to our time constraints, we did not evaluate these IRS quality review programs. IRS’ efforts to improve the abatement process have involved task force studies that IRS initiated in response to specific concerns. For example, since 1997, task forces have studied issues, such as penalty administration and taxpayer treatment, because of concerns about taxpayer burden and equity. Although they did not focus on abatements, the studies have produced some proposals that would affect the abatement process, such as the documentation required to support abatements. 
IRS officials said they initiated these task forces because a 1997 report on reinventing service at IRS called for IRS to, among other things, promote fair and consistent treatment of taxpayers in penalty administration. The report also called for IRS to comprehensively review all penalties and report to Congress on its findings as well as needed legislative changes. The report further recommended that the overall process for handling penalties be streamlined. For example, the report mentioned that one way to streamline would be for IRS employees to have expanded authority to abate certain types of penalties on the basis of oral requests from taxpayers. If done, this action could reduce the burden on taxpayers by requiring less documentation or supervisory approval. In September 1998, IRS released a study by its penalty task force, as recommended in the 1997 report. IRS reported that for fiscal year 1996, 34 million penalties totaling $13.2 billion were assessed, of which 12 percent and 43 percent, respectively, were abated. Several recommendations focused on the need for (1) consistent treatment of taxpayers; (2) viable management of information systems; (3) clearer penalty policy; and (4) improved penalty administration to allow, among other things, IRS telephone assistors to make abatements for larger amounts on the basis of oral testimony from taxpayers. IRS officials said that legislative changes would be necessary to implement some of the recommendations. These officials also said they are considering the first three recommendations and are planning to expand authority for abating certain penalties for reasonable cause on the basis of oral testimony from taxpayers. Another task force on taxpayer equity has been studying various IRS programs and issues and had finished several reports as of June 1999. 
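The fiscal year 1996 penalty figures cited above imply the following rough counts and amounts. This is our back-of-the-envelope arithmetic, not figures reported by the task force:

```python
# Back-of-the-envelope check of the fiscal year 1996 figures cited in
# the penalty task force study: 34 million penalties assessed totaling
# $13.2 billion, with 12 percent of the count and 43 percent of the
# dollar amount abated.
penalties_assessed = 34_000_000
dollars_assessed = 13.2e9

abated_count = penalties_assessed * 0.12    # share of penalty count abated
abated_dollars = dollars_assessed * 0.43    # share of penalty dollars abated

print(f"Penalties abated: about {abated_count / 1e6:.1f} million")
print(f"Dollars abated:   about ${abated_dollars / 1e9:.1f} billion")
```

Note that the abated share of dollars (43 percent) is far larger than the abated share of penalties (12 percent), which is consistent with larger-dollar penalties being abated disproportionately often.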
These programs and issues, for example, address taxpayers that do not file required tax returns, make estimated tax payments on a quarterly basis, or make required federal tax deposits. Also included are studies of taxpayers that ask IRS to reconsider an assessment made in a previous audit. At least three of this task force’s reports have made proposals that could affect abatements. These proposals involve (1) easing interest abatement criteria through legislative change, (2) encouraging oral agreements on abatements between IRS and taxpayers over the phone to minimize documentation, and (3) clarifying a penalty abatement policy statement. As for other efforts, an IRS study of an automated program to create substitute returns for apparent nonfilers has suggested several steps to deal with unproductive or erroneous assessments that create additional work later, including abating these assessments. Further, IRS is testing an automated system to help make abatement decisions related to reasonable cause. IRS is also developing ways to help small business taxpayers meet their federal tax deposit requirements, which could reduce the need for abatements that are associated with these deposits. Finally, IRS’ Collection Division officials said they had a team working on redesigning the collection process, but the team was not focusing on abatements. Before initiating these task forces and efforts, IRS undertook a major study during 1993-94 of the causes of abatements. According to the director of the study, IRS used about 150 employees across its 10 service centers. IRS did this study because of concerns about the inventory of tax debts—about 25 percent of IRS’ accounts receivable were being abated. The study identified problems in various IRS processes that resulted in IRS and taxpayer errors that had to be corrected through abatements. 
Some examples of IRS errors included miscalculated interest, incorrect posting to accounts, and overlooked support for taxpayers' reasonable cause claims. Examples of taxpayer errors included using the wrong form or coupon, omitting or misstating taxpayer identification numbers, and making math mistakes. The study identified 259 potential problems, of which 158 were referred to the responsible IRS functions for further analysis and implementation of changes, if necessary, to solve a problem. The study team determined that the other 101 problems did not merit such a referral because analysts at the service centers could address the problem or no further changes were needed. IRS never formally released a final report on the study results. On June 22, 1999, we met with representatives of the IRS Commissioner from the Customer Service and Examination Divisions as well as the Taxpayer Advocate and Legislative Affairs Offices. These representatives said that our report provided good information and appropriately described the abatement process at selected organizations in the district offices and service centers. We are sending copies of this report to Representative Charles B. Rangel, Ranking Minority Member, House Committee on Ways and Means, and Senator Daniel P. Moynihan, Ranking Minority Member, Senate Committee on Finance. We are also sending copies to the Honorable Robert E. Rubin, Secretary of the Treasury; the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; the Honorable Jacob Lew, Director, Office of Management and Budget; and other interested parties. We will also send copies to those who request them. If you or your staff have any questions concerning this report, please contact me or Tom Short on (202) 512-9110, or Royce Baker on (913) 384-3222. Other major contributors are acknowledged in appendix V. 
IRS’ Kansas City Service Center (KCSC) processes individual and business tax returns, payments, and taxpayer inquiries from Illinois, Iowa, Minnesota, Missouri, and Wisconsin. The service center also responds to taxpayer inquiries nationwide through toll-free numbers. In addition, KCSC conducts compliance activities, such as doing audits, securing delinquent returns, and collecting unpaid taxes. Different IRS work groups make abatements at the Kansas City Service Center. According to KCSC officials, three divisions—Compliance, Customer Service, and Processing—routinely make abatements. Each division has at least one work group that makes abatement decisions. Additionally, the Taxpayer Advocate’s Office can make abatements. According to IRS officials, the Quality Assurance Management Support Division has abatement authority, but due to the nature of its workload it rarely makes abatements. As a result, we did not summarize the division’s processes. Each of the work groups is responsible for making abatements of different types. Table I.1 shows these groups as well as examples of the types of abatements made by each. Type of abatement Substitute for return (SFR)SFR, and audit reconsiderationsAutomated Collection Systems Branch Erroneous SFR cases,penalties abated for reasonable cause Adjustments/Correspondence Branch Claims on amended returns, various types of penalties Customer Service Branch Taxpayer Relations Branch Document Perfection Branch Problem Resolution Program (PRP) Various types of penalties Taxpayer inquiries about assessments Simpler claims made by individuals Selected cases that meet PRP criteria (e.g., not resolved elsewhere in IRS) IRS files a substitute for return when a taxpayer has not filed a tax return but IRS receives third-party information that shows enough income to file a tax return. In the following, we briefly describe the types of abatements made at each of the work groups. 
We also identify the workload at each work group for all types of adjustments to taxpayers’ accounts, including abatements. None of the groups, however, had data on the portion of their workload that involved abatements. We did not independently verify the accuracy of workload data provided by IRS. According to IRS officials, the Compliance Division has two branches that routinely make abatements, as summarized below. The Collection Branch is responsible for collecting taxes from individuals who have filed but have not paid the total amount due or who have not filed their tax return. For these nonfilers, according to branch officials, a common abatement comes from substitute for return (SFR) cases. Taxpayers receiving an SFR may choose to file a return in order to reduce their tax liability. When the return is filed and accepted by IRS, the branch is to abate the difference between the taxes assessed on the SFR and the taxpayer’s return. If the taxpayer’s return meets certain criteria, such as a large difference between the taxpayer’s and SFR’s assessments, classification staff from the Examination Branch are to review the return and determine whether to audit it. For fiscal year 1998, Collection Branch officials reported that the branch closed 1.2 million nonfiler and late payment cases, but it did not track how many of them involved abatements. The Classification Section in the Examination Branch is the only section in the branch that routinely makes abatements in the course of its operations, according to Examination officials. The Classification Section is responsible for screening questionable claims, SFR returns, and requests for audit reconsideration. The Classification Section is to receive category A claims from the Adjustments/Correspondence Branch (also referred to as the Adjustments Branch) for acceptance or selection for audit. Category A claims are more complex, sensitive, or prone to noncompliance than other claims. 
For calendar year 1998, Classification Section officials reported that the section screened more than 26,000 claims, but it did not track the number that resulted in abatements. Further, the Classification Section is to review questionable returns filed by taxpayers to replace SFRs and requests for audit reconsiderations. When reviewing SFR cases and audit reconsiderations, Classification is to accept the data provided by the taxpayer in support of an abatement, or select the case for audit. If a case is selected, staff in the Examination Branch or a district office are to decide on any abatements. For calendar year 1998, Classification Section officials reported that the section received nearly 11,000 SFR, audit reconsideration, and other cases. These officials did not track how much of this work resulted in abatements. The Customer Service Division has the following branches that can make abatements. The Automated Collection Systems Branch operates an automated call system, which is used to collect taxes from taxpayers that have delinquent accounts and to secure delinquent returns. Abatements can arise when IRS staff talk over the telephone with taxpayers who have had levies placed on their assets because of delinquent taxes or have received a notice that IRS has no record of their return. In particular, the branch is supposed to make abatements in erroneous SFR cases—cases in which IRS erroneously believes that a taxpayer failed to file a required tax return, but the taxpayer had, in fact, filed. The branch also can abate penalties if the taxpayer has a reasonable cause for not complying originally. For calendar year 1998, IRS officials said that the Automated Collection Systems Branch worked more than 119,000 cases, but it did not track how many of them involved abatements. The Adjustments/Correspondence Branch receives inquiries from taxpayers by phone or correspondence. According to branch officials, most taxpayer contacts are in response to penalty notices. 
The officials stated that the branch often abates failure-to-file penalties for reasonable cause. The branch is also supposed to process claims that require additional information from the taxpayer or from IRS databases. For fiscal year 1998, the branch reported closing over 989,000 cases, but it did not track how many of them involved abatements. The Customer Service Branch receives telephone inquiries from taxpayers about their assessments. According to IRS officials, taxpayers usually call to resolve issues on their tax accounts after receiving an IRS notice. These officials stated that late payment and failure-to-file penalties are their most common abatement cases. For fiscal year 1998, the toll-free telephone site answered over 1.2 million telephone calls. Branch officials did not track how many of these calls involved abatements. Taxpayer Relations is to help taxpayers resolve a wide variety of problem cases. In addition to the taxpayers, the branch receives referrals from other IRS offices, congressional offices, and the White House. According to IRS officials, typical abatements involve penalties and incorrect tax assessments. For calendar year 1998, IRS officials reported that the branch worked nearly 100,000 cases, but it did not track how many of these cases involved abatements. Within the Processing Division, only the Document Perfection Branch made abatements. The Document Perfection Branch is to screen amended returns from individual taxpayers and identify those with issues that require additional information from a taxpayer or IRS computer systems. If additional information is not required, Document Perfection is to process the return, along with any resulting abatement. Returns that require information and amended returns from business taxpayers are forwarded to the Adjustments/Correspondence Branch. Document Perfection officials reported that the branch screened about 262,000 amended returns in calendar year 1998. 
The officials also reported that Document Perfection processed about 95,000 of these cases and routed the rest to other functions for processing. Branch officials did not track how many of these cases resulted in abatements. The Taxpayer Advocate’s Office was created to help taxpayers who have been unsuccessful in resolving their problems through normal channels of assistance. The office ultimately reports to the National Taxpayer Advocate. Its involvement with abatements can arise from two taxpayer assistance programs—hardship relief requests and the Problem Resolution Program (PRP), which assists taxpayers in resolving persistent problems. Taxpayers may submit Form 911 asking for hardship relief under the Application for Taxpayer Assistance Order, according to IRS officials. The Advocate’s staff makes a determination on these requests. PRP exists to help taxpayers that have been unsuccessful in resolving their tax problems through regular channels. The Kansas City Taxpayer Advocate has a small staff; thus, for most PRP cases, it relies on staff in the divisions, such as those in Customer Service, to try to resolve the problem. The Taxpayer Advocate monitors the progress of the cases until they are resolved. Staffing for some of the branches varies with the time of year, according to IRS officials. During peak periods (usually during the months when taxpayers are filing returns), staffing levels may temporarily increase, and the staff may sometimes be temporarily assigned to Customer Service to answer taxpayer inquiries about tax law or filing requirements. Table I.2 shows the peak staffing level by the type and grade (GS) level of staff who make abatement decisions for those work groups that make most of the abatements, according to IRS officials. Most service center staff making abatements were tax examining assistants, customer service representatives, or tax examining clerks, according to IRS officials. 
Tax examining assistants are responsible for obtaining returns and other taxpayer information necessary to adjust taxpayer accounts and examining selected tax issues on income tax returns. Customer service representatives provide assistance to individuals and businesses through telephone contact. Tax examining clerks have duties that are less complex. They review IRS documents and tax returns, prepare coded entries to amend records, and refer questionable returns and documents for review. (Table I.2 notes: Staffing levels do not include managerial and support staff; branches with a consistent level of staffing do not have a nonpeak staffing level; figures do not include 5 staff from the Taxpayer Advocate's Office.) In general, training to be given in the work groups that made abatements did not focus on abatements; rather, it focused on the range of duties for the various types of staff. Through our interviews and reviews of documents at KCSC, we learned the following about the training that each work group should provide. Before working on a program for the first time (such as SFR cases), tax examining assistants are to receive between 3 and 10 days of classroom training on that program. After completing the training, they are to be assigned on-the-job instructors to provide additional guidance on working cases. Staff doing abatements are to receive training on the basic types of adjustments associated with the income tax laws. They also are to receive on-the-job instruction as they do audits. New customer service representatives are to receive about 3 months of classroom training. The initial training is to address basic skills, such as entering data into IRS' computer system. Afterward, the customer service representatives are to take classes on each of the programs handled in the branch. After a class on a particular program, the customer service representatives are to work that program for a few weeks to reinforce what they have learned. 
Then, they are to take a class on another program. Training is to cover the abatement of tax and penalties in some form. Customer service representatives responsible for business returns initially are to receive a 5-week training class on the Business Master File and employment tax forms. Later, they are to receive training on IRS forms and notices. Then, they are to receive several weeks of on-the-job training. Customer service representatives responsible for individual tax returns are to receive training on the Individual Master File as well as on amended returns, penalty abatements, and refunds for individuals. The training for abating selected types of interest assessments requires on-the-job coaching for several months. Before taking phone calls in which the taxpayer is asking for penalty abatements, a new staff member is to receive courses on determining penalty abatements. A staff member working on Business Master File cases is also supposed to attend courses on computing and abating penalties associated with the deposit of federal taxes. All staff working abatement cases are to receive training on comprehending computer transcripts, using IRS’ computer system, and adjusting accounts on the Individual and Business Master Files, including making abatements. In addition, staff are to receive specific training on programs such as Problem Resolution and Penalty Appeals, which can result in abatements. New hires are to receive a 10-day class on the Form 1040. Staff that have prior IRS experience are to receive between 4 and 9 days of training, depending on their duties. A class on editing amended returns is required for staff that review such returns and determine abatements and assessments. Staff making the abatement decision usually adjust the taxpayer’s account, according to IRS officials. The exceptions are in the Examination and Document Perfection Branches. 
Staff in these branches recommend abatements and forward the case to other staff who are responsible for adjusting a taxpayer’s account. IRS officials described two types of supervisory review of the abatement decisions made by staff: one occurring before the decision is made, and one after. The latter supervisory review of closed decisions is done to evaluate employee performance. IRS officials also described various types of quality review programs. Reviewers in these programs were to review cases closed in the work group to measure adherence to specific quality standards, which varied with each work group and type of case. According to staff we interviewed, the closed cases for review were to be selected using random sampling. Neither the supervisory nor quality reviews solely addressed abatements; rather, they covered all types of adjustments, including assessments and abatements. As a result, KCSC officials did not have data on the percentage of abatement decisions that were reviewed. IRS officials said that other than for a few specific types of cases, the first type of supervisory review, which would occur before the abatement decision, was usually not required. Exceptions were abatements that involved certain types of civil penalties over a specific dollar amount, certain types of Examination assessments, and a selected type of interest assessment. Specific criteria associated with these types of abatements are considered to be sensitive information and may not be disclosed. The work groups we visited had some data on the review of various types of closed cases by supervisors or quality review staff. Table I.3 shows the number of cases closed and reviewed for quality and the requirements for supervisory review for purposes of evaluating staff performance. As noted in table I.3, the number of closed cases subjected to supervisory review for performance evaluation purposes differed in each branch. 
For example, Customer Service required its supervisors to review eight telephone calls and four paper cases per month. According to officials in all the branches, these requirements meant that only a small percentage of all cases would be reviewed. For example, a supervisor would be likely to review about 1 percent of the phone calls that customer service representatives answer in a month. About 10-15 percent of Taxpayer Relations' completed cases would be reviewed, according to branch officials. If supervisors were to find a performance deficiency, they should tell the staff member to correct any errors, and they might choose to review all cases closed by that staff member. Separate offices in the service center managed the quality review programs. For example:

- The Quality Assurance Branch was to do quality reviews of cases closed in the Adjustments/Correspondence, Customer Service, Taxpayer Relations, and Document Perfection Branches. This review program is called the Program Analysis System.
- Collection Branch correspondence cases are to be reviewed under the Service Center Collection Quality System.
- The Automated Collection Systems Branch has two review systems. Toll-free calls are to be reviewed under the Quality Management Information System. Correspondence cases are to be reviewed under the Written Products Data Collection Instrument program.
- Examination Branch closed cases are to be reviewed by Quality Measure Support staff.
- Taxpayer Advocate cases are to be reviewed at the Brookhaven Service Center Centralized Quality Location in New York.

Quality review officials told us that they do not correct defective cases. Instead, they are to report any errors they find to the appropriate official in the branch where the case originated. The branch official then is to work with the employee who made the error to correct the mistake. 
The Fresno Service Center (FSC) processes various types of tax returns, conducts audits through correspondence, and responds to taxpayer inquiries from Hawaii and parts of California. FSC also responds to taxpayer inquiries nationwide through IRS' toll-free numbers. In addition to auditing, FSC operates compliance programs that use third-party information reporting to identify taxpayers who do not file tax returns or who file returns but understate their tax liability. Different IRS work groups make abatements at the Fresno Service Center. Three divisions, including Compliance, Customer Service, and Quality Assurance and Management Support, can make abatements. Each division also has at least one branch that makes abatement decisions. Additionally, the Taxpayer Advocate's Office can make abatements. These work groups are responsible for making different types of abatements. Table II.1 shows the work groups as well as examples of the type of abatements made by each.

Table II.1: Examples of abatements made by FSC work groups

Work group: Type of abatement
- Examination Branch: Audit reconsiderations and audits
- Criminal Investigation Branch: Overstated income from false Forms W-2 (Statement of Earnings)
- Joint Compliance Branch: Category A claims and interest claims; substitute for return; offer-in-compromise; audit reconsideration
- Underreporter Branch: Responses to IRS underreporter notices
- Customer Service Division branches: Taxpayer inquiries about assessments; claims on amended returns filed by taxpayers; substitute for return; credits on masterfile accounts
- Quality Assurance and Management Support Division: Various types of abatements based on reviews of notices sent to taxpayers
- Taxpayer Advocate's Office: Various types of taxpayer claims that meet set criteria (e.g., problems not satisfactorily handled by other IRS groups)

In the following, we briefly describe the types of abatements made at each of the work groups. We also identify the case workload at each group for all types of adjustments, including abatements, to a taxpayer's account. However, none of these groups had data on the portion of the workload that involved abatements. The Compliance Division has four branches that can make abatements, as summarized below. 
The Examination Branch is responsible for conducting correspondence audits at the service center. Abatements can occur when taxpayers ask IRS to reconsider (or reaudit) assessments made in past audits. According to IRS officials, in these audits, the taxpayer typically finds documentation that counters the additional assessments. If IRS accepts the taxpayer’s documentation, the additional assessments are to be abated. The Examination Branch may also make various types of abatements in cases referred by the Criminal Investigation Branch or by the National Office. The branch also may abate assessments arising from the computer matching that IRS does to identify math and other errors. During fiscal year 1998, the branch processed nearly 132,000 cases. Additionally, it processed about 285,000 amended returns (1040X). Examination officials did not track which cases had abatements; however, branch officials said that the branch abates few assessments. The Criminal Investigation Branch is responsible for investigating all kinds of criminal activity related to the tax system. A typical abatement case involves a taxpayer filing a false Form W-2, Statement of Earnings, claiming excessive income and withheld tax. In these cases, the taxpayer claims excessive income in order to claim an excessive earned income credit (EIC). The taxpayer claims but never pays the excessive withheld tax in order to get a tax refund. Upon identifying these fraudulent schemes, the branch is to abate the excess tax liability that was claimed (but not paid) and to stop the erroneous refund or collect the overpaid EIC and refund. The Joint Compliance Branch has three sections that process abatement cases. Classification and Research. This section processes category A claims referred from the Adjustments Branch as well as interest abatement cases. Category A consists of claims and amended returns that are more complex, sensitive, or prone to noncompliance than other claims. 
Selected category A claims are referred to field offices for further examination and possible abatement decisions. The abatement decisions for nonselected category A claims are made in Joint Compliance. During fiscal year 1998, about 18,300 category A claims were processed, and about 2,000 were sent to field offices. The section also processed about 400 claims for interest abatement. Automated Substitute for Return. This section processes abatements for the amounts overassessed on a substitute for return (SFR). These abatements are necessary when the taxpayer files a delinquent return that reports a lower tax liability than that assessed on the substitute return prepared by IRS. During fiscal year 1998, the section chief estimated that the branch had made between 500 and 1,000 such abatements at Fresno. Miscellaneous Balance Due. This section processes offers-in-compromise for wage earners with income under $10,000, and trust fund recovery cases. Branch officials did not track how much of the workload involved abatements. The Underreporter Branch is to process cases involving the amount of income reported on tax returns. These cases are generated in IRS’ Martinsburg Computing Center by comparing income amounts reported on information returns and on the tax return. Overall, the typical case involves a taxpayer underreporting income on the tax return. However, in the typical abatement case, the taxpayer overstates some type of income on the tax return, thus overstating the tax liability. To get this overstated tax liability abated, the taxpayer must file an amended return. The Underreporter Branch at FSC processed about 11,400 cases that potentially required abatements. Branch officials, however, did not track how many actual abatements were made. The Customer Service Division has the following branches that can make abatements. None of the branches had information on the number of abatements made. 
The Customer Service Branch receives taxpayer inquiries about assessments by phone or correspondence. These inquiries are usually in response to notices or correspondence sent by IRS about various types of possible errors on a tax return. An abatement may occur when a taxpayer requests that IRS abate a failure-to-file penalty due to reasonable cause. The Adjustments Branch also may process abatements initiated through telephone contacts with taxpayers about their assessments. Further, the branch is responsible for processing claims on amended returns. However, as noted earlier, category A claims are to be referred to the Joint Compliance Branch, and some non-category A claims may be handled by the Examination Branch. The Collection Branch is responsible for call sites at which abatements may arise when taxpayers telephone IRS staff about various types of assessments. Branch officials said the primary source of abatements is the SFR program, in which taxpayers have received a substitute return and IRS has taken action to collect the tax shown on it. If the taxpayer then files a return showing a lower tax that is accepted by IRS, the branch is to abate the difference between the tax amounts assessed on the substitute return and the taxpayer’s return. During fiscal year 1998, the branch had about 383,000 phone contacts and 2.8 million pieces of correspondence. However, the branch did not track the number of contacts that involved abatements. The following branches in the Quality Assurance and Management Support Division process abatements. Accounts Services is the only section that makes abatement decisions in the Accounting Branch. Its workload is generated by the Martinsburg Computing Center and consists of tax returns with unsettled credit balances. These credit balances may have resulted from SFRs, audits, math errors, and other transactions. 
During fiscal year 1998, Accounts Services processed nearly 113,000 adjustment cases, but it did not track how many involved abatements. The Quality and Management Support Branch is responsible for measuring the quality of actions in other branches by reviewing closed cases. Its Output Review Section is to review notices to be sent to taxpayers and is to make abatements if errors are found. Errors that are found in other quality reviews are to be referred to the initiating branch for correction. During fiscal year 1998, the Output Review Section reviewed about 37,000 individual notices and nearly 24,000 business notices. IRS officials did not have information on how many of these reviews involved abatements. The Taxpayer Advocate’s Office reports to the Executive Office of Service Center Operations, which, in turn, reports to the National Taxpayer Advocate. The office is responsible for the Problem Resolution Program (PRP), which is to help taxpayers who meet certain criteria, such as those whose contacts indicate that IRS’ systems have not resolved their problems, who have not received an IRS response by the promised date, or who have not received a response on the same issue at least 30 days after an initial inquiry or complaint (60 days for an original or amended return). During fiscal year 1998, Fresno’s Taxpayer Advocate’s Office closed over 20,000 cases, but it did not track how many of them included abatements. Staffing for some work groups varies with the time of year and the function of the group. During peak periods (usually during the months that taxpayers are filing returns), staffing levels may temporarily increase, and the staff may sometimes be used for work other than making adjustments, such as answering taxpayer inquiries about the tax law or filing requirements. 
Table II.2 shows the peak staffing level by the type and grade (GS) level of staff who make abatement decisions for those work groups that make most of the abatements, according to IRS officials. For the most part, those making abatements were tax examiner assistants and customer service representatives. These staff had varying duties, including reviewing notices and interest computations, responding to customer inquiries, processing correspondence, reviewing tax returns to detect fraud, and resolving taxpayer problems. In general, training to be given in the work groups that made abatements did not focus on abatements. Rather, the training focused on the range of duties for the various types of staff. Based on our discussions at Fresno, the following briefly describes the training to be provided. Tax examiner assistants are to receive 40 hours of initial classroom training, plus 40 hours of on-the-job training; 80 hours of classroom training on basic income tax; 80 hours of classroom training, plus 120 hours of on-the-job training on amended returns; 40 hours of classroom training on EIC, plus 40 hours of on-the-job training; and 40 hours of classroom training as a refresher each year. Newly hired staff are to receive 2 weeks of training followed by 2-4 months of on-the-job training. All staff are to attend a 2-week refresher class each year. Joint Compliance staff are to receive classroom and on-the-job training that covers various topics, such as the guidance and instructions in the Internal Revenue Manual, the processes for making adjustments, and the tax law. Branch staff are to receive annual training on phases of the underreporter programs being worked for a tax year. This training is to include screening cases, writing responses, reviewing statutes, and learning other core skills. Staff making interest abatements also are to attend bimonthly meetings. 
These staff are to receive roughly the same training, which is to include 120 hours of classroom training, primarily in refund and EIC issues. After several months of on-the-job training, they are to receive an additional 120 hours of training on more complex issues, such as how to handle balance-due accounts, installment agreements, and return delinquencies. All Accounting Branch training is to be on-the-job training. New staff are to have a coach who assists them through each of the steps involved in adjusting accounts. Staff in the Output Review Section are to receive 2 weeks of classroom training on the Notice Review Processing System and on-line notice review. They also are to receive 4-6 weeks of on-the-job training. Staff must have 2-3 years of experience dealing with adjustments, customer service, or collection before being selected. Once selected, staff are to receive classroom training for PRP caseworkers, including PRP quality standards. The branches at the Fresno Service Center had two types of supervisory review of abatement decisions made by staff. One is to occur before the decision is made and one after the decision. Regardless of the type of review, FSC officials did not collect data on the number of reviews done by supervisors or the percentage of abatement decisions that were reviewed. According to FSC officials, supervisory review and approval of abatement decisions before they are finalized is required for a few types of abatements. For example, abatement decisions involving comparatively larger dollar amounts or certain types of penalty or interest abatements generally require supervisory review and approval. Also, FSC officials said that supervisors in each branch are to review a random sample of closed cases for each employee each month. These reviews are generally to be conducted to evaluate employee performance. Each branch gave supervisors the discretion to determine the percentage of abatement decisions to review. 
For example, the supervisors we talked to in at least one branch said they were likely to consider the past experience and performance of the staff member as well as the complexity and size of their caseload. Fresno Service Center has a quality review program that is designed to ensure that abatement decisions are reviewed for quality by analysts outside the function making the abatement decision. These analysts at the service center are to review a sample of cases closed in each of the branches. The purpose is to measure adherence to specific quality standards. For example, each branch is subject to quality reviews for a random sample of closed cases by the Program Analysis Section. Also, the Output Review Section in the Quality and Management Support Branch is to review a random sample of notices in each branch monthly. Finally, a random sample of cases closed by the Taxpayer Advocate’s Office is to be reviewed at the Brookhaven (NY) Service Center. The Kansas-Missouri District Office (KMDO) is located in St. Louis and has satellite offices across its two-state area. The district is responsible for auditing a variety of individual and business tax returns and for responding to taxpayer inquiries. It is also responsible for compliance programs that use third-party information to identify taxpayers who do not file tax returns or who understate their tax liability on filed returns. Different work groups within the district make abatements. Of the five divisions in KMDO, four make abatement decisions, usually through one or more branches. The division not making abatements is Criminal Investigation, whose primary workload involves investigating fraudulent or illegal activities. Each division or office responsible for making abatements is shown in table III.1, which also lists examples of the type of abatements. 
Collection Division: discharged bankruptcies, offers-in-compromise, trust fund recovery penalties
Customer Service Division: federal tax deposits, taxpayer inquiries about assessments
Examination Division: audit reconsiderations, claims
Office of Taxpayer Advocate: collection hardships and Problem Resolution Program (PRP) cases

(By agreeing to an offer-in-compromise with a taxpayer, IRS accepts a lower dollar amount to settle a balance due.)

Several factors affect which group makes the abatement decision, including the following.

Type of case. Cases can vary by type of taxes (such as gift or excise), type of taxpayers (such as large corporation), and type of transactions (such as bankruptcy).

Division function. The normal workload of the work group affects which group makes a specific abatement decision. For example, the Collection Division would make abatements in working cases, such as bankruptcy cases. Customer Service Division abatements could come from responding to taxpayer inquiries, such as those involving federal tax deposits.

Status of taxpayer account. For example, if a taxpayer is being audited by the district office and files an amended return or claim for refund with the service center, this return or claim is forwarded to the appropriate Examination group conducting the audit.

Summarized below are the responsibilities of each division and office, its workload, and examples of abatements. The Collection Division is responsible for the collection of taxes from businesses and individuals who have an outstanding balance due or who have not filed required tax returns. In fiscal year 1998, the Collection Division closed about 14,300 cases, but Division officials could not identify how many cases involved abatements because they were not separately tracked. Abatements in the division involve issues such as offers-in-compromise, trust fund recovery cases, and bankruptcy. 
According to Collection officials, the most common abatements come from discharged bankruptcies, in which IRS makes abatements at the direction of a court. Offer-in-compromise cases involve an agreement by IRS to accept a taxpayer’s offer to settle an outstanding assessment for less than the total due. In trust fund recovery cases, IRS has made assessments against officers or shareholders of a business that has not properly deposited its employment taxes and then is to abate the residual assessments after the tax liability has been paid. The Customer Service Division takes taxpayers’ calls and correspondence regarding inquiries about issues such as assessments. Taxpayer correspondence is forwarded to the IRS district office nearest the taxpayer for handling. During February 1998 through January 1999, the division closed nearly 17,000 cases. Customer Service officials said they did not track how many cases involved abatements. According to division officials, the most common type of abatement results from penalty assessments in failure-to-deposit cases. For IRS to abate this penalty, the taxpayer must demonstrate a reasonable cause for not making the required tax deposits by the due date. If IRS agrees that the reason is valid, the penalty is to be abated. The Examination Division is responsible for auditing individual and business tax returns selected by scoring criteria that identify returns with the greatest potential tax noncompliance. During fiscal year 1998, the Examination Division closed almost 19,000 cases. An Examination official said that the division has not tracked the number of cases that involved abatements, but that the division abates few assessments. This is because audits tend to focus on returns with potentially higher noncompliance, which is more likely to lead to additional taxes being assessed rather than assessed taxes being abated. 
According to Examination officials, abatements come largely from claims involving audit reconsiderations that the service center sends to the district for review before being accepted. The Special Programs Section in Examination works the claim and can decide to accept or audit the claim. Audit reconsideration cases result when taxpayers ask IRS to revisit a prior audit that assessed additional taxes. The taxpayer believes that the taxes should not have been assessed. If the taxpayer provides support for that belief and IRS agrees, any excess tax amount is to be abated. The Taxpayer Advocate’s Office is to assist taxpayers through the Problem Resolution Program (PRP) and other activities after other IRS contacts have not resolved the taxpayer’s concerns or when taxpayers ask for help. During a recent 6-month period, an Advocate official said that the office worked about 230 cases but did not know the number involving abatements because they were not separately tracked. According to the Advocate official, the office routinely sends cases to a group in the Customer Service Division staffed with employees from other divisions. There, the case is to be worked by a division employee and monitored by the Advocate’s Office to see that it is closed properly and timely. District officials said that most abatement decisions are made by authorized staff in the Collection and Customer Service Divisions. Also, a few abatements are made by staff in the Examination Division and Taxpayer Advocate’s Office. Table III.2 summarizes the type and number of staff that make abatement decisions in the district, according to IRS officials. These staff have various duties, as discussed below. Revenue officers work various types of delinquent accounts and investigations in the office and field. Also, they may work on specific programs, such as trust fund recovery penalties and lien withdrawals. Attorneys and district counsel assist with complex issues, such as bankruptcy. 
Revenue officer aides assist revenue officers by performing courthouse research and other duties. Customer service representatives provide service to taxpayers who have contacted IRS via correspondence, telephone, or visits. They help prepare returns and answer questions regarding tax law, IRS procedures, and individual accounts. Revenue agents and tax auditors audit various types of tax returns, such as income tax returns. Staff monitor cases sent to the divisions to ensure that they are resolved promptly and appropriately. In general, training to be given in the divisions and offices that made abatements did not focus on abatements; rather, it focused on the range of duties for the various types of staff. The following briefly describes the training to be provided. Revenue officers are to receive three phases of classroom training and on-the-job training after each phase. Afterward, they are to be assigned a mentor to provide additional on-the-job training. Revenue officer aides receive no formal classroom training. All their training is on-the-job. Employees dealing with bankruptcy cases are to receive training on bankruptcy issues, including 1 year of on-the-job training. IRS officials said employees handling the other types of abatements, such as offer-in-compromise and trust fund recovery cases, usually come from other IRS functions at which they have already received training to handle these issues. Customer service representatives are to receive three phases of tax law classroom training and four additional phases of accounts-related classroom training. These courses include installment agreements, credit transfers, refund releases, tax adjustments, and penalty abatements. All classroom training is to be supplemented with on-the-job training. Staff making abatements in the division are to receive many hours of classroom and on-the-job training, which varies by the type of staff. 
This training mostly addresses tax law and auditing topics, such as claims and audit reconsiderations. Revenue agents in specialty fields are to receive special training (e.g., a 3-week excise tax course). PRP analysts are to receive some specialized training. However, an Advocate official explained that the office has recruited staff from other IRS divisions who should already have been trained in the collection and audit issues that the office addresses. At the district, except for one division, the staff making the abatement decision are usually the same staff who enter information about the decision into the taxpayer accounts on the computer. The exception is the Examination Division, where the staff making the decision do not have access for computer entry. Two types of supervisory review can affect abatement decisions made by staff. One occurs before the decision is made and one occurs after the decision. Neither of these types of review focuses on abatements; rather, they cover all types of adjustments. As a result, KMDO did not have data on the percentage of abatement decisions that were reviewed. First, supervisors may review proposed decisions for approval before they are finalized. KMDO did not have uniform requirements for these reviews across the divisions. However, Collection officials told us that their supervisors are to review all cases. Examination officials said they have no requirement except that supervisors are responsible for the quality of the cases closed in their groups. Second, KMDO officials said that supervisors are supposed to review a random sample of closed cases for each employee each month. These reviews are generally conducted for performance evaluation, and the requirements differ across the divisions. For example, Customer Service officials told us that their supervisors are to review at least 25 percent of the cases closed by each employee. Each division is subject to reviews by a quality measurement system. 
Under this system, district office reviewers are to check the quality of work in closed cases against specific quality standards. These standards differ for each division. The closed cases selected for review are to be randomly drawn from all types of cases closed in a division. KMDO did not have data on how many quality reviews addressed abatement decisions because abatement cases were not reviewed separately. In the Collection Division, for example, a sample of nine closed cases per branch is to be pulled for a nationally centralized review under the Collection Quality Measurement System. Similarly, in the Customer Service Division, Quality Assurance reviewers are required to do a closed case review on paper transactions. The sampling plan for each review period is developed in the National Office. At the time of our work, for example, the Quality Assurance Office was slated to review every 107th closed case. Examination also has closed cases reviewed under a national quality measurement system. The Taxpayer Advocate Office’s review only covers those relatively few cases that are not referred to and closed at another division. The Northern California District Office is located in Oakland and is responsible for Northern California from the Oregon border to just south of San Francisco. The district office has six operating divisions: Collection, Customer Service, Criminal Investigation, Research and Analysis, Examination, and Quality Assurance. Staff in three of the divisions can make abatements through one or more branches. The Taxpayer Advocate’s Office can also make abatements, but Advocate officials said their office makes few abatements because most of their cases are referred to the other divisions or the service center for the actual decision. Table IV.1 shows the divisions or offices that make abatements as well as examples of the type of abatements made by each. In the following, we briefly describe the types of abatements made at each division or office. 
We also identify the caseload for all types of adjustments, including abatements, to taxpayers’ accounts. None of the divisions had data on the portion of the caseload that involved abatements. The Collection Division is responsible for processing cases involving tax delinquency accounts and tax delinquency investigations. In the former, a tax liability has been assessed but not paid; in the latter, IRS is trying to obtain an unfiled tax return to determine whether a tax liability exists. The division may abate penalties for these cases. The division receives these cases from the service center and has field branches whose staff can make abatement decisions. During fiscal year 1998, the division processed about 13,600 cases, but it did not track how many of them were abatements. According to district officials, in a typical Collection Division case, a revenue officer contacts a taxpayer who has not fully paid a tax liability. The taxpayer requests that the failure-to-pay penalty be abated and provides evidence of reasonable cause. The officer is to review the information provided by the taxpayer and make a decision on whether to abate the failure-to-pay penalty. The criteria for Collection abatements are contained in the Internal Revenue Manual, chapter 21. Evidence required to make the abatement includes statements or documents from the taxpayer supporting the reasonable cause claim. Information about the abatement decision is to be maintained in case files at the service center. Abatements in the Customer Service Division result from requests by taxpayers to abate certain types of penalties for reasonable cause. Customer service representatives are to collect evidence from the taxpayer on the reasonable cause and make the abatement decision. Documentation on these decisions is to be maintained in the case file at the service center. Officials from Customer Service could not provide caseload data on these or other types of case decisions for fiscal year 1998. 
According to district officials, these penalty abatement cases typically involve penalties for failure to file a required tax return on time or failure to pay assessments on time. The criteria for Customer Service abatements are also contained in the Internal Revenue Manual, chapter 21. The Customer Service Division refers some abatement cases to other IRS functions for processing. Taxpayer contacts concerning claims or requesting audit reconsideration can be referred to the district’s Examination Division or to the service center. Requests for abatement of very large dollar penalties and of assessments to be paid under installment agreements can be referred to the Collection Division for processing. Abatements in the Examination Division originate as claim referrals from the service centers, reconsideration of assessments from prior audits, requests for interest reduction, and tax reductions identified during audits. Examination staff also are to process abatements referred to them from the district’s taxpayer advocate. The Examination Division might refer cases to other divisions or IRS offices either because the taxpayer moved or because the taxpayer has a representative who lives elsewhere. During fiscal year 1998, the division closed about 44,200 cases, but the percentage that involved abatements was not tracked. According to district officials, a typical Examination abatement case is one in which the taxpayer requests that the findings of a prior audit be reconsidered. Examination staff are to review the issues and make a decision on whether to abate the amount being questioned. Abatements are also considered to be common for category A claims, which are usually referred from the service center and consist of claims and amended returns that are sensitive, prone to noncompliance, or more complex than other claims. The criteria for Examination abatements are contained in the Internal Revenue Manual, chapter 21. 
Evidence required for Examination abatements typically includes information from audit files, tax returns, or other documentation provided by the taxpayer that is to be maintained in the case file. The Taxpayer Advocate’s Office is responsible for helping taxpayers resolve tax-related problems. The office should receive cases that meet certain criteria, such as repetitive IRS contacts over a short time period or taxpayer problems that have not been resolved through regular channels for a long period of time. Advocate officials told us that they refer most cases to divisions, service centers, and other districts. For example, cases might be referred to the Examination and Collection Divisions, which have caseworkers to handle these cases. Primarily for geographic reasons, some cases are referred to other districts—that is, to move the case closer to where the taxpayer is located and the actions are being taken. Generally, the workload does not vary much throughout the year and consists primarily of processing taxpayer requests for reduction or elimination of penalties. This workload totaled about 4,000 cases in fiscal year 1998, but the percentage that involved abatements was not tracked. Advocate officials said their office makes the final decision on a small number of abatements. According to Advocate officials, a typical case would involve abating a failure-to-file penalty when a taxpayer had reasonable cause for not filing by the required due date. A taxpayer may initiate this action by providing information on why the penalty should be abated (e.g., taxpayer was hospitalized). The criteria for abatements are in the Internal Revenue Manual, chapter 21. Evidence required to make the abatement includes statements from the taxpayer on the reasonable cause claim. This evidence is to be kept in the case file. According to IRS officials, district office staffing does not vary much seasonally because the workload remains fairly constant. 
Table IV.2 summarizes the type and level of staff that make abatement decisions in the district, according to IRS officials. In general, training to be given in the divisions and offices that made abatements did not focus on abatements. Rather, the training focused on the range of duties for the various types of staff. The following briefly describes the training to be provided. According to district officials, the only training directed specifically at abatements was a class on processing Form 3870, Request for Adjustment. Otherwise, the staff making abatement decisions are to receive training in three phases, consisting of over 300 classroom hours. This training is to cover the collection process, including abatements, adjustments, forms, and reasonable cause for abatements. Other training is to be provided on the job. Staff making abatements in the division are to receive the basic training module, which includes case processing, telephone routing, customer service core skills, disclosure policies, and telephone training. Other training modules provide more specialized and advanced courses, such as computing and adjusting penalties and determining penalty relief. Staff making abatements are to receive classroom and on-the-job training. This training mostly addresses the tax law and auditing. None of the training focused on abatements. As of April 1999, the staff had not yet been trained on the new process for audit reconsideration, in which requests for reconsideration are to be submitted directly to the service center and subsequently referred to Examination for action. Staff working the cases are to receive classroom training. Initially, they are to receive 16 hours of PRP caseworker training and on-the-job training for making adjustments on the computer system. The PRP caseworker-training course was being updated, and additional classroom training for PRP specialists and analysts is to be added. 
Staff who make abatement decisions usually are not the same staff who enter information about the decisions into taxpayers' computerized accounts. According to district officials, this separation improves internal control. Exceptions are the Taxpayer Advocate's Office and Customer Service Division, where the same individual makes the decision and enters the information.

Two types of supervisory review can affect abatement decisions made by staff: one occurs before the decision is made and one occurs after. Regardless of the type of review, district officials did not collect data on either the number of reviews done by supervisors or the percentage of abatement decisions that were reviewed. First, supervisors may review decisions before they are made for purposes of approval. According to various division officials, the requirements for such a review differed across divisions. For example, supervisors in the Taxpayer Advocate's Office and Collection Division are to review all requests for abatement, and those in Customer Service are to review large-dollar abatements. The other divisions did not require supervisors to review and approve abatement decisions before they became final. Second, district officials said that supervisors are to review a random sample of closed cases for each employee each month. These reviews are generally conducted for performance evaluation, and the requirements differ across the district. Following is a summary of supervisory review requirements for each division and office.

According to district officials, supervisors' reviews are to be conducted using the Collection Management Information System. Supervisors are to look for appropriate documentation and data. IRS does not maintain data on the number or percent of cases reviewed. District officials said that supervisors are to review five closed cases each month for each employee to evaluate performance against the critical elements in the job description.
Supervisors also are to make additional random reviews to ensure quality. According to district officials, supervisors are to review a sample of closed cases for employee evaluations. The division has no specific criteria for the number of cases to be reviewed. Supervisors may choose to review a higher or lower number of closed cases depending on the auditor's skill level and experience and the supervisor's workload. District officials told us that supervisors are to review all staff decisions. Reviews are to be used for both quality control and staff evaluations.

Reviewers at the district office review closed cases in the divisions to check the quality of the work against specific quality standards. These standards differed for each division, but in general, the standards focused on communication, timeliness, and accuracy. District officials said that the cases selected for review are to be randomly drawn from all types of cases closed in a division. Because abatement cases are not reviewed separately, district officials did not have data on how many quality reviews addressed abatement decisions. Specifically, district officials provided the following information about the review of closed cases in each division or office by independent reviewers.

The Quality Assurance Division reviews a monthly sample of closed cases from the Taxpayer Advocate's Office. According to district officials, Quality Assurance reviews 16 cases plus 100 percent of the cases initiated from IRS' periodic problem solving days. The Collection Division reviewed between 5 and 10 percent of the closed cases—about 1,300 reviewed cases in fiscal year 1998—as part of the Collection Quality Measurement System. The Customer Service Division's Automated Compliance Section reviewed 439 cases during fiscal year 1998. Examination Division cases underwent Quality Assurance review against the Examination Quality Standards.
In fiscal year 1998, 506 closed audit cases were reviewed as part of this Examination Quality Measurement System. In addition to those named above, Lawrence Dandridge, Rodney Hobbs, Stephen Pruitt, Louis Roberts, Elizabeth Scullin, and Kathleen Seymour made key contributions to this report.
Pursuant to a congressional request, GAO described the Internal Revenue Service's (IRS) abatement process, focusing on IRS': (1) process for making abatements from initiation through final review in selected IRS locations; and (2) efforts to improve the abatement process. GAO noted that: (1) an abatement may be initiated by a request from a taxpayer or by IRS; (2) once initiated, IRS' abatement process depends on the type, complexity, and source of the assessment being abated; (3) according to IRS officials, these factors determine the IRS work group, such as those that audit tax returns or answer taxpayer inquiries, that makes the abatement decision; (4) the work group, in combination with the type and complexity of the assessment being abated, influence the type and grade level of staff making abatement decisions, the criteria and supervisory review used to guide decisions, and the quality review done after the decisions are made; (5) IRS does not have quantitative data on details of the abatement process; (6) an abatement is just one type of adjustment that IRS can make to taxpayer accounts; (7) for example, IRS staff also adjust accounts to reflect the dates and amounts of various types of assessments and payments; (8) although it has data on the number and amount of abatements, IRS cannot extract any quantitative data about the abatement process from data on all types of adjustments; (9) determining the costs and benefits of collecting data on just the abatement process is beyond the scope of this report; (10) IRS' recent efforts to improve the abatement process have generally involved task forces that have been studying various IRS concerns, including, for example, the administration of penalties and treatment of taxpayers; (11) although the studies have not focused on abatements, they have produced some proposals that would affect the abatement process, such as the documentation required to support abatements; (12) during 1993-1994, concerns about the 
inventory of tax debts prompted IRS to study abatements at its 10 service centers; (13) this study identified 259 problems and referred 158 to various IRS work groups for analysis and implementation of any needed changes; and (14) IRS never formally released a final study.
The U.S. Census Bureau reported in 2013 that American Indians and Alaska Natives were almost twice as likely to live in poverty as the rest of the population. A 2014 Interior report estimated that between 43 and 47 percent of American Indian families in South Dakota earned incomes below the poverty line, compared with a national average of about 23 percent for all Native American families. In addition, according to DOE documents, members of Indian communities are more likely to live without access to electricity and pay some of the highest energy rates in the country—hindering economic development and limiting the ability of tribes to provide their members with basic needs, such as water and wastewater services and adequate health care. Considerable energy resources, including domestic mineral resources such as oil, gas, and coal, and resources with significant potential for renewable energy development, including wind, solar, hydroelectric power, geothermal, and biomass, exist throughout Indian country. Tribes may seek opportunities to use these resources as an option to create economic benefits that provide revenue for government operations and social service programs, create high-quality jobs, and offset power costs by increasing access to reliable and affordable energy for tribal buildings and individual homes. For instance, in fiscal year 2015, the development of Indian-owned oil and gas resources generated more than $1 billion in revenue for tribes and individual Indian mineral owners, according to Interior, making oil and gas resources one of the largest revenue generators in Indian country. In addition, tribes are taking advantage of renewable energy sources and developing projects that range from facility- and community-scale production, such as rooftop solar panels or a wind turbine to power a community center, to utility-scale production of hundreds of megawatts of electricity (see fig. 1).
However, the development of Indian energy resources can be a complex process involving a range of stakeholders, including federal, tribal, and state agencies. The specific role of federal agencies can vary on the basis of multiple factors, such as the type of resource, location of development, scale of development, ownership of the resource, and Indian tribe involved. Figure 2 shows various roles federal agencies may have in the development of Indian energy resources. A short description of the agencies’ roles follows. Resource Identification. To develop energy resources, developers and operators must locate a suitable resource. To locate potential oil and gas resources, most operators use seismic methods of exploration. For renewable projects, developers conduct a feasibility assessment to evaluate the potential of the resource for development and determine the viability of the project, which may include evaluating market demand for power and financing opportunities. Both IEED and IE help tribes identify resource potential. For example, according to IE data, IE provided a tribe $210,000 in 2012 to identify available solar and biomass energy resources, characterize solar and biomass energy technologies, and analyze the technical and potential economic viability of projects. Technical and financial assistance. IE and IEED manage the federal government’s financial and technical assistance programs dedicated to Indian energy development. IE provides federally recognized tribes and tribal entities financial and technical assistance to promote Indian energy development and efficiency, reduce or stabilize energy costs, and bring electrical power and service to tribal communities and the homes of tribal members. For example, in 2015, IE awarded technical assistance to an intertribal corporation for a survey of wind resources, a survey of transmission and environmental constraints, and market analysis for 16 potential wind turbine deployment sites. 
According to a DOE report, IE provided $48 million to assist more than 180 tribal energy projects from 2002 through 2014. IE also offers education through webinars, forums, and workshops. IEED serves tribes and their members by providing technical and financial assistance for the exploration, development, and management of tribal energy resources. According to IEED officials, IEED provided about $52 million to assist 340 tribal energy projects from 2002 through 2015. According to its data, IEED is assisting a tribal member to develop a 5,000-acre solar project by helping secure transmission line access with potential right-of-way landowners, and by meeting with county commissioners to gain support for the project. In some instances, both IE and IEED provide assistance for a project. For example, IE and IEED coordinated to help a tribal solar project. IEED provided an engineering design grant, and IE provided an installation grant and technical assistance. For the past 6 years, IEED has also provided assistance on an as-needed basis to BIA agency offices by assigning staff for a period of time—generally a year or longer—to perform such tasks as organizing oil and gas records, clearing backlogs of energy-related documents, reviewing leases, and helping BIA agencies fulfill requirements under the National Environmental Policy Act (NEPA). For instance, according to a BIA official, IEED helped a BIA agency office by conducting 1,584 NEPA and right-of-way compliance inspections for energy-related activities involving oil and gas well pads, access roads, and pipelines. In another example, IEED helped a BIA agency office by reviewing environmental documents and conducting site surveys for 77 proposed oil and gas well pads. Other federal agencies, including USDA, HUD, Commerce's Economic Development Administration, and Treasury, can also provide financial assistance to tribes to explore and develop their energy resources.
For example, according to its data, USDA awarded a $500,000 grant through its Rural Energy for America Program in fiscal year 2015 to a tribal entity seeking to develop a hydroelectric project. Also, HUD data shows that in fiscal year 2015 a tribe used funds provided through HUD’s block grant program to extend transmission lines to communities that lack basic electrical service. Regulate. Multiple federal agencies have a regulatory role associated with Indian energy development. For example, BIA approves seismic exploration permits for operators to identify oil and gas resources, maintains surface and mineral ownership records, identifies and verifies ownership of land and resources, and reviews and approves a number of energy-related documents—such as surface leases, mineral leases for the right to drill for oil and gas resources, and right-of-way agreements. In addition, BLM issues drilling permits to operators developing Indian oil and gas resources after receiving BIA concurrence to approve the permits. Further, EPA issues permits for air emissions that may be required for some oil and gas development, and FWS issues permits for incidental deaths of certain wildlife species, which may be needed for a wind project. If energy development affects navigable waters, the U.S. Army Corps of Engineers may need to issue a permit. Provide transmission access assistance. Utility-scale renewable energy projects need to connect to the electric grid to transmit power generated from the project, along transmission lines, to a destination. DOE’s Western Area Power Administration, along with other power administration agencies, is responsible for marketing and transmitting electricity across the United States and can help tribes to better understand transmission capacity, identify options for accessing available capacity on transmission lines, and assist with interconnection requirements. 
For example, according to officials from the Western Area Power Administration, it partnered with a tribal utility authority to resolve transmission congestion affecting a proposed 27-megawatt utility-scale solar project. Purchase power. To be economically feasible, utility-scale renewable energy projects need a customer to purchase their power. The federal government is the nation’s largest energy consumer. In fiscal year 2013, the government spent about $6.8 billion on energy for over 3.1 billion square feet of buildings and facilities—an area about the size of 50,000 football fields. GSA has general statutory authority to enter into utility services contracts of up to 10 years for all federal agencies. GSA has delegated the authority to enter into contracts for public utility services to the Department of Defense (DOD) and to DOE for procurements by those agencies. The Energy Policy Act of 2005 encourages federal agencies to purchase electricity, energy products, and by-products from tribal entities. Specifically, the act includes a provision authorizing federal agencies to give preference to a tribe or tribal enterprise when purchasing electricity or any other energy or energy by-product as long as federal agencies do not pay more than the prevailing market prices or obtain less than prevailing market terms and conditions. The Council was established by executive order in June 2013. The executive order calls for the Secretary of the Interior to lead the Council, the Council to meet at least three times per year, and Interior to provide funding and administrative support to the Council. According to an Interior document, the Council will improve efficiencies by coordinating work across the federal government and use an “all-of-government approach” to find solutions that address tribal needs. 
To accomplish its goals, the Council includes five subgroups—(1) energy; (2) health; (3) education; (4) economic development and infrastructure; and (5) environment, climate change, and natural resources—to discuss current initiatives and identify interagency solutions. The Energy Subgroup was formed in November 2013 with the Secretaries of Energy and the Interior as co-chairs. In May 2014, the Energy Subgroup identified nine additional federal agencies that should be included as participants and established policy goals for the Subgroup that, if accomplished, may help to overcome some of the factors that we previously identified as hindering Indian energy development. For example, in June 2015, we reported that tribes' limited access to capital had hindered development. One of the Energy Subgroup's goals is to evaluate, align, and coordinate financial and technical assistance programs to leverage agency resources, funding, and expertise. Similarly, in June 2015, we found that the development of Indian energy resources is sometimes governed by multiple federal, tribal, and, in certain cases, state agencies and can involve significantly more steps, cost more, and take more time than the development of private and state resources. One of the Energy Subgroup's goals is to evaluate opportunities to streamline and accelerate regulatory processes.
According to DOE and Interior officials, since May 2014, the federal agencies that formed the Energy Subgroup have taken the following actions:

- IE and IEED signed a memorandum of understanding in June 2016 as a format for collaboration between the two agencies;
- IE and IEED began to meet regularly in August 2015 to discuss projects involving both agencies and grant release dates, among other things;
- IE, with input from numerous other agencies, developed a web-based tool that provides information about grant, loan, and technical assistance programs available to support tribal energy projects;
- IE hosted events to encourage tribal engagement, such as the September 2015 National Tribal Energy Summit: A Path to Economic Sovereignty; and
- IE, Interior, USDA, GSA, DOD, and Treasury convened a meeting to discuss opportunities to provide technical and financial assistance to a planned 1-gigawatt wind and transmission infrastructure project.

In response to tribal requests for increased coordination and more efficient management of their resources from the numerous regulatory federal agencies involved with Indian energy development, in 2014, Interior took initial steps to form a new office, the Service Center, composed of staff from four Interior agencies—BIA, BLM, ONRR, and OST—with BIA as the lead agency. BIA's fiscal year 2016 budget included $4.5 million to form the Service Center in Lakewood, Colorado. According to Interior's fiscal year 2016 budget justification, the Service Center is intended to, among other things, help expedite the leasing and permitting processes associated with Indian energy development.
Among its accomplishments, the Service Center has (1) developed a memorandum of understanding among BIA, BLM, ONRR, and OST outlining the management and operation of the Service Center; (2) developed and conducted a training course on oil and gas development standard operating procedures for 462 Interior employees at eight locations across the country; and (3) hired several positions, including a Director and Deputy Director. The Energy Subgroup has not fully incorporated leading practices that can help agencies enhance and sustain collaborative efforts, which may limit its effectiveness in addressing long-standing factors that hinder Indian energy development. Our prior work on issues that cut across multiple agencies, as Indian energy development does, has shown that collaborative approaches can increase the effectiveness of federal efforts. Our prior work has also found that agencies face numerous challenges in their efforts to collaborate. To overcome differences in agency missions, cultures, and established ways of doing business, collaborative efforts, such as the Energy Subgroup, can use leading practices that have been shown to enhance and sustain these efforts. Specifically, in September 2012, we identified sustained leadership, dedicated resources and staff, and active participation of all relevant stakeholders, among other things, as leading practices for effective collaboration. Specific leading practices that the Energy Subgroup has not fully incorporated in its implementation efforts follow. Sustained leadership is uncertain. DOE has not designated a career employee of the federal government to serve as its co-chair of the Energy Subgroup. Instead, DOE designated an appointed official to serve as the co-chair, and that individual may not remain in the position with the upcoming presidential transition. According to agency officials, their focus has been to complete more time-sensitive tasks, and selecting a long-term co-chair has not been a priority.
Similarly, Interior originally designated an appointed official to serve as its co-chair of the Energy Subgroup, but in August 2016 Interior designated a senior-level career employee of the federal government to serve as co-chair, according to Interior officials. Interior officials also cited higher priorities as a reason for not identifying a career employee sooner, but these officials told us that the upcoming presidential transition increased the urgency to identify a long-term leader. The Executive Order establishing the Council allows for senior-level officials to perform Council duties, which may include either appointed or career employees. Our prior work has shown that turnover of political leadership in the federal government has often made it difficult to sustain attention to complete needed changes. Our prior work has also shown that transitions and frequent changes in leadership weakened the effectiveness of a collaborative effort and that a lack of leadership further challenges an organization's ability to function effectively and to sustain focus on key initiatives. Designating a senior-level career employee to lead the Subgroup can provide some assurance that leadership will remain consistent beyond this administration. Collaborating agencies dedicated few resources and have not identified additional resources needed. Federal agencies have dedicated few staff and financial resources to the Energy Subgroup, have not identified the resources needed to accomplish its goals, and do not have an agreed-upon funding model. Our prior work has shown that collaborative mechanisms should identify the staff and financial resources needed to initiate and sustain their collaborative effort. According to DOE, Interior, and USDA officials, the Energy Subgroup has been staffed on an "other duties as assigned" basis since its creation in November 2013.
These officials said that this staffing model, which was also used by the other subgroups, makes it difficult to ensure continued participation by each federal agency because of competing demands for staff and resources. In 2015, more than 2 years after the Council was established, Interior hired a full-time Executive Director—the only dedicated Council staff—to manage activities of the Council and all subgroups. Beyond the Executive Director position, participating agencies have not dedicated or identified future financial resources for the Energy Subgroup. A few federal officials told us the effectiveness of the Subgroup in accomplishing tasks is limited without dedicated resources. For instance, federal officials told us that the Energy Subgroup has not developed a 5-year strategic plan because no federal entity has dedicated the resources for key planning activities, such as a facilitated planning session that can define and articulate common outcomes. The executive order establishing the Council directs Interior to provide funding and support for the Council. However, according to Interior officials, dedicating resources to the Energy Subgroup would take away from other programs and services that directly support tribes and their activities. Without dedicated resources, key activities completed to date are generally the result of individual federal agencies that voluntarily identified and applied their own budgetary resources to specific work activities. For example, DOE volunteered to use its own financial and information technology resources to develop a web-based tool that provides information about grant, loan, and technical assistance programs available to support tribal energy projects. According to a DOE official, the department voluntarily applied these resources because senior leadership determined the web-based tool was an important information source that could help tribes to identify federal financial assistance.
Our prior work has shown it is important for collaborative mechanisms to identify and leverage sufficient funding to accomplish their objectives and that funding models can vary. In some instances, specific congressional authority or dedicated funding from Congress may be used for the interagency funding for collaborative mechanisms. In other instances, we found a collaborative mechanism can be supported by all participating agencies contributing funds as well as in-kind support. Identifying the resources needed to accomplish its goals and establishing a funding model agreed upon by participating agencies may provide opportunities for the Energy Subgroup to sustain its collaborative efforts. If the Energy Subgroup does not identify resources and a funding model, it is unclear to what extent the Energy Subgroup's collaborative efforts can be effectively sustained to accomplish its stated policy goals. The Energy Subgroup has not documented how participating agencies will collaborate. Our prior work found that agencies that articulate their agreements in formal documents strengthen their commitment to working collaboratively. A formal written agreement that includes goals, actions, responsible agencies, and time frames may be a tool to enhance and sustain collaboration. According to federal officials, the Energy Subgroup did not develop formal documentation because developing such documentation would have taken time and resources away from completing work deliverables. However, the lack of a formal agreement may have limited collaboration and involvement of some participating agencies. For example, federal officials told us that numerous Energy Subgroup agencies were actively involved in the effort to develop the web-based tool mentioned above but that after its completion there has been less collaboration among the various federal agencies because there was no longer a clear reason to work together.
The Council, including the Energy Subgroup, was established to improve efficiencies by coordinating work across the federal government and using an “all-of-government approach” to find solutions that address tribal needs. However, most of the activities completed to date have been the result of collaboration between only IE and IEED—the only two agencies focused exclusively on Indian energy and also the only two that have a documented agreement to collaborate. The achievements of the Subgroup do not reflect the efforts of the other nine federal agencies that are its members. Without documenting how all members in the Subgroup are expected to collaborate, it is unclear how participating agencies will organize their individual and joint activities to address the factors that hinder Indian energy development. BIA, the lead agency responsible for forming the Service Center, did not follow some leading practices or adhere to agency guidance during early stages of developing the Service Center—which may impact its effectiveness in helping overcome the factors that have hindered Indian energy development. An interagency plan created in response to Executive Order 13604, Interior’s Departmental Manual, and our prior work offer leading practices that have been shown to enhance the effectiveness of collaborative efforts, improve permitting, and increase the likelihood of success for organizational change. These practices include (1) creation of a lead agency or single point of contact to coordinate regulatory responsibilities by multiple agencies, (2) involvement by and active participation with all relevant stakeholders, and (3) clear identification and documentation of the rationale of key decisions. Specific leading practices that BIA did not fully incorporate when implementing the Service Center follow. The Service Center has not been formed as a center point of collaboration for all regulatory agencies involved with energy development. 
In June 2015, we reported that the added complexity of the federal process, which can include multiple regulatory agencies, prevents many developers from pursuing Indian energy resources for development. Interior has recognized the need for collaboration in the regulatory process and described the Service Center as a center point of collaboration for permitting that will break down barriers between federal agencies. In addition, the memorandum of understanding establishing the Service Center states that it will serve as a center point for collaboration with other federal departments for expediting oil and gas development and as a point of contact for other agencies to resolve development issues. The Service Center may increase collaboration between BIA and BLM on some permitting requirements associated with oil and gas development. This is because, according to BIA officials, only entities that participate in the Indian Energy and Minerals Steering Committee—BIA, BLM, ONRR, and OST—were included in the Service Center. According to BIA officials, no other federal agencies were included because of concerns that including them would delay establishing the Service Center. BIA has neither included a regulatory agency within Interior, FWS, as a partner in the Service Center nor identified opportunities to incorporate other regulatory agencies outside of Interior, such as EPA, USDA, and the Army Corps of Engineers, as partners. As a result, the Service Center has not been formed as the center point to collaborate with all federal regulatory partners generally involved with energy development nor is it a single point of contact for permitting requirements. According to a tribal chairman, issues raised by FWS often create significant delays in permit approvals, and Interior’s failure to include FWS as part of the Service Center is a great error. 
Similarly, we reported in June 2015 that delays in the regulatory review and approval process can result in lost revenue and missed development opportunities. Our prior work has shown that having a lead agency for permitting is a management practice that helps increase efficiencies for the permitting process. In addition, the Quadrennial Energy Review Task Force recommended that, when possible, federal agencies should co-locate dedicated cross-disciplinary energy infrastructure teams that consist of environmental review and permitting staff from multiple federal agencies. By not serving as a central point of contact or lead agency, the Service Center will be limited in its ability to improve efficiencies in the federal regulatory process to only those activities that can benefit from increased coordination between BIA and BLM. BIA did not involve key stakeholders in the development of the Service Center. Interior’s fiscal year 2016 budget justification stated that BIA was working with DOE to develop and implement the Service Center and leverage and coordinate with DOE-funded programs to provide a full suite of energy development-related services to tribes. Further, BIA guidance states that it will seek the participation of agencies with special expertise regarding proposed actions. However, BIA did not include DOE in a participatory, advisory, or oversight role in the development of the Service Center. Further, although IEED developed the initial concept and proposal for the Service Center, BIA did not include IEED in the memorandum of understanding establishing the Service Center. Our prior work has shown that involvement by all relevant stakeholders improves the likelihood of success of interagency collaboration. DOE and IEED possess significant energy expertise. By excluding these agencies from the development and implementation of the Service Center effort, BIA has missed an opportunity to incorporate their expertise into its efforts. 
Moreover, BIA did not effectively involve employees to obtain their ideas and gain their ownership for the transformation associated with development of the Service Center—a leading practice for effective organizational change. Most of the BIA agency officials we met with told us they were not aware of implementation plans for the Service Center or its intended purpose. BIA identified its agency offices as the primary customers of the Service Center, yet BIA did not request their ideas or thoughts or identify potential employee concerns. Several BIA and tribal officials said they are concerned that the creation of a new office (the Service Center) may add another layer of bureaucracy to the process of reviewing and approving energy-related documents. We have previously reported that, as a leading practice, successful change initiatives include employee involvement to help create the opportunity to increase employees' understanding and acceptance of organizational goals and objectives, and gain ownership for the changes that are occurring in the organization. Such involvement strengthens the transformation process by allowing employees to share their experiences and shape policies. By not involving its employees in the development of the Service Center, BIA is missing an important opportunity to ensure the success of the changes that it hopes to achieve.

BIA did not document its basis for key decisions. The process BIA followed to develop the Service Center and the basis for key decisions it made are unclear because BIA did not document the rationale for key management decisions or the alternatives considered. According to Interior's Departmental Manual, agencies that propose a new office are to document the rationale for selecting the proposed organizational structure and to consider whether the new office contributes to fragmentation of the organization.
Our prior work has shown that effective organizational change is based on a clearly presented business-case or cost-benefit analysis and grounded in accurate and reliable data, both of which can show stakeholders why a particular initiative and alternatives are being considered. BIA officials said they did not document the basis for key decisions because they were not aware of the requirement. In addition, a few Interior officials said that BIA did not present alternatives to the full Indian Energy and Minerals Steering Committee, even though it was making decisions about the structure and placement of the Service Center. Without documentation to justify the rationale behind key management decisions, it is unclear if the Service Center, as currently designed, is the best way to address the identified problem. Further, several tribal organizations and tribal leaders made recommendations related to the creation of the Service Center that are not currently reflected in BIA's implementation of the Service Center. For example, the Coalition of Large Tribes Resolution calls for the Service Center to be the central location for permit review and approval support for Indian energy development. Without documentation of alternatives considered or key management decisions, it is unclear whether these requests were appropriately considered. Our prior work has shown that fixing the wrong problems, or even worse, fixing the right problems poorly, could cause more harm than good.

In June 2015, we reported that BIA's long-standing workforce challenges, such as inadequate staff resources and staff at some offices without the skills needed to effectively review energy-related documents, were also factors hindering Indian energy development. In this review, we found that BIA also has high vacancy rates at some agency offices, and it has not conducted key workforce planning activities that may be further contributing to its long-standing workforce challenges.
Federal internal control standards, Office of Personnel Management standards, and our prior work identify standards or leading practices for effective workforce management. The standards and leading practices include the following: (1) possessing and maintaining staff with a level of competence that allows them to accomplish their assigned duties; (2) identifying the key skills and competencies the workforce needs to achieve current and future agency goals and missions, assessing any skills gaps, and monitoring progress towards addressing gaps; and (3) conducting resource planning to determine the appropriate geographic and organizational deployment to support goals and strategies. BIA has not taken steps to provide reasonable assurance it has staff with the skills needed to accomplish their assigned duties and has not conducted workforce planning activities consistent with these standards and leading practices.

Some BIA offices have high vacancy rates and may not have staff with the level of competence that allows them to review some energy development documents. We found that some BIA offices have high vacancy rates for key energy development positions, and some offices reported not having staff with key skills to review energy-related documents. For example, according to data we collected from 17 BIA agency offices between November 2015 and January 2016, vacancy rates ranged from less than 1 percent to 69 percent in those offices. Vacancy rates for the realty position—a key position to process leases and other energy-related documents—ranged from no vacancies in five offices to 55 percent in one office. In some cases, these vacancies have been long-standing. For example, at the BIA Uintah and Ouray Agency, located in Utah, two of the six vacant realty positions have been vacant since September 2014.
According to BIA officials, the high vacancy rates can be generally attributed to a number of factors, including employee early-out and buy-out retirements in 2013, difficulties hiring qualified staff to work in geographically remote locations, and staff transfers to other positions. For example, several realty positions in BIA agency offices were vacated because staff transferred to positions in BIA's Land Buy-Back Program. According to BIA officials, as of May 2016, the Land Buy-Back Program had hired 15 realty staff, 12 of whom were from BIA regions and agency offices. In addition to high vacancy rates at some offices, some BIA agency and regional offices may lack the expertise needed to review and approve energy-related documents. For example, BIA agency officials in an area where tribes are considering developing wind farms told us that they would not feel comfortable approving proposed wind leases because their staff do not have the expertise to review such proposals. Consequently, these officials told us that they would send a proposed wind lease to higher ranking officials in the regional office for review. Similarly, an official from the regional office stated that it does not have the required expertise and would forward such a proposal to senior officials in Interior's Office of the Solicitor. The Director of BIA told us that BIA agency offices generally do not have the expertise to help tribes with solar and wind development because it is rare that such skills are needed. In another example, a BIA agency office identified the need for a petroleum engineer to conduct technical reviews of oil and gas documents in a timely manner. However, the office has been unable to hire a petroleum engineer because it cannot compete with private industry salaries. According to BIA officials, additional staff resources and key skills related to energy development will be provided through the Service Center, once it is fully implemented.
BIA has not identified key skills needed or skill gaps. The extent to which BIA does not have key skills throughout the bureau is unknown because BIA has not identified the key skills it needs and the extent to which it has skill gaps. BIA officials said that a skill gap assessment is not needed at this time because of the additional staff being hired through the Service Center. However, without key information on its workforce, as called for by Office of Personnel Management standards and our prior work, the agency risks hiring personnel who do not fill critical gaps, and the extent to which the Service Center will be able to address these gaps is unknown at this time. A BIA official also told us that in August 2015, the Assistant Secretary for Indian Affairs contracted with a consultant to develop a strategic workforce plan for BIA and the Bureau of Indian Education. However, according to a BIA official, the Bureau of Indian Education and BIA management are the primary focus of the work, and the resulting plan will not include agency offices, the level at which tribes generally interact with BIA for energy-related activities.

BIA does not have a documented process to provide reasonable assurance that workforce resources are appropriately deployed in the organization and align with goals. According to several BIA and tribal officials, the workforce composition of agency offices is not regularly reviewed to provide reasonable assurance it is consistent with BIA's mission and individual tribes' priorities and goals. BIA officials said that an agency office's workforce composition is routinely carried over from year to year, and positions are filled based on the budget, without consideration of existing workload needs or current tribal and agency priorities. A BIA official told us that this practice can result in a surplus of staff in some departments and staff shortages in others.
In some of our meetings with BIA agency and tribal officials, we were told that BIA agency offices are not aware of tribal priorities, in part, because of poor relations between BIA agency offices and tribal leadership. Officials from one tribe said that because of concerns that BIA was not routinely seeking tribal input on priorities and goals, the tribe created a position to serve as a liaison between the BIA and tribal leadership. Among other things, the liaison routinely meets with both tribal leaders and BIA officials to ensure BIA is aware of tribal priorities and goals. Without current workforce information on key skills needed for energy development, tribal goals and priorities, and potential workforce resource gaps, BIA may not have the right people with the right skills doing the right jobs in the right place at the right time and cannot provide decision makers with information on its staffing needs going forward.

In 2012 and 2013, respectively, DOE issued policy guidance and procurement guidance to give preference to tribes when DOE facilities contract to purchase renewable energy products or by-products, including electricity, energy sources, and renewable energy credits, but this guidance applies only to DOE, and GSA has not issued government-wide guidance. The purchase preference provision in the Energy Policy Act of 2005 authorizes federal agencies to give preference to a tribe or tribal enterprise when purchasing electricity or any other energy or energy by-product as long as federal agencies do not pay more than the prevailing market prices or obtain less than prevailing market terms and conditions. According to DOE documentation, several tribes had approached federal agencies to negotiate the sale of electricity from tribal renewable energy generation facilities, but negotiations had not resulted in federal purchases for a variety of reasons, including a lack of policy support and implementing guidance for the tribal preference provision.
In response, DOE developed a policy to better enable the department to use the tribal preference provision. Under DOE's policy statement and procurement guidance, DOE facilities can use a purchase preference when a tribal nation holds a majority ownership position in a renewable energy project, provided that the agency does not pay more than the prevailing market price or obtain less than prevailing market terms and conditions. The guidance provides for limiting competition to qualified Indian tribes and tribal majority-owned organizations for the purchase of renewable energy, renewable energy products, and renewable energy by-products. In contrast, GSA, the federal entity with general statutory authority to enter into utility service contracts of up to 10 years for all federal agencies, has not developed implementing guidance for the tribal preference provision contained in the Energy Policy Act of 2005, according to GSA officials. During our review, we identified one instance in which a tribe owned a majority position in a renewable energy project and submitted a bid in response to a GSA solicitation for energy. In this instance, GSA officials told us that they did not apply the tribal preference provision because GSA lacked implementing guidance. In addition, GSA officials said that the existence of the tribal preference provision and corresponding lack of implementing guidance created uncertainty and increased the time and costs associated with the solicitation review process. GSA officials also told us that they do not need to issue separate tribal preference provision guidance because DOE's guidance is comprehensive. However, the DOE guidance is intended only for DOE, and GSA has not adopted the guidance for any GSA purchases of energy products.
Without GSA guidance on the tribal preference provision—even if that guidance mirrors DOE's guidance—it is unclear if GSA's procurement officials will have the information they need to apply the preference to GSA's purchases. We discussed our findings with members of the FAR Council, which is the regulatory body established to lead, direct, and coordinate government-wide procurement guidance and regulations in the federal government. According to officials, in response to our finding, the FAR Council plans to revisit the tribal preference provisions and survey agencies to seek their input on whether government-wide guidance to implement the preference should be included in the FAR. Officials said the FAR Council will consider agencies' input and determine if it should issue regulations implementing the preference authority. Additionally, the officials told us the FAR Council will examine DOE's guidance in this area, as well as GSA's plans to make other agencies aware of this preference and the DOE guidance in all future delegations for renewable energies.

Numerous federal agencies offer programs that could be used to assist tribes with energy development activities. However, a few stakeholders said impediments may limit or restrict tribal participation in some of these programs. Federal agencies could identify and seek to remove regulatory, statutory, and procedural impediments that limit or restrict tribal participation. For example, according to a 2016 DOE report, DOE's Title XVII loan program provides loan guarantees to accelerate the development of innovative clean-energy technology and has more than $24 billion in remaining loan authority to help finance clean-energy projects. However, according to DOE officials, no tribes have applied to DOE's Title XVII loan program because various program requirements discourage tribal participation.
For instance, according to officials, DOE considers whether applicants have prior experience and knowledge to execute the kind of project for which they are seeking a DOE loan guarantee. The officials told us that because tribes have not had significant development opportunities that would provide prior experience, this consideration limits tribes' abilities to compete with other applicants seeking limited resources. A few stakeholders also told us that extensive time and monetary resources are required to apply for assistance from some federal programs, making it difficult for some tribes to seek assistance. For example, DOE requires an application fee of $50,000 for its Title XVII loan program. According to DOE officials, high application fees discourage some tribes from applying. Two other stakeholders told us that tribes and tribal entities dedicate significant financial resources and time to complete grant applications for federal programs. Because of this extensive resource commitment, combined with the low probability of receiving assistance, a few stakeholders said they are reluctant to seek federal assistance in the future. Several stakeholders said the typical size of awards from IEED limits the types of projects that benefit from assistance. Projects that require significant funds are generally not awarded because officials want to fund numerous projects, according to Interior officials. For instance, according to a 2015 Interior report, a tribe requested $321,000 from IEED to drill a geothermal well, but the request was rejected because of the expense relative to funds available. DOE and Interior officials told us they receive significantly more requests for assistance with viable projects than they can fund within existing budget levels. For instance, in 2015, IEED received 22 requests for assistance through its Tribal Energy Development Capacity Program and was able to fund 10 projects, according to IEED officials.
In 2014, IEED received requests for $27.5 million in assistance through its Energy and Mineral Development Program and was able to award $9.5 million.

Recognizing the importance of a collaborative federal approach to help Indian tribes achieve their energy goals and to more efficiently fulfill regulatory responsibilities and manage some Indian energy resources, the President and Interior undertook two key initiatives, in the form of the Energy Subgroup and the Service Center. However, to be effective, these initiatives rely on collaboration among federal agencies and programs, which can be difficult to achieve. Leading practices can help agencies enhance and sustain their collaborative efforts; however, federal agencies involved in both of these efforts have not incorporated some of these practices. The Energy Subgroup has not identified the resources it needs to achieve its goals or a funding model, and the roles of each partnering agency have not been identified and documented. In addition, BIA, in leading the creation of the Service Center, has not established a single point of contact or lead agency for regulatory activities; has not sought or fully considered input from key stakeholders, such as BIA agency office employees; and has not documented the rationale for key decisions. By following leading collaborative practices, both the Energy Subgroup and the Service Center have the potential to more effectively assist tribes in overcoming the factors that hinder Indian energy development. In addition, through the Service Center, BIA plans to hire numerous new staff over the next 2 years, which could resolve some of the long-standing workforce challenges that have hindered Indian energy development. However, BIA is hiring new staff without incorporating effective workforce planning principles.
Specifically, BIA has not assessed key skills needed to fulfill its responsibilities related to energy development or identified skill gaps, and does not have a documented process to provide reasonable assurance its workforce composition at agency offices is consistent with its mission, goals, and tribal priorities. As a result, BIA cannot provide reasonable assurance that it has the right people in place with the right skills to effectively meet its responsibilities or that new staff will fill skill gaps. The Energy Policy Act of 2005 authorization of a preference for tribal entities has the potential to increase tribal access to the largest single purchaser of energy in the United States—the federal government. However, GSA—the primary entity responsible for purchasing power for the federal government—has not developed guidance to implement the authority to provide a tribal preference government-wide. By developing such guidance, GSA would help ensure that contracting officials are aware of when the authority for a tribal preference is applicable and how it should be applied to future purchases of electricity and energy products.

We recommend that the Secretary of Energy, the Secretary of the Interior, and the Administrator of the General Services Administration, as appropriate, take the following 10 actions. We recommend that the Secretary of Energy designate a career senior-level federal government employee to serve as co-chair of the White House Council on Native American Affairs' Energy Subgroup. We recommend that the Secretary of the Interior, as Chair of the White House Council on Native American Affairs, direct the co-chairs of the Council's Energy Subgroup to take the following two actions: (1) Identify appropriate resources needed for the Subgroup to accomplish its goals, as well as a funding model. (2) Establish formal agreements with all agencies identified for inclusion in the Subgroup to encourage participation.
We recommend that the Secretary of the Interior direct the Director of the Bureau of Indian Affairs to take the following six actions: (1) Include the other regulatory agencies in the Service Center, such as FWS, EPA, and the Army Corps of Engineers, so that the Service Center can act as a single point of contact or a lead agency to coordinate and navigate the regulatory process. (2) Establish formal agreements with IEED and DOE that identify, at a minimum, the advisory or support role of each office. (3) Establish a documented process for seeking and obtaining input from key stakeholders, such as BIA employees, on the Service Center activities. (4) Document the rationale for key decisions related to the establishment of the Service Center, such as alternatives and tribal requests that were considered. (5) Incorporate effective workforce planning standards by assessing critical skills and competencies needed to fulfill BIA's responsibilities related to energy development and by identifying potential gaps. (6) Establish a documented process for assessing BIA's workforce composition at agency offices taking into account BIA's mission, goals, and tribal priorities.

We recommend that the Administrator of the General Services Administration develop implementing guidance to clarify how contracting officials should implement and apply the statutory authority to provide a tribal preference to future acquisitions of energy products.

We provided a draft of this report for review and comment to the Secretary of the Interior, the Secretary of Energy, and the Administrator of the General Services Administration. All three agencies provided written comments. Interior agreed with all 8 recommendations directed to the agency and described some actions it intends to take. Interior's comments are reprinted in appendix I. DOE agreed with the 1 recommendation directed to the agency and provided technical comments. DOE's comments are reprinted in appendix II.
GSA also agreed with the 1 recommendation directed to the agency and described the actions it intends to take. GSA’s comments are reprinted in appendix III. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretaries of the Interior and Energy, the Administrator of the General Services Administration, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. In addition to the individual named above, Christine Kehr (Assistant Director), Patrick Bernard, Richard Burkard, Patricia Chen, Cindy Gilbert, Alison O’Neill, Daniel Purdy, and Jay Spaan made key contributions to this report.
Indian tribes and their members hold considerable energy resources and may use these resources to provide economic benefits and improve the well-being of their communities. However, GAO and others have found that Indian energy development is hindered by several factors, such as a complex regulatory framework, BIA workforce challenges, and limited access to energy markets. Tribes and their members determine how to use their energy resources. In doing so, they work with multiple federal agencies with various roles in the development process—including a regulatory role, a role as a provider of technical and financial assistance, or a role as a purchaser of energy. GAO was asked to evaluate issues related to Indian energy development. This report examines, among other things, (1) federal efforts to help overcome factors that hinder development, (2) BIA's efforts to address workforce challenges, and (3) federal efforts to implement a preference authority to purchase energy from tribes. GAO analyzed federal data and documents and interviewed tribal and federal officials.

Two key federal initiatives led by the Department of the Interior (Interior)—the interagency White House Council on Native American Affairs' Energy Subgroup (Energy Subgroup) and Interior's Indian Energy Service Center (Service Center)—were implemented to help improve collaboration and the effectiveness of federal efforts to fulfill management responsibilities for Indian lands, assist tribes in developing their energy resources, and overcome any related challenges. However, the Energy Subgroup and the Service Center have not incorporated leading collaborative practices, which may limit the effectiveness of these initiatives to address the factors that hinder Indian energy development.
For example, GAO found the following:

Energy Subgroup: Participating agencies have dedicated few staff and financial resources to the Subgroup and have not identified resources needed or a funding model—a leading practice to sustain collaborative efforts. Some participating agency officials noted that the effectiveness of the Subgroup is limited without dedicated resources. They also stated that key activities completed to date by the Subgroup are the result of agencies voluntarily applying budgetary resources to specific activities. Without dedicated resources and a funding model to support its activities, the extent to which the Energy Subgroup will be able to effectively accomplish its goals is unclear.

Service Center: Interior has recognized the need for collaboration in the regulatory process and described the Service Center as a central point of collaboration for permitting that will break down barriers between federal agencies. However, some regulatory agencies, such as the Fish and Wildlife Service, the Environmental Protection Agency, and the U.S. Army Corps of Engineers, have not been included as participants. Without the involvement of key regulatory agencies, the Service Center will be limited in its ability to improve efficiencies in the regulatory process for Indian energy development.

GAO and others have previously reported that Interior's Bureau of Indian Affairs (BIA) has longstanding workforce challenges that have hindered Indian energy development. In this review, GAO found that BIA has high vacancy rates at some agency offices and that the agency has not conducted key workforce planning activities, such as an assessment of work skills gaps. These workforce issues further contribute to BIA's inability to effectively support Indian energy development. Federal internal control standards recommend agencies identify the key skills and competencies their workforces need to achieve their goals and assess any skills gaps.
Until BIA undertakes such activities, it cannot ensure that it has a workforce with the right skills, appropriately aligned to meet the agency's goals and tribal priorities.

A provision in the Energy Policy Act of 2005 authorizes the federal government, the largest single consumer of energy in the nation, to give preference to tribes for purchases of electricity or other energy products. However, the General Services Administration (GSA), the federal agency with primary responsibility for purchasing energy, has not developed guidance to implement this provision government-wide; doing so could help to increase tribal access to the federal government's energy purchasing programs.

GAO is making 10 recommendations, including that the Secretary of the Interior identify resources and a funding model for the Energy Subgroup, involve other agencies in the Service Center so it is a single point of contact for the regulatory process, and require BIA to undertake workforce planning activities. GAO is also recommending that the Administrator of GSA develop implementing guidance relating to purchasing energy from tribes. Interior, DOE, and GSA concurred with GAO's recommendations.
In 1995, DOD’s TMA introduced TRICARE’s purchased care system. Since then, TMA has implemented three generations of contracts to support that system. The first generation of TRICARE contracts included seven MCSCs that covered 11 geographic health care regions nationwide. In 2001, GAO testified about the acquisition process for TRICARE’s first generation of MCSCs, reporting that TMA’s approach to the acquisition process for these contracts resulted in administrative challenges and contributed to funding shortfalls. In 2002, TMA made changes to its second generation of TRICARE MCSCs, consolidating the number of regions from 11 to 3, and reducing the number of MCSCs from seven to three. TMA also changed the management and oversight of TRICARE’s purchased care and direct care systems through the development of a governance plan. The plan established a new, regional governance structure, including the creation of TRICARE regional offices to manage the three newly established U.S. regions: North, South, and West. TMA retained the three regions for the third generation of TRICARE MCSCs. In 2008, TMA issued a request for proposals (RFP) and six offerors submitted seven proposals—two proposals in the North region, three in the South region, and two in the West region. One offeror submitted a proposal in both the South and West regions. The RFP provided that an offeror could not receive an award for more than one of the three U.S. regions. Therefore, TMA awarded one regional contract to three different offerors. TMA initially awarded a contract to Aetna Government Health Plan (Aetna) in the North region, UnitedHealth Military & Veterans Services (UnitedHealth) in the South region, and TriWest Healthcare Alliance Corporation (TriWest) in the West region. Each award decision was protested; protests were filed with GAO in the North and South regions, and an agency-level protest was filed in the West region. 
As a result of sustained decisions in all three regions, TMA implemented corrective actions to address the recommendations in the post-award bid protest decisions and announced different awards in all three regions. Specifically, Health Net Federal Services (Health Net), the incumbent contractor, was awarded the North region MCSC; Humana Military Healthcare Services (Humana), also an incumbent contractor, received the South region MCSC; and UnitedHealth, a non-incumbent contractor, received the West region MCSC. The award decisions in the South and West regions were protested, but withstood these challenges when the protests were denied.

Federal regulations—the FAR and DFARS—largely defined the acquisition process TMA used to obtain health care services through TRICARE's third generation MCSCs. This acquisition process included steps necessary to plan for, develop, and award these contracts. TMA policy provided further guidance on the acquisition planning and process steps beyond what was required in the federal regulations. This included developing additional documentation and obtaining additional approvals from senior acquisition officials within TMA, as well as conducting peer reviews of the acquisition process. TMA's acquisition staff conducted a three-phased approach to the contract award process—(1) planning the acquisition, (2) issuing the RFP and soliciting responses, and (3) awarding the contracts—for TRICARE's third generation MCSCs. (See fig. 1.)

Acquisition planning. According to a senior TRICARE acquisition official, staff in the former TMA Requirements Branch developed requirements for TRICARE's third generation MCSCs during the acquisition planning phase. This senior official explained that the Requirements Branch was disbanded in 2009 because TMA leadership officials determined that the responsibility for developing requirements should be located within the program management office requiring the services and not TMA's acquisition office.
An official who participated in the third generation MCSCs’ acquisition process told us that the Requirements Branch reviewed the contract requirements of the second generation MCSCs, as well as any modifications, as a starting point for developing the requirements for the third generation MCSCs. TRICARE acquisition officials developed one document that combined the acquisition strategy and plan for the third generation MCSCs. The document outlined a statement of need that identified why health care services were being acquired and the objectives to be achieved. The document also specified activities, such as market research, that TMA would undertake prior to issuing an RFP. Market research was accomplished by publishing requests for information, which led to meetings between TMA and companies in the health care industry to collect information and feedback about the acquisition. TMA conducted further market research by sharing the draft RFP with companies in the industry to solicit feedback on the RFP, which included the proposed contract requirements. TMA also developed a source selection plan, which defined the evaluation factors and subfactors, and how much weight or importance should be assigned to each factor and subfactor when making a source selection. In addition, the plan identified the source selection team—key individuals participating in the evaluation and source selection process—as well as the procedures to be followed. Request for proposals. Following the acquisition planning phase, TMA issued an RFP. The RFP documented TMA’s requirements, including the contract type, significant contract dates, pricing arrangements, and the criteria to be used to assess offerors’ proposals. The RFP also documented information presented in both the acquisition and the source selection plans. Award.
Once proposals were received, the proposals were evaluated by the source selection team consisting of four primary entities: the teams that comprised the Source Selection Evaluation Board (Evaluation Board), the group that made up the Source Selection Advisory Council (Advisory Council), an individual serving as the Source Selection Authority (Selection Authority), and an individual serving as the Procuring Contracting Officer (Contracting Officer). Each entity had specific tasks to complete during the award phase and performed these in a specific order. (See fig. 2.) TMA used a process established in the source selection plan to evaluate offerors’ proposals. To accomplish this, Evaluation Board teams reviewed offerors’ proposals against the three RFP evaluation factors and their relevant subfactors. The evaluation factors, in descending order of importance, were: (1) technical approach, (2) past performance, and (3) price/cost. These evaluation factors were developed to target critical aspects of the program for review and evaluation. The Evaluation Board evaluated each proposal against these factors. Ratings were assigned to each of the offerors’ proposals under the technical and past performance evaluation factors, and each offeror’s total proposal price was determined during the price/cost evaluation. The source selection team used a best-value tradeoff process to compare the relative merits of the offerors’ proposals under the various evaluation factors. The RFP provided that the technical approach and past performance factors, combined, were significantly more important than the price/cost factor, which allowed TMA to accept other than the lowest priced proposal in favor of a technically superior proposal in the best-value tradeoff decision. The technical approach factor was used to evaluate the offerors’ proposed approach—how the offeror intended to deliver services to fulfill contract requirements.
Under this factor, the RFP identified seven evaluation subfactors, including network development and maintenance (which encompassed the consideration of network provider discounts), and claims processing. Each subfactor received its own rating, and the subfactors were equally weighted during the evaluation. The technical evaluation team’s responsibility was to evaluate how well an offeror’s proposed approach met or exceeded TMA’s minimum requirements for each subfactor. The past performance factor was used to evaluate an offeror’s ability to supply services based on a demonstrated record of performance. If an offeror did not have relevant past performance, the Evaluation Board’s past performance evaluation team was allowed to consider information from a predecessor company or parent organization. The subsequent ratings assigned to past performance considered each offeror’s demonstrated recent and relevant record of performance to predict the offeror’s likelihood of success in meeting the current contract requirements. The price/cost factor was used to evaluate whether the prices and costs in each offeror’s proposal were reasonable and realistic. In evaluating proposals under the RFP’s price/cost factor, the Evaluation Board was to arrive at a total evaluated price for each of the proposals and could also use the results of a price realism analysis in assessing performance risks. An official who participated in the third generation MCSC acquisition process told us that offerors were able to ask the Contracting Officer clarifying questions about the RFP and adjust their proposals based on those discussions. According to this official, after the Contracting Officer received the final proposal revisions, the Evaluation Board teams completed their evaluation and the Chair prepared the report.
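The factor structure described above can be illustrated with a small sketch. The numeric weights, subfactor scores, and offeror figures below are hypothetical assumptions (the RFP expressed relative importance qualitatively, not as published numeric weights), but the sketch shows how seven equally weighted subfactor scores roll up into a technical rating, and how a tradeoff in which the non-price factors dominate can select a higher-priced but technically superior proposal.

```python
# Hypothetical sketch of a best-value tradeoff. All weights, scores, and
# prices are invented for illustration; the actual RFP stated relative
# importance qualitatively rather than as numeric weights.
from statistics import mean


def technical_rating(subfactor_scores):
    """Seven equally weighted subfactor scores (0-100) roll up to one rating."""
    return mean(subfactor_scores)


def best_value(offerors, w_technical=0.45, w_past=0.35, w_price=0.20):
    """Non-price weights (0.45 + 0.35) dominate price (0.20), so a
    technically superior proposal can beat a cheaper one."""
    lowest_price = min(o["price"] for o in offerors)

    def score(o):
        price_score = 100 * lowest_price / o["price"]  # cheaper -> higher score
        return (w_technical * technical_rating(o["subfactors"])
                + w_past * o["past_performance"]
                + w_price * price_score)

    return max(offerors, key=score)


offerors = [
    {"name": "Offeror A", "subfactors": [90, 88, 92, 85, 91, 89, 90],
     "past_performance": 92, "price": 20.5e9},
    {"name": "Offeror B", "subfactors": [75, 70, 72, 74, 71, 73, 70],
     "past_performance": 80, "price": 18.0e9},
]

winner = best_value(offerors)
print(winner["name"])  # the costlier but technically stronger proposal wins
```

Under these assumed weights, Offeror A's technical and past performance advantage outweighs Offeror B's lower price, mirroring the RFP's provision that the non-price factors combined were significantly more important than price/cost.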
The Selection Authority considered information provided in the Evaluation Board report, as well as the recommendations presented in the Advisory Council report, and then selected the offeror whose proposal represented the best value to the government. The final source selection decisions represented the Selection Authority’s independent judgment, which in some instances deviated from the judgments made during the underlying evaluations. For example, in the 2009 award decision in the South region, the Selection Authority disagreed with the judgment of the Evaluation Board that the proposal of one offeror was more advantageous than the awardee’s proposal under one of the technical subfactors, finding instead that the two proposals were equally advantageous under that subfactor. After TMA announced the awards to the selected offerors, post-award bid protests were filed by unsuccessful offerors. Peer reviews of the acquisition process for certain contracts became TMA policy in September 2008 after the issuance of the RFP for TRICARE’s third generation MCSCs. Following bid protests in 2009, the Selection Authority requested that, as a best business practice, a peer review of the acquisition process for these contracts be conducted. The Deputy Director of Defense Procurement and Acquisition Policy documented in a memorandum the peer review team’s findings related to the acquisition process, identifying some of the same issues that were raised in the bid protests, such as the adequacy of discussions during TMA’s evaluation. The memorandum also included TMA’s responses to the peer review team’s concerns, citing whether TMA agreed or disagreed with each finding and why. If TMA agreed with a finding, it also identified how the issue would be addressed. According to a senior TRICARE acquisition official, post-award peer reviews are expected to be conducted before the exercise of each option-year period for each MCSC.
This official explained that two independent post-award peer reviews of TRICARE’s third generation MCSCs have been conducted. The first review, conducted in March 2012, and the second, conducted in March 2013, found no significant issues or concerns regarding the performance of any of the MCSC contractors. Subsequent post-award peer reviews of the third generation MCSCs will be conducted concurrently on an annual basis prior to the exercise of future option-year periods. Documentation from the peer reviews is to be included as part of the acquisition file for TRICARE’s third generation MCSCs. Bid protests were filed by unsuccessful offerors in all three TRICARE regions. Most of the bid protests raised issues related to TMA’s evaluation of offerors’ proposals under the three RFP evaluation factors: technical approach, past performance, and price/cost. There were six bid protests from offerors across the three TRICARE regions. Four offerors filed five separate bid protests with GAO, and one offeror filed an agency-level protest with TMA. Out of the six bid protests filed across the three TRICARE regions, two protests were sustained by GAO, one protest was sustained by TMA, and the remaining three protests were denied by GAO. In response to decisions and recommendations made in the sustained bid protest decisions, TMA implemented corrective actions that resulted in new award decisions in each of the three TRICARE regions. The new award decisions withstood subsequent bid protest challenges, which were filed in two of the three TRICARE regions, the South and the West. For TRICARE’s third generation MCSCs, the offerors that filed the six bid protests raised various issues and each protest varied in the number of issues raised. However, a common theme cited by all offerors was TMA’s evaluation of proposals under the three evaluation factors: technical approach, past performance, and price/cost.
Of the three evaluation factors, offerors that filed bid protests most frequently challenged TMA’s evaluation of proposals under the technical approach factor and, in particular, the subfactor under which TMA evaluated network provider discounts. Network provider discounts may result in reduced health care costs, and TMA was to consider an offeror’s proposed network provider discounts, if any, during the evaluation of technical proposals under the network development and maintenance subfactor. Offerors challenged TMA’s evaluation of network provider discounts in four protests, two of which were sustained and two of which were denied. In general, the issue concerned whether TMA properly evaluated offerors’ proposed network provider discounts in evaluating the relative merit of competing technical proposals under the network development and maintenance subfactor. Offerors that filed bid protests also raised issues related to TMA’s evaluation of proposals under the past performance and price/cost factors. For example, in one bid protest, the offeror that filed the protest claimed that TMA improperly evaluated the awardee’s past performance based on the past performance of its affiliated companies. Additionally, in the same protest, the offeror that filed the bid protest also alleged that TMA conducted a flawed price realism analysis in evaluating the awardee’s proposal under the price/cost factor. Offerors that filed bid protests sometimes raised issues that went beyond TMA’s evaluation of proposals under specific evaluation factors. Other issues raised by offerors that filed bid protests included improper business practices, unfair competitive advantage, conflict of interest, improper source selection, inadequate discussions to resolve proposal weaknesses, and TMA’s alleged failure to penalize offerors for not following RFP instructions regarding right of first refusal, page limits, and Medicare rate uncertainty. (See table 1.) 
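Simple arithmetic illustrates why network provider discounts became the most frequently protested evaluation issue. The dollar amounts and discount rates below are invented for illustration only, but they show how even a modest difference in an offeror's average proposed discount, applied to projected network health care costs, translates into a large difference in cost to the government.

```python
# Hypothetical illustration of why proposed network provider discounts
# mattered in the evaluation. All figures below are invented for the sketch;
# no actual TRICARE cost or discount data is used.
def projected_savings(network_costs, discount_rate):
    """Savings to the government from an average provider discount rate."""
    return network_costs * discount_rate


network_costs = 15e9        # hypothetical projected network care costs ($)
offeror_a_discount = 0.06   # hypothetical 6% average proposed discount
offeror_b_discount = 0.04   # hypothetical 4% average proposed discount

delta = (projected_savings(network_costs, offeror_a_discount)
         - projected_savings(network_costs, offeror_b_discount))
print(f"${delta / 1e6:.0f} million")  # a 2-point gap is worth $300 million here
```

At this assumed scale, a two-percentage-point difference in average discounts is worth hundreds of millions of dollars, which is why protesters argued that an evaluation failing to properly weigh proposed discounts could distort the best-value comparison.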
Appendix I includes additional details on each of the six bid protests. TRICARE acquisition officials reported they have identified several areas where changes could be made to improve the acquisition process for future TRICARE MCSC acquisitions, including those scheduled to be awarded in 2018. According to TRICARE acquisition officials, preliminary lessons learned from the third generation acquisition process and resulting bid protests include (1) improvements in communication and documentation to increase transparency during the evaluation of the proposals and (2) increases to the length of the acquisition process to allow for additional time to evaluate proposals and for the transition from one MCSC to another. For TRICARE’s third generation MCSCs, TRICARE acquisition officials told us the sustained bid protest decisions in all three TRICARE regions prompted them to take corrective actions and evaluate revised proposals. According to TRICARE acquisition officials, among the issues they considered was whether TMA had clearly communicated to offerors through the RFP how it would evaluate the technical approach factor, specifically the subfactor related to network provider discounts, and whether TMA had adequately documented discussions during its evaluation of the proposals. In response to the sustained bid protests, TRICARE acquisition officials told us they were able to identify some preliminary lessons learned, which they implemented during their evaluation of revised proposals in the South and West regions. TMA accomplished this in two ways. Communication: To more clearly communicate how TMA would evaluate proposals, TMA issued an amended RFP for the South and West regions and allowed offerors to submit proposal revisions. In the amended RFP, TMA added language to clarify how network provider discounts would be considered as part of the technical evaluation and the subsequent best value analysis. 
Documentation: To address the need for adequate documentation of discussions during the evaluation process, TRICARE acquisition officials who participated in the MCSC evaluation process told us they improved their documentation during the reevaluation of proposals in the South and West regions. Specifically, these officials said that they were more thorough in their evaluation of the revised proposals and more rigorous in documenting their discussions. Adequate documentation of discussions during the evaluation process ensured that the documentation accurately reflected the evaluation process that occurred and that all evaluations were conducted in accordance with the RFP. Additionally, one of these officials told us that TMA had incorporated these preliminary lessons learned from the third generation MCSC acquisition process in subsequent acquisitions. For example, TMA incorporated these preliminary lessons learned into the RFPs for TRICARE’s three dental plans. Specifically, this official told us that in drafting the RFP for one of TRICARE’s dental plans—the TRICARE Dental Program—officials made sure to clearly define how TMA planned to assess the evaluation factors in the RFP to ensure that potential offerors understood the scope and magnitude of how the evaluation factors would be considered for awarding the contract. TRICARE acquisition officials told us they also learned from the TRICARE third generation MCSC awards that more time may be required for the acquisition process. Specifically, officials said that additional time may be required to conduct proposal evaluations. In addition, officials said that the transition period may need to be longer to accommodate a change in contractors. TRICARE acquisition officials who participated in the evaluation of TRICARE’s third generation MCSCs told us that TMA underestimated how much time was needed to evaluate proposals. 
Specifically, these officials told us that more time might be needed to conduct the evaluation of proposals under the technical approach factor because of the multiple evaluation subfactors that must also be considered. Out of the three evaluation factors—technical approach, past performance, and price/cost—the technical approach factor had the most subfactors. According to one TRICARE acquisition official, the number of technical subfactors made it more difficult to conduct the evaluation in the time allotted. This official suggested that for future MCSC acquisitions, DHA should consider whether all seven technical approach subfactors are necessary. Furthermore, the official stated that since some of these subfactors encompassed required administrative functions of the TRICARE Program and are laid out in its Operations Manual and other policy and guidance documents, offerors who are awarded the contract are expected to perform the required administrative functions. A senior TRICARE acquisition official told us that DHA is considering adding 2 or 3 months to the transition-in period for the fourth generation MCSCs to accommodate delays that may occur when responding to any bid protests that may be filed and transitioning from one contract to the next. This official explained that delays in initiating the contract performance periods for TRICARE’s third generation MCSCs could potentially increase costs if option periods are added to align the MCSCs’ end dates. The official explained that this is because terms for these option periods would be negotiated in a non-competitive environment, which may affect the government’s ability to get the best value in terms of price and quality. Other factors can also add time to the transition from one generation of MCSCs to the next, according to TRICARE acquisition officials.
For example, the start of the performance period for TRICARE’s third generation MCSC in the West region was delayed because a decision on the bid protest in that region could not be made until TMA made a new source selection decision in the South region following a sustained bid protest. In particular, the agency held UnitedHealth’s July 2009 West region protest in abeyance while TMA took corrective action following a sustained bid protest in the South region, where UnitedHealth was also an offeror. Because the same offeror could not win contract awards in more than one region, UnitedHealth’s West region protest would have become moot if it received the South region award following TMA’s evaluation of the revised proposals. After TMA awarded the South region contract to Humana in February 2011, UnitedHealth’s agency-level protest in the West region was revived. A senior TRICARE acquisition official also told us that transition delays may affect the beneficiaries who rely upon the services provided by the MCSC contractor. Specifically, the official said that more time may be required to transition from an incumbent contractor to a new contractor. A new contractor may need additional time to implement services for TRICARE beneficiaries, whereas the incumbent contractor essentially provides a continuation of services. For example, beneficiaries reported problems with referral authorization as well as customer service when UnitedHealth assumed management of the TRICARE West region contract on April 1, 2013. Despite the implementation of lessons learned from TRICARE’s third generation acquisition process and the related bid protests, TRICARE acquisition officials told us that they cannot confirm which, if any, of these lessons will be incorporated into the acquisition process for TRICARE’s fourth generation MCSCs scheduled for 2018. 
However, these officials noted that the acquisition process for previous TRICARE MCSCs, including lessons learned from related bid protests, is generally considered when initiating the acquisition process for the next generation of TRICARE MCSCs. DHA began developing an acquisition plan for TRICARE’s fourth generation MCSCs during the first quarter of fiscal year 2014, according to a TRICARE program official. We requested comments from DOD, but none were provided. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at draperd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. There were six bid protests filed by unsuccessful offerors across the three TRICARE regions. Out of the six bid protests filed across the three TRICARE regions, two protests were sustained by GAO, one protest was sustained by the TRICARE Management Activity (TMA), and the remaining three protests were denied by GAO. In response to decisions and recommendations made in the sustained bid protest decisions, TMA implemented corrective actions that resulted in new award decisions in each of the three TRICARE regions. The new award decisions withstood subsequent bid protest challenges that were filed in two of the three TRICARE regions, the South and the West. This appendix provides summary information regarding the issues raised in the six bid protests. With respect to sustained bid protests, we describe only the issues sustained and not the issues denied, which did not form the basis for the ultimate decision in the case.
In addition, we categorized protest issues at the level of evaluation factors, which were (1) technical approach, (2) past performance, and (3) price/cost. In some protests, unsuccessful offerors raised issues that went beyond the evaluation of proposals. We used the term “other” to encompass any protest issue that was not specific to the evaluation of a proposal under one of the three request for proposal (RFP) evaluation factors. Health Net Federal Services (Health Net) filed two bid protests challenging the award to Aetna Government Health Plans (Aetna) in the North region in July 2009. GAO sustained one of the two protests and recommended that TMA reevaluate proposals and take other actions consistent with the bid protest decision. TMA subsequently terminated the award to Aetna based on GAO’s decision sustaining the protest and made an award to Health Net, the incumbent contractor, which is currently performing in the North region. (See fig. 3.) In the first bid protest, Health Net contended that TMA violated federal procurement law by improperly disclosing Health Net’s proprietary pricing information prior to the award of the contract, which, for purposes of this report, we have classified in the category of “other” protest issues that go beyond the evaluation of proposals. GAO denied the bid protest as follows. Other. Health Net alleged that TMA posted Health Net’s pricing and proposal information on a public website, and also provided the information to Congress without disclosing that the information was competitively sensitive. GAO recognized that serious errors had occurred, but determined that the Contracting Officer had reasonably concluded that the competition was not compromised, in part because the website disclosure did not occur until after final proposals were due. Therefore, Aetna—Health Net’s competitor—could not have used the information to its advantage.
In the second protest, Health Net successfully challenged the award of the contract to Aetna on the basis of a number of issues including TMA’s evaluation of proposals and possible conflicts of interest. GAO sustained the bid protest based on the following reasons. Technical approach. Health Net argued that TMA failed in its evaluation to adequately account for the network provider discounts associated with its existing TRICARE network. GAO agreed and sustained this protest issue because TMA had not properly accounted for the potentially significant cost savings to the government that would result from Health Net’s proposed network provider discounts. Past performance. Health Net contended that TMA improperly assigned the highest past performance rating to Aetna’s proposal based on the past performance of its affiliated companies. GAO agreed and sustained this protest issue because TMA did not establish which of these affiliated companies were involved in the prior contracts or the roles, if any, that each of the affiliated companies would play in performing the TRICARE contract. In addition, GAO found that TMA failed to consider the fact that the contracts previously performed by the affiliates—evaluated as part of the past performance factor—were not comparable in size to the TRICARE managed care support contract (MCSC). Price/cost. Health Net contended that TMA’s evaluation of Aetna’s proposal and subsequent price realism analysis were flawed. GAO agreed and sustained this protest issue because TMA did not reasonably evaluate whether Aetna’s staffing plan, as related to its price/cost proposal, reflected a lack of technical understanding or proposal risk. GAO also found that TMA had not reasonably considered whether Aetna’s proposal to hire a high percentage of incumbent staff at reduced wages was realistic. Other. 
Health Net contended that Aetna gained an unfair competitive advantage in competing for TRICARE’s MCSC in the North region because Aetna had retained a former senior TMA official to assist with the preparation of its proposal. GAO found that the official had access to proprietary information related to Health Net’s performance of its incumbent contract and that this created at least the appearance of impropriety. GAO sustained the protest on the grounds that the Contracting Officer should have reviewed the matter, but did not do so because it was not brought to his attention. There were two bid protests in the South region. The first protest was filed by Humana Military Healthcare Services (Humana) in July 2009 challenging TMA’s award to UnitedHealth Military & Veterans Services (UnitedHealth). GAO sustained this protest and recommended that TMA reevaluate proposals consistent with its bid protest decision and make a new source selection decision. In implementing GAO’s recommendations, TMA issued an amended RFP and allowed offerors to submit revised proposals. TMA then reviewed the revised proposals and, based on this evaluation of revised proposals, awarded the contract to Humana, a different offeror than was initially awarded the contract. After TMA announced this award, a second protest was filed in the South region by UnitedHealth in March 2011. GAO denied this second protest. Humana—the incumbent contractor—is the current contractor for the South region. (See fig. 4.) In its protest, Humana contended that TMA failed in its evaluation to adequately account for the network provider discounts associated with its existing TRICARE network. GAO agreed and sustained the protest as follows. Technical approach. Humana claimed that TMA, during its technical evaluation, did not adequately account for the potentially significant cost savings to the government that would result from Humana’s network provider discounts. 
GAO recommended that TMA reevaluate the proposals consistent with GAO’s decision and make a new source selection decision. Following the sustained GAO decision in the Humana bid protest, TMA amended and reissued the RFP in the South region and provided the offerors an opportunity to revise their proposals, including providing more information about network provider discounts. After evaluating the revised proposals, TMA selected Humana for the contract award. In its protest filed with GAO of TMA’s contract award to Humana, UnitedHealth raised a number of issues involving TMA’s technical evaluation of network provider discounts, as well as other issues related to its analysis of the price/cost evaluation factor and its failure to penalize offerors for not following RFP instructions. GAO determined that none of these issues had merit and denied the bid protest for the following reasons. Technical approach. UnitedHealth contended that TMA failed to consider the substantial risk related to Humana’s ability to achieve its proposed network provider discounts. GAO found that TMA reasonably evaluated Humana’s proposed network provider discounts and denied this aspect of UnitedHealth’s protest. Price/cost. In challenging TMA’s price realism evaluation, UnitedHealth argued that TMA should have assigned a greater risk level to Humana’s revised proposal based on Humana’s plan to significantly reduce underwriting fees during the revision process. GAO found that TMA had reasonably assessed the risk associated with Humana’s revised underwriting fees and denied this issue. In addition, UnitedHealth argued that TMA failed to assign additional risk to Humana’s proposal based on the reduced staffing level of its claims processing subcontractor. GAO found that TMA adequately reviewed the subcontractor’s proposal and factored it into its overall assessment. Other.
UnitedHealth alleged that Humana failed to follow the RFP requirements regarding the following issues: Right of first refusal: UnitedHealth argued that Humana’s proposal deviated from the RFP requirement that military treatment facilities be given a right of first refusal to patient referrals and that TMA should have rejected the proposal or deemed it a significant weakness. GAO declined to consider UnitedHealth’s claim that Humana improperly deviated from these requirements because UnitedHealth had made a contradictory argument in the first South region bid protest. Page limits: UnitedHealth contended that Humana failed to adhere to a page limit on proposal revisions. GAO denied UnitedHealth’s argument, finding that Humana met the page limit in revisions to its technical proposal. Medicare rates: UnitedHealth alleged that Humana did not comply with an RFP requirement to acknowledge and discuss the linkage between TRICARE reimbursement rates and Medicare rates, which are uncertain and subject to change. In denying this issue, GAO rejected UnitedHealth’s interpretation of the RFP as requiring Humana to assume that Medicare rates would decline. There were two bid protests in the West region. The first protest was an agency-level protest filed by UnitedHealth in July 2009 challenging the award to TriWest Healthcare Alliance Corporation (TriWest). This protest was sustained and included a recommendation that TMA reevaluate proposals and make a new source selection decision that was reasonable and consistent with the RFP. In implementing this recommendation, TMA issued an amended RFP and allowed offerors to submit revised proposals. TMA then reviewed the revised proposals and, based on this evaluation of revised proposals, awarded the contract to UnitedHealth, a different offeror than was initially awarded the contract. After TMA announced the new award, a second West region protest was filed by TriWest in March 2012. 
GAO denied the second protest and UnitedHealth is the current contractor for the West region. (See fig. 5.) In the first West region bid protest, an agency-level protest filed with the Contracting Officer, UnitedHealth claimed that TMA did not conduct meaningful discussions that would have enabled UnitedHealth to correct a documented weakness in its technical proposal and that this weakness unreasonably tipped the source selection decision in favor of TriWest. For purposes of this report, we have classified this issue as “other” rather than as a proposal evaluation issue. TMA sustained the protest as follows. Other. UnitedHealth alleged that TMA failed to conduct meaningful discussions to alert UnitedHealth to a weakness assigned to its proposal under one of the technical approach evaluation subfactors involving claims processing. Under this subfactor, TMA had assessed a weakness in UnitedHealth’s plan for dealing with claims submitted by providers that were outside the West region. After reviewing the record of discussions between TMA and UnitedHealth during the contract evaluation process, TMA’s Contracting Officer determined that TMA had not conducted meaningful discussions in the area of claims processing. In addition to TMA’s failure to identify the proposal weakness through meaningful discussions, the Contracting Officer also found that the weakness was so minor it should not have been the tipping factor in selecting TriWest for the award. As a result, the Contracting Officer sustained the protest. Following the sustained agency-level decision in the UnitedHealth bid protest, TMA issued a series of amendments to the RFP in the West region and allowed offerors to submit revised proposals. After evaluating the revised proposals, TMA selected UnitedHealth for the contract award. After the award was made to UnitedHealth in the West region, TriWest—the other remaining competitor—filed a protest with GAO.
TriWest raised a number of issues involving TMA’s evaluation of proposals and source selection decision. GAO determined that none of these issues had merit and denied the bid protest for the following reasons.

Technical approach. TriWest contended that TMA did not give its proposal sufficient credit for its network provider discounts and that the discounts offered by UnitedHealth were overstated. GAO found no basis to question TMA’s evaluation of either offeror’s network provider discounts.

Past performance. TriWest challenged the past performance rating TMA assigned to UnitedHealth on several grounds, including the relevance of its past work and the scope of the past performance information TMA considered. TriWest also challenged its own past performance rating. GAO found no merit to these allegations.

Price/cost. TriWest asserted that TMA’s evaluation of UnitedHealth’s labor rates and its subsequent price realism analysis were based on outdated information. GAO concluded that TMA conducted a proper price realism analysis of UnitedHealth’s proposal.

Other. TriWest argued that TMA’s Selection Authority gave undue weight to some of the evaluation subfactors, even though the RFP said that all subfactors would be weighted equally. GAO found, however, that the Selection Authority properly relied on those subfactors thought to be key discriminators in selecting UnitedHealth for the award.

In addition to the contact named above, Marcia A. Mann, Assistant Director; Jacob L. Beier; Kathryn A. Black; Sarah C. Cornetto; Victoria C. Klepacz; Deitra H. Lee; Laurie L. Pachter; and William T. Woods made key contributions to this report.
DOD provides certain health care services through its TRICARE Program, which complements the health care services provided in military treatment facilities. DOD acquires these health care services through managed care support contracts (MCSCs) with private sector companies. As of October 1, 2013, DOD's Defense Health Agency is responsible for awarding, administering, and overseeing TRICARE's MCSCs; prior to this date, the TRICARE Management Activity (TMA) handled these duties. DOD's health care costs have more than doubled, from $19 billion in fiscal year 2001 to its fiscal year 2014 budget request of more than $49 billion. Senate Report 112-173, which accompanied a version of the National Defense Authorization Act for Fiscal Year 2013, cited concerns with the growth of DOD's health care costs and identified private sector health care contracts as an area for potential savings and efficiencies. The Senate report mandated that GAO review DOD's process for acquiring TRICARE's MCSCs.

This report examines: (1) TMA's acquisition process to award TRICARE's third generation MCSCs; (2) the extent to which issues were raised in the bid protests involving these MCSCs, including identifying any common themes; and (3) lessons learned from the acquisition process to award these MCSCs and how these lessons may be used in future acquisitions. GAO reviewed relevant federal statutes, regulations, policy documentation, and the bid protest decisions for TRICARE's third generation MCSCs. GAO also interviewed TRICARE officials about the acquisition process and lessons learned.

TMA used the acquisition process prescribed by federal regulations to acquire health care services for the TRICARE Program through the third generation of TRICARE's MCSCs. This process included a three-phased contract award process outlined in the figure below.
TMA policy also defined steps for the acquisition process beyond what was required in the federal regulations, including developing additional documentation and obtaining additional approvals from senior TRICARE acquisition officials. For example, peer reviews of the acquisition process are conducted and documented for certain DOD contracts, including TRICARE's MCSCs. TMA awarded a contract for each TRICARE region (North, South, and West), but challenges (bid protests) to the agency's award decisions were filed by unsuccessful offerors in all three regions. Of the six bid protests filed, three were sustained and three were denied. Following the resolution of the bid protests, the MCSCs in all three regions were awarded to a different offeror than was initially awarded the contract. The offerors who filed the bid protests cited various issues, most frequently TMA's evaluation of proposals. For example, four bid protests challenged TMA's evaluation of offerors' proposed network provider discounts, which are discounts of provider payment rates negotiated by offerors to reduce overall health care costs to the government. TRICARE acquisition officials said that sustained bid protests and TMA's implementation of corrective actions prompted them to identify lessons learned where changes could be made to improve the acquisition process for subsequent TRICARE MCSCs. Lessons learned included (1) improvements in communication and documentation to increase transparency during the evaluation of proposals and (2) increases to the length of the acquisition process to allow for more time to evaluate proposals and for the transition from one MCSC to another. TRICARE acquisition officials also said that some of these lessons have been applied in other contracting activities; however, they could not confirm which, if any, of these lessons will be incorporated into the acquisition process for the next generation of TRICARE MCSCs, scheduled for 2018. 
GAO requested comments from DOD on the draft report, but none were provided.
Established in 1967, NTSB is an independent nonregulatory agency. NTSB’s principal responsibility is to promote safety in various modes of transportation through accident investigation, special studies, and recommendations intended to prevent accidents. For the fiscal year ended September 30, 2000, NTSB was appropriated a total of $77 million for salaries and expenses, with 421 full-time equivalent employees. In April 1989, NTSB began using a Rapidraft system under which designated employees were authorized to use commercial draft payment instruments similar to blank checks. Using Rapidraft, approximately 175 authorized employees could make immediate payment to vendors or reimburse employees for purchases and travel claims up to $2,500. The stated purpose of Rapidraft was to eliminate the extra paperwork and processing time required to issue checks through the traditional Department of the Treasury process. During 1999, NTSB officials began seeing evidence of misuse of Rapidrafts, and concern arose about possible embezzlement. In August 1999, NTSB requested that the Department of Transportation’s Inspector General (DOT IG) conduct a review of the Rapidraft system. In September 1999, NTSB stopped using Rapidrafts. The DOT IG audit report, issued in November 1999, disclosed significant internal control weaknesses associated with the use of the Rapidraft system, including the lack of supporting documentation, poor security over unused Rapidraft instruments, and lack of review and reconciliation of processed payments. Concurrent with this review, the suspected fraud by two NTSB employees was investigated and confirmed, ultimately leading to the employees’ successful prosecution.
In response to the disclosure of internal control weaknesses surrounding its use of the Rapidraft system, NTSB, in April 2000, hired PwC to further review NTSB’s past use of the Rapidraft system and to conduct a broad-based review of NTSB’s current financial management processes and related internal controls. Also in April 2000, the House Committee on the Budget’s Task Force on Housing and Infrastructure held hearings on the widespread internal control weaknesses associated with NTSB’s use of the Rapidraft system and subsequently asked us to review internal controls over other selected areas of NTSB’s financial operations. Internal control represents an important and integral part of managing an accountable organization. Internal control consists of the plans, methods, and procedures designed and implemented by an organization to achieve its mission, goals, and objectives and to support performance-based management; represents an organization’s first line of defense in safeguarding assets and protecting against errors and fraud; and provides management with important assurance that an organization’s operating and administrative objectives are being achieved, namely, effective and efficient operations, reliable financial reporting, and compliance with applicable laws and regulations. Since the enactment of FMFIA, executive branch agencies have been required to annually assess the adequacy of their internal controls in achieving established control objectives. As required by the act, the Comptroller General has established standards that agencies must use in assessing their internal control. Also, in accordance with the act and guidance issued by the Office of Management and Budget, federal agencies are required to annually assess the adequacy of their internal controls and report to the President and the Congress on the extent to which their systems of internal control are achieving their intended objectives.
In May 2000, following the House Budget Committee’s Task Force on Housing and Infrastructure hearing on NTSB’s use of the Rapidraft system, the committee asked us to review internal controls related to other types of NTSB financial activities. Following discussions of the potential nature and scope of our review, we agreed with the committee staff that, because PwC’s ongoing review was examining NTSB’s internal controls in place at the time of PwC’s review (April and December 2000), our review would focus on selected aspects of NTSB’s fiscal year 1999 financial operations. We also agreed to monitor and consider PwC’s ongoing work at NTSB in reporting on the adequacy of NTSB’s overall internal controls. Specifically, we agreed to

determine, for selected payment types (travel, products and services, and nonroutine benefits), whether key NTSB internal controls applicable to fiscal year 1999 payments were designed effectively to provide reasonable assurance that assets were safeguarded against unauthorized use and that expenditures were made in accordance with management’s authority and applicable laws and regulations;

determine, for those fiscal year 1999 transactions selected for testing, whether NTSB complied with key controls; and

consider and report on the results of two ongoing PwC reviews at NTSB as they related to the adequacy of NTSB’s overall internal controls.
To accomplish these objectives we (1) gained an understanding of applicable policy and related laws and regulations, transaction processing, and supporting documentation, (2) identified key controls related to safeguarding assets and executing expenditures in accordance with management authority and laws and regulations, (3) reviewed, for a targeted selection of transactions from each payment type, the available supporting documentation and, as necessary, followed up with NTSB officials, and (4) concluded on whether key controls were effectively designed and, for those transactions reviewed, effectively implemented by NTSB management and staff. To monitor PwC’s ongoing work at NTSB, we discussed the nature, scope, and approach of PwC’s separate reviews with NTSB officials and PwC representatives, examined PwC’s results by reviewing its written reports, and identified those PwC results that, when considered in conjunction with our results, we considered relevant to the overall adequacy of NTSB internal controls. We conducted our review from June 2000 through May 2001 in accordance with generally accepted government auditing standards. Additional information on our scope and methodology is contained in appendix I. Our review of the design and, for those transactions tested, operation of key internal controls related to NTSB’s payments for travel, products and services, and nonroutine benefits revealed weaknesses in (1) the manner and extent to which policies effectively incorporated key internal controls and (2) the implementation and monitoring of those controls that were incorporated in agency policy. 
For the three payment types we examined, the nature and extent of these weaknesses were indicative of insufficient and/or ineffective management attention paid to ensuring that, during the period reviewed, (1) key internal controls were effectively designed into NTSB administrative policies and procedures and (2) employees and management effectively implemented their respective internal control responsibilities when initiating and approving payment transactions. The weaknesses we identified during our review impaired NTSB’s organizational accountability over payments for travel, products and services, and nonroutine benefits and exposed NTSB’s assets to possible misuse or loss. Presented below for each of the three payment types reviewed are the results of our consideration of the design of key controls, tests of NTSB implementation of key controls, and tests of NTSB managerial review and approval functions. In addition, we present an analysis of NTSB’s recent monitoring of and reporting on the adequacy of internal controls. Key internal controls must first be clearly documented in management directives and administrative policies and procedures. Our review of NTSB’s policy guidance applicable to the three payment types, which typically consisted of Board orders, management directives, and/or office memorandums, found that provisions for certain key internal controls were not clearly and consistently incorporated into the policy guidance. We also found that certain aspects of NTSB policies reduced the opportunity for effective internal controls over payment transactions. One of NTSB’s policies, the Alternative Home Base (AHB) rule, allowed Board members to use their government travel card and contract airfare rates for travel between a “place of abode,” a residence located outside the area of the traveler’s official duty station, and an official travel destination.
The rule requires the traveler to submit a constructive cost analysis to show that the cost to the government would be the same or less than traveling from the traveler’s official duty station. However, the policy did not clearly establish when and under what circumstances the AHB rule could be applied. Under the Federal Travel Regulation, government travelers are prohibited from using the government travel card and contract airfares for personal travel. Thirty-four of the 103 Board member travel vouchers we reviewed involved the use of their government travel card and contract fares for trips that included, in part, travel to or from their distant place of abode, as permitted by the AHB rule. For 12 of the 34 vouchers involving the AHB rule, we noted that the government travel card and contract fares were used for mixed-purpose trips (partially business and partially personal) or for segments of trips for which no valid business purpose was evident from the supporting documentation available at the time of our review. NTSB officials advised us that the rule was not intended to provide for the use of the government travel card and contract airfares for personal commuting between a Board member’s official duty station and his or her place of abode. Rather, the rule was intended to allow a Board member to originate and/or complete official travel at his or her residence instead of his or her official duty station, if doing so did not cost the federal government more. NTSB officials also stated that it was not appropriate to use the government travel card or contract airfare for personal travel. NTSB’s policy also provides for the use of annual travel orders, giving staff broad authority for travel related to accident investigations and, for senior management and Board members, authorization to attend conferences and to visit headquarters and field offices. 
The annual orders provided by NTSB for our review generally authorized the traveler to travel any time during the year without specific authorization of the purpose, destination, or estimated cost of each trip. According to NTSB officials, more than 75 percent of NTSB’s staff had annual orders for fiscal year 1999. While annual travel orders are permitted under the Federal Travel Regulation and their use is justified for those traveling with limited or virtually no advance notice for accident investigations, NTSB’s widespread use of annual orders largely negated the effectiveness of pretravel authorization as a control. In addition, NTSB’s policy on annual travel orders does not ensure that the traveler obtained advance authorization required by the Federal Travel Regulation for certain travel (e.g., travel to attend a conference or involving acceptance of payment for travel expenses from a nonfederal source). This lack of up-front authorization for most travel takes on greater significance at NTSB, as discussed later in this report, because of weaknesses in the review and approval function over travel payments we reviewed. According to NTSB officials, 23 of the 25 travelers for whom we examined at least one fiscal year 1999 travel payment had annual orders, but NTSB could provide annual orders for only 13 of the 23 travelers. According to NTSB officials, 142 of the 149 vouchers we examined were authorized by annual orders. Of these 142 reviewed trips, 105 were for non-accident-related travel by 16 travelers. As a result, travel arrangements for those trips, such as the (1) trip’s purpose and destination (including attendance at conferences and foreign travel), (2) itinerary and estimated cost, (3) possible use of indirect or interrupted travel or leave while on travel, and (4) possible acceptance of payment for travel expenses by nonfederal sources, were not reviewed and authorized as part of a trip-specific travel authorization.
While use of annual orders is permitted by the Federal Travel Regulation, the regulation also requires that “open” authorizations, such as NTSB annual orders, must contain, as a basis for adequate funds control, an up-front estimate of the total costs for travel being authorized. However, NTSB’s policy governing the use of annual orders did not require that the orders include the estimated costs of travel being authorized. Consistent with its policy, none of the 13 annual orders provided by NTSB contained estimates of the costs of travel covered by the annual orders, and the lack of cost estimates severely limited NTSB’s ability to exercise effective funds control over travel authorized by annual orders. NTSB policy governing the purchase of products and services required authorized officials to ensure, prior to incurring an obligation to procure products or services, that (1) funds were available and (2) the specific procurement was approved. However, while NTSB’s policy provided a standard form to document these two key controls for purchases greater than $2,500, the policy did not specify a mechanism for documenting these two actions for purchases of $2,500 or less. As a result, for 21 of the 31 payments we reviewed for products and services that were $2,500 or less, the supporting documentation lacked evidence of one or more of the required advance approvals; 14 had neither evidence of funds-availability approval nor procurement approval, 6 had no evidence of funds-availability approval, and 1 lacked evidence of the advance procurement approval. Purchases made without the approvals required prior to incurring the obligation exposed the agency to possible expenditure of funds in excess of appropriated amounts and inappropriate acquisition of goods and services. OPM performance award regulations require that awards based on a percentage of pay be computed only on a percentage of base pay and exclude the employees’ locality pay.
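The OPM rule just described reduces to simple arithmetic: a percentage-of-pay award must be computed on base pay alone, with locality pay excluded. The following is a minimal sketch using entirely hypothetical figures and a hypothetical helper function (nothing here is drawn from NTSB data), showing how applying the percentage to adjusted base pay instead inflates the payment:

```python
def performance_award(base_pay: float, pct: float) -> float:
    """Performance award computed per the OPM rule: a percentage of
    base pay only. Locality pay is deliberately not a parameter."""
    return base_pay * pct

# Hypothetical employee: $80,000 base pay, $12,000 locality pay, 3% award.
base_pay, locality_pay, pct = 80_000, 12_000, 0.03

correct = performance_award(base_pay, pct)
# The error described in this report: applying the percentage to
# "adjusted base pay" (base pay plus locality pay).
incorrect = (base_pay + locality_pay) * pct

print(correct)               # 2400.0
print(incorrect - correct)   # 360.0 overpayment per award
```

At a 3 percent rate, every award computed on the hypothetical adjusted base pay above would overpay by 3 percent of the locality pay amount.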
We found that NTSB’s policy applicable to fiscal year 1998 and 1999 performance awards directly contributed to individual performance award determinations being based on a percentage of the employee’s adjusted base pay, which consists of base and locality pay. In discussing the incorrect use of adjusted base pay, NTSB officials acknowledged the error in their guidance and advised us that the guidance had been corrected for the fiscal year 2000 award cycle. Also, during fiscal year 1999, NTSB, under certain circumstances, provided employees with advances and paid the employer and employee portions of federal health insurance premiums for student employees employed by NTSB while they were on leave without pay (LWOP) and attending school. NTSB had no policy guidance or operating procedures to require that the resulting amounts due back to NTSB be recognized as accounts receivable, controlled, and collected. Essential to effectively safeguarding NTSB assets is the need to recognize, control, and seek recovery of such advances. Details on the types of situations requiring recovery follow. Our review of 15 advances to employees totaling more than $15,000 paid out in fiscal year 1999 found little or no evidence that NTSB had taken any direct action to record, control, or collect the advances. At the time we began our inquiries into these amounts, 7 of the 15 had not been fully repaid, representing an unrecovered balance of more than $3,000. NTSB staff advised us that, with respect to the repayments that had occurred, employees were under the honor system. In reviewing NTSB records, we identified a student employee in LWOP status for a continuous 29-month period. According to NTSB staff, the agency paid the employee and employer shares of health insurance premiums for the entire period and made no attempt, prior to our review, to recover more than $650 owed by the student for the employee’s share of 12 months’ premiums. 
The amount owed by the student was limited to the employee’s share of premiums for 12 months because the student became ineligible for the coverage after 12 continuous months in LWOP status. NTSB continued paying for coverage after the student’s eligibility ended, resulting in improper payments of more than $4,800 in health insurance premiums during the student’s remaining 17 months in LWOP status. According to NTSB staff, NTSB had no policies or procedures related to monitoring the status of health insurance premiums paid by NTSB on behalf of student employees in LWOP status; halting premium payments after 12 months in LWOP status; or tracking, controlling, and recovering any related amounts owed to NTSB. According to an NTSB official, the student recently indicated that she was unaware that the benefit coverage continued during her LWOP status and did not use it. NTSB also plans to request a return of premium from the insurance provider for the student’s period of ineligibility. Without adequately designed internal controls that are clearly and unambiguously documented in management directives, the effectiveness of the entire control system is impaired and accountability is reduced. In addition, management is limited in its ability to assure that control objectives—effective and efficient operations, reliable financial reporting, and compliance with applicable laws and regulations—are being achieved. To compensate for design weaknesses that reduce the opportunity for control effectiveness, other internal controls related to the execution, review, and approval of transactions must play a greater role in assuring that funds are used in accordance with management’s authority and applicable laws and regulations. However, as discussed in the following sections, these other important controls were often ineffective for the transactions we reviewed. 
Key internal controls that have been incorporated into agency policy guidance must be followed consistently to be effective. For those transactions selected for testing from the three payment types, we found many instances in which key controls that were a part of NTSB’s established policies were not followed. The key control most commonly not followed was the development and maintenance of required supporting documentation. As a key control, supporting documentation provides evidence of transactions and compliance with related internal controls, and should be readily available for examination. For those transactions tested, we found that the supporting documentation required by NTSB policies was often missing or inadequate. We noted the following instances in which the supporting documentation, or lack thereof, provided evidence of noncompliance with key internal controls for transactions related to travel, purchase of products and services, and nonroutine benefits.

Thirty-two of the 149 travel payments we reviewed involved foreign travel. NTSB’s travel policy guidance requires written evidence of advance authorization for foreign travel. However, NTSB was unable to provide evidence of the required advance approval for 28 of the 32 foreign trips.

Forty-one of the 149 travel payments we reviewed claimed reimbursement for actual subsistence expenses in lieu of per diem, which, according to NTSB’s policy, required the approval of NTSB’s Chief Financial Officer. However, 5 of the 41 travel payments lacked evidence of the required CFO approval.

Eight of the 149 travel vouchers that were reviewed and approved for payment lacked an airfare, hotel, or car rental receipt, which was required by NTSB policy.

NTSB policy requires NTSB staff to specifically acknowledge the receipt of products or services prior to payment.
Our review of the supporting documentation related to 54 applicable purchases found that 45 of the 54 had no clear indication of the required acknowledgment by NTSB staff that the products or services had been received. NTSB policy requires that for purchases greater than $2,500, a Form 4400.1, Requisition for Supplies, Services and Shipments, be completed to support the prepurchase determination of funds availability and approval to use available funds for the purchase. Thirty of the 86 total payments we reviewed represented purchases of products and services for amounts greater than $2,500. For 6 of the 30 payments NTSB was unable to provide the Form 4400.1 or other evidence that the required approvals had occurred. In addition, for the 24 that had the required form, 2 lacked evidence of the required determination of funds availability and another 3 lacked evidence of the advance approval to use funds for the intended purpose. NTSB property management policies required the creation and maintenance of a central inventory control record for property that cost more than $200 and can be easily removed from the agency premises and an Individual Property Receipt form, which identified the employee responsible for custody of the inventoried item. For each of the four property items purchased during fiscal year 1999 that we tested for compliance with these requirements, no central inventory control record or individual property receipt could be located by NTSB. Each of the items—three laptop computers and one television set—was eventually located following an extended 2-month-long, agencywide search. NTSB officials blamed the lack of inventory records and property receipts on the fact that the required records were not maintained for property purchased by agency offices other than NTSB’s Contracting Office. 
NTSB’s policy related to performance and special act awards paid in fiscal year 1999 required managers to forward applicable supporting documentation (including, as appropriate, nomination memos or award recommendation forms, performance appraisals or award justifications, as well as evidence of review and approval of the awards) to NTSB’s Human Resources Division prior to paying the awards. During our testing of performance and special act award transactions, we requested supporting documentation for 40 performance awards and 41 special act awards paid to 26 and 24 employees, respectively, during fiscal year 1999. After reviewing official personnel files and conducting a search for the supporting documentation, NTSB was unable to provide sufficient documentation to support payment of 16 of the 40 performance awards and 6 of the 41 special act awards that we tested. For each of the four 1999 payments for Senior Executive Service bonuses (for 1998 performance) we reviewed, NTSB could not produce the supporting documentation required by NTSB policy and OPM regulations to justify or provide a basis for the bonuses. Statutes and OPM regulations related to the payment of awards, bonuses, and retention allowances generally require that amounts not be paid to an employee if (or to the extent that) the payment would cause the employee’s estimated aggregate compensation for a calendar year to exceed the Executive Level I compensation ceiling, which, for 1999, was $151,800. While NTSB policies do not provide specific guidance on how to apply the aggregate annual compensation limitation in determining the respective amounts that can be paid for awards, bonuses, and allowances, NTSB officials told us that they followed OPM regulations in applying the limitations to applicable NTSB employees.
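The aggregate limitation described above operates as a cap: an award, bonus, or allowance may be paid only to the extent that it fits under the ceiling once salary and all other payments known to be payable during the year are counted. The following is a minimal sketch of that check, using the 1999 Executive Level I figure cited in this report but otherwise hypothetical amounts and a hypothetical helper function:

```python
EXEC_LEVEL_I_1999 = 151_800  # 1999 aggregate compensation ceiling (from the report)

def allowable_payment(salary_estimate: float,
                      other_payments_known: float,
                      proposed_payment: float,
                      ceiling: float = EXEC_LEVEL_I_1999) -> float:
    """Cap a proposed award, bonus, or allowance so that estimated
    aggregate calendar-year compensation does not exceed the ceiling.

    All payments known to be payable during the year must be included
    in the estimate; excluding them (as this report found NTSB did)
    overstates the room remaining under the ceiling.
    """
    room = ceiling - (salary_estimate + other_payments_known)
    return max(0.0, min(proposed_payment, room))

# Hypothetical employee: $140,000 projected salary, $8,000 in awards
# already known to be payable, $20,000 retention allowance proposed.
print(allowable_payment(140_000, 8_000, 20_000))   # 3800
# Incorrectly omitting the $8,000 in known awards overstates the
# allowable amount:
print(allowable_payment(140_000, 0, 20_000))       # 11800
```

In this hypothetical, omitting known awards from the estimate inflates the allowable allowance by exactly the omitted amount, which mirrors the mechanism behind the overauthorized allowances described below.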
We used compensation data provided by NTSB to identify three employees (from the 36 employees for whom we reviewed nonroutine benefits) whose total compensation would have approached or possibly exceeded the Executive Level I compensation ceiling. Our test found that for each of the three, NTSB improperly projected the employee’s aggregate annual compensation, resulting in incorrect compensation payments and/or incorrect deferred award amounts. For one employee, our tests found a mathematical error in the computation of the employee’s estimated aggregate compensation for calendar year 1999 that resulted in the employee receiving paid compensation that exceeded the Executive Level I compensation ceiling for calendar year 1999 by $1,100. For the second employee, NTSB improperly authorized a retention allowance that was $14,226 higher than it should have been under applicable OPM regulations because it incorrectly excluded from the estimate of aggregate compensation award amounts known to be payable during 1999. For the third employee, we noted that, although NTSB properly applied OPM regulations in projecting the employee’s calendar year 1999 aggregate annual compensation, NTSB improperly authorized, early in calendar year 2000, a retention allowance that was $9,696 higher than it should have been under applicable OPM regulations because it incorrectly excluded award amounts known to be payable during 2000 from the estimate of aggregate compensation for 2000. In addition to weaknesses in the design of key internal controls, NTSB’s failure to comply with established key internal controls further compromised the effectiveness of NTSB’s internal control environment and its financial accountability over resources. Managerial review and approval is an important key control, one that is intended to provide oversight of control activities and to detect and address problems with individual transactions.
By its nature, managerial review and approval represents the last and best opportunity to detect and address inadequate supporting documentation and other control deficiencies. In addition to failing to identify and resolve the various instances of missing or inadequate documentation noted earlier, we found instances in which the managerial review and approval process failed to detect, in each of the payment types we tested, other inadequacies and deficiencies. Under NTSB’s travel policy, claims for reimbursement involving the Alternative Home Base rule or indirect travel must be accompanied by a constructive cost analysis. The analysis is intended to demonstrate that the reimbursement claimed for trips involving an alternative home base or indirect travel location was the same or less than the cost of traveling from or to the employee’s official duty station. However, of the 37 paid vouchers we reviewed involving either alternative home base or indirect travel trips, 25 of the vouchers approved for payment lacked the required constructive cost analysis to show that the government had not incurred any excessive costs. Our review of available documents identified several trips that might have had excess costs reimbursed. Under NTSB travel policy, signatures of the traveler, approving official, and certifying officer are required for the payment of a travel reimbursement claim. However, 41 of the 149 vouchers we examined lacked one or more of the three required signatures. Of the 41 paid vouchers that lacked required signatures, 2 vouchers did not have any of the three required signatures, 9 lacked two of the three required signatures, and 30 lacked signatures of either the approving official or the certifying officer. One of the basic determinations that should be made during the voucher approval and certification process for a travel reimbursement claim is that the amount claimed is correct. 
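That correctness determination reduces to two simple checks: the claimed per diem against the applicable maximum daily rate (e.g., the GSA per diem rate), and the voucher's arithmetic total. The following is a minimal sketch with hypothetical field names, rates, and amounts (none drawn from actual NTSB vouchers):

```python
def voucher_errors(line_items, gsa_rate, claimed_total):
    """Return a list of problems found on a travel voucher:
    per diem claims above the applicable daily rate, and a claimed
    total that does not match the sum of the line items."""
    problems = []
    for item in line_items:
        if item["type"] == "per_diem" and item["amount"] > gsa_rate * item["days"]:
            problems.append("per diem exceeds GSA rate")
    if abs(sum(i["amount"] for i in line_items) - claimed_total) > 0.005:
        problems.append("voucher totaled incorrectly")
    return problems

# Hypothetical voucher: 4 days of per diem plus airfare.
items = [
    {"type": "per_diem", "amount": 460.00, "days": 4},
    {"type": "airfare",  "amount": 310.00, "days": 0},
]
# At a hypothetical GSA rate of $110/day, 4 days allows at most $440,
# and the line items sum to $770, not the $780 claimed.
print(voucher_errors(items, gsa_rate=110.00, claimed_total=780.00))
# ['per diem exceeds GSA rate', 'voucher totaled incorrectly']
```

Both error types in this sketch correspond to the kinds of voucher errors described below that the approval and certification process failed to catch.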
However, the amounts approved and paid for 22 of the 149 vouchers we examined were incorrect. Of the 22, 16 were based on a per diem amount that exceeded the applicable General Services Administration (GSA) rates, 3 were totaled incorrectly, and 3 were based on per diem amounts that exceeded the applicable GSA rates and were totaled incorrectly. An official who was not authorized to certify payments signed as certifying officer on 52 of the 149 paid travel vouchers. NTSB policy stipulated that the Managing Director approve all purchases over $10,000 in advance for purchases not related to accident investigations and after the fact for purchases related to on-scene accident investigations. Of the 86 purchase payments we reviewed, 18 were greater than $10,000 and required the Managing Director’s approval, either in advance or after the fact. Of these 18, 11 lacked evidence of the Managing Director’s required approval. NTSB policy required that invoices or other appropriate supporting documentation accompany the approved voucher for the payment of products and services. Of the 86 payments for products and services we reviewed, 6 lacked adequate evidence of an invoice or other appropriate supporting documentation. For example, one of the six was a payment of more than $70,000 supported by an office purchase card billing statement showing only monthly totals for unpaid charges dating back to July 1996, as well as estimated interest and penalties of more than $6,000. For another one of the six, NTSB could not provide any supporting documentation. Eleven of the 86 payments for products and services represented claims for reimbursement of amounts paid initially by employees. Three of the 11 were approved for payment without the employee’s signature on the requests for reimbursement. As part of our review of special act awards for fiscal year 1999, we were given access to the official personnel files for recipients of 41 special act awards. 
In reviewing the files for one employee’s fiscal year 1999 special act award that had evidence of proper review and approval, we noted a breakdown in the review and approval of a subsequent special act award provided to the employee in January 2000. Following the Chairman’s approval of a $19,000 special act award, NTSB initiated a personnel action to process the award. However, NTSB’s system rejected the action because awards greater than $10,000 require specific review and approval by OPM. Instead of forwarding the award to OPM for approval, NTSB changed the personnel records that documented the $19,000 award, replacing it with two special act awards of $9,500 each and splitting the special actions that had supported the original $19,000 award into two separate justifications. In doing so, NTSB avoided the OPM-required review and approval for special act awards greater than $10,000. OPM regulations and NTSB policy for relocation bonuses generally provide that NTSB, in advance of the selection, must consider various recruitment-related factors and justify in writing that, without the bonus, it would be difficult for NTSB to fill the position with a highly qualified candidate. Our review of the one relocation bonus in our test of transactions showed that NTSB’s review and approval of the bonus was inadequate because the supporting documentation for the $10,000 relocation bonus did not address any of the factors or determinations required by the applicable policy. The memo supporting the bonus, which was approved by NTSB officials, justified the bonus solely on the basis that the relocated employee, who had accepted the position 12 months earlier, moved himself instead of requesting reimbursement for a permanent change of duty station. OPM regulations and NTSB’s policy on granting retention allowances require an annual recertification. 
Our review of 11 retention allowances found that two employees continued to be paid retention allowances without being recertified and approved on or before their respective annual recertification dates. As evidenced by the various problems of inadequate supporting documentation and ineffective review of transactions and compliance with agency control policies, NTSB’s managerial review and approval—an important detective control—was inadequate for many of the transactions we reviewed. Ineffective managerial review and approval impaired NTSB’s internal control environment and placed at risk management’s ability to assess compliance with key controls and to properly account for the financial activities of the agency. In addition, ineffective managerial review and approval can lead staff involved in executing transactions to think that no one is holding them accountable for complying with established policies and procedures. Given the internal control weaknesses associated with fiscal year 1999 payments, we inquired into NTSB’s recent efforts to assess and report on internal control effectiveness, which are required by OMB Circular A-123, Management Accountability and Control and FMFIA (31 U.S.C. §3512). Under A-123, agencies must systematically and proactively develop and implement controls, monitor and assess control adequacy, correct identified deficiencies, and report annually on the extent to which control objectives as of the close of the fiscal year were being achieved and on the existence of material weaknesses in agency controls. On the basis of responses to our inquiries and related supporting documentation provided by NTSB officials, we determined that the last assessment and reporting under FMFIA covered fiscal year 1998 and no assessment of and reporting on the adequacy of internal controls took place for fiscal year 1999. 
The only explanations offered for why management did not assess and report on the adequacy of fiscal year 1999 internal control were that (1) NTSB staff who had been involved in overseeing past efforts had been reassigned and (2) the Department of Transportation Inspector General’s 1999 review of problems with the Rapidraft system represented a review of internal controls. With respect to the adequacy of internal controls in fiscal year 1998, NTSB reported that its self-evaluation process included senior manager meetings with staff to review operations and identify potential vulnerabilities to waste, fraud, or mismanagement. NTSB’s senior management later evaluated the results of the staff reviews. This process culminated in the NTSB Chairman’s letter to the President (dated February 11, 1999) that stated that NTSB’s internal controls provided reasonable assurance that programs and resources were protected from waste, fraud, or mismanagement. In addition, the Chairman noted that because of the Board’s proactive approach to identifying and solving problems, he believed that NTSB had adequate accountability over the resources entrusted to NTSB’s care. With respect to fiscal year 2000, NTSB’s response to FMFIA’s assessment and reporting requirements was limited to the Chairman’s December 28, 2000, letter to the President, which noted that PwC was conducting a complete independent evaluation of NTSB’s financial controls and that a final report was expected early in 2001. The Chairman observed that, as a result, a management assessment of controls required by the act would be premature and also referred to the recent enactment of legislation, discussed later in this report, designed to strengthen NTSB’s financial accountability. However, as of August 13, 2001, NTSB management had not reported on its assessment of the adequacy of its fiscal year 2000 internal controls as required by FMFIA. 
Without a reliable and comprehensive process for monitoring and reviewing internal control adequacy, management does not have the information needed to assess and report on the extent to which control objectives are being achieved and whether there is proper accountability over the resources of the agency and compliance with laws and regulations. The results of two recent PwC reviews, involving different aspects of NTSB’s financial operations and related internal controls, (1) confirmed many of the control weaknesses associated with NTSB’s Rapidraft use that were previously reported by the DOT IG and (2) disclosed wide-ranging internal control weaknesses associated with NTSB’s 2000 financial activity. While PwC’s report on the results of its review of NTSB’s fiscal year 2000 internal controls acknowledged that–in the aftermath of the serious Rapidraft problems–NTSB management had taken certain actions to demonstrate attention to internal control, it also pointed out that successfully establishing effective internal control depends on senior management’s visible leadership and endorsement and their willingness to hold all employees accountable for failure to follow established internal control policies and procedures. In this regard, PwC’s report noted that changing the control atmosphere at NTSB will be difficult and that to do so successfully, senior management must set the tone for change—one that acknowledges the need for strong financial controls throughout the agency. In light of the documented instances of fraud related to Rapidraft abuses, NTSB engaged PwC to perform a “forensic accounting investigation” to determine if similar instances of inappropriate payment activity had occurred prior to NTSB’s termination of the Rapidraft system in September 1999. 
At NTSB’s request, the investigative procedures applied by PwC were designed to identify transactions, from a review of available documentation, that appeared questionable and in need of follow-up by appropriate authorities. PwC’s review methodology was based on an examination of documents retained in NTSB’s Office of the Chief Financial Officer. To facilitate its review, PwC obtained available supporting documentation and had the relevant information entered into an automated database that it analyzed for indications of questionable transactions. In all, PwC’s investigative procedures were applied to more than 10,600 Rapidrafts, totaling more than $5.2 million, that were written by NTSB employees from July 1998 through October 1999. PwC reported that, based on the procedures it performed, it did not identify any specific (additional) instances of questionable transactions. However, PwC recommended that NTSB review various transactions for which PwC could not determine the validity. More specifically, following its initial review and data entry procedures, PwC identified 688 additional payment transactions for which there was no supporting documentation on file. PwC reported that following an agencywide search, in March 2001, NTSB located missing files for 630 of the 688, leaving 58 payment transactions issued by 21 individuals for which no documentation could be located. Accordingly, because PwC’s scope consisted of a review and analysis of supporting documentation available from NTSB, PwC reported that it could not determine the appropriateness of the 58 payments and recommended that NTSB follow up with the employees involved. In addition to the payments with no supporting documentation on file, PwC identified other transactions for which there were gaps in the supporting documentation. PwC reported that, in some of these cases, it was able to reasonably determine the propriety of the payment transactions with the documentation that was available. 
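The documentation screen PwC applied to its automated database amounts to a reconciliation of issued payments against payments with files on record. A minimal sketch of that kind of check (the function name and record identifiers are illustrative, not PwC’s actual tooling):

```python
def undocumented_payments(payment_ids, documented_ids):
    """Return payment IDs for which no supporting documentation is on
    file -- i.e., the transactions that would require follow-up with
    the employees who issued them."""
    return sorted(set(payment_ids) - set(documented_ids))

# Illustrative data: four payments issued, files located for two.
flagged = undocumented_payments(["R1", "R2", "R3", "R4"], ["R2", "R4"])
```

Rerunning such a check after an agencywide document search, as NTSB performed in March 2001, would shrink the flagged set to only those payments whose documentation still cannot be located.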
With respect to transactions for which NTSB could not locate some or all of the supporting documentation during PwC’s investigation, PwC recommended that NTSB continue its attempt to locate as many of the missing documents as possible. For those transactions for which the documentation cannot be located, PwC’s report noted that a discussion by NTSB with the check writer and/or the payee might be necessary to determine the legitimacy of the disbursement. PwC also made 17 additional recommendations related to specific transactions for which NTSB should conduct additional follow-up. With respect to the lack of internal controls applicable to NTSB’s use of Rapidrafts, PwC’s work found many of the previously disclosed weaknesses associated with the Rapidraft system including (1) inadequate safeguarding and access controls related to physical custody of blank Rapidraft instruments, (2) lack of managerial reviews and approvals for a large number of payments, and (3) lack of supporting documentation. Concurrent with its review of NTSB Rapidraft transactions, PwC was engaged to conduct a broad-based review of NTSB’s internal controls applicable to the financial management processes it had in place during 2000. The review is discussed below. In April 2000, NTSB management engaged PwC to perform a comprehensive review of NTSB’s internal controls applicable to its current financial management processes. PwC’s January 2001 report on the results of its review disclosed a wide range of internal control weaknesses. Specifically, PwC identified weaknesses related to the completeness and clarity of financial policies, recording and reviewing transactions, segregation of duties, and reporting on budget execution. 
In light of the weaknesses identified, PwC concluded that NTSB is “exposed to significant risk of financial loss.” PwC conducted its review by applying the Committee of Sponsoring Organizations’ framework for evaluating internal controls, Internal Control–Integrated Framework. Specifically, PwC reviewed NTSB’s existing policies and procedures, activities, and records; interviewed NTSB managers and staff; and reviewed selected transactions. The review, conducted from April through December 2000, covered NTSB financial activities associated with procurements, disbursements, payroll, asset and receipts management, budget planning and execution, and financial reporting and systems. In making more than 50 recommendations in response to the control weaknesses identified, PwC noted several cross-cutting themes associated with the need to improve NTSB internal controls, including those noted below. Greater Attention to and Awareness of Financial Policies. PwC concluded that NTSB needed to establish policies governing various financial activities including accounting for and controlling fixed assets and various types of receipts owed to NTSB and to update and clarify existing policies including procurement and use of agency credit cards. PwC also noted that once the financial policies are updated and clarified, NTSB staff need training on their application and enforcement. Finally, PwC noted that NTSB management must demonstrate, through review of activities and compliance audits, that the financial policies will be enforced. Improved Recording of Transactions. PwC concluded that NTSB needs to record selected transactions when the financial information needed to record and track the transactions becomes known to NTSB. 
PwC based this conclusion, in part, on the fact that NTSB had not been recording certain types of transactions (including amounts owed by others to NTSB and automatic charges, known as OPAC charges, made by other agencies against NTSB’s funds) until well after the transactions or their underlying economic events had occurred. In addition, PwC noted that NTSB needs to improve controls over the tracking of invoices pending approval and the payment of approved invoices to help ensure that all invoices are paid in a timely manner. Finally, PwC noted that NTSB needs to strengthen procedures to better identify and more timely record the purchase of fixed assets. Proper Review of Transactions. PwC concluded that NTSB needs to establish new and strengthen existing processes for managerial review of transactions involving training requests and the use of travel and office purchase cards. Also, controls should be strengthened over the approval of requisitions and the processing of disbursements. PwC observed that a separate review of selected travel vouchers from the first half of fiscal year 2000, conducted at NTSB’s request by another organization, found policy noncompliance on approximately 40 percent of the vouchers reviewed, involving 10 percent of the amounts claimed, evidencing a serious lack of managerial review, understanding, and enforcement of NTSB’s travel policies. Strengthened Segregation of Duties. PwC noted a number of instances in which NTSB needed to take action to address inadequate segregation of key duties and responsibilities. 
Specifically, PwC’s review found that (1) an individual responsible for controlling access to the personnel system also had the ability to modify payroll time sheets, (2) certain management level staff were able to initiate and approve their own requisitions (up to $10,000) for products and services in the financial management system, and (3) an individual who authorized new purchase (credit) cards and convenience checks also received the new cards and checks when they were issued by the vendor. Improved Budget Execution Reporting. PwC concluded that NTSB needs to take a series of actions designed to improve its accounting and reporting on its use of funds (budget execution and reporting). PwC noted that NTSB had not recorded all of its transactions in its accounting system, which contributed, in part, to NTSB’s failure to prepare and submit its budget execution reports as required by OMB. In addition, PwC noted that NTSB did not have the ability to generate comprehensive budget execution data and reports from its automated system, further inhibiting NTSB’s ability to monitor its status of funds. In addition to the more than 45 recommendations related to the specific internal control weaknesses they identified, PwC made several important broad-based recommendations for management action. These recommendations were intended to address the need to (1) revise, update, and refocus NTSB policies to ensure that management’s directives are carried out and, once disseminated, that policies are monitored for effectiveness, (2) adequately train and/or reeducate NTSB employees to better appreciate their responsibilities under the policies, (3) hold all employees accountable for failing to follow established controls, and (4) provide visible management leadership and endorsement for establishing internal controls. 
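The segregation-of-duties conflicts PwC identified, such as management-level staff able to initiate and approve their own requisitions, lend themselves to a simple automated screen. A hedged sketch (the transaction fields shown are assumed for illustration and are not drawn from NTSB’s financial management system):

```python
def segregation_violations(transactions):
    """Flag transactions in which the same individual both initiated
    and approved the action -- a segregation-of-duties conflict of the
    kind PwC found in NTSB's requisition process."""
    return [t["id"] for t in transactions
            if t["initiator"] == t["approver"]]

# Illustrative requisition records.
requisitions = [
    {"id": "REQ-01", "initiator": "smith", "approver": "jones"},
    {"id": "REQ-02", "initiator": "smith", "approver": "smith"},
]
conflicts = segregation_violations(requisitions)
```

Such a screen only detects conflicts after the fact; the durable fix PwC recommended is to configure the system so that one person cannot hold both roles on the same transaction.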
In making its recommendations, PwC observed that the changes needed in NTSB’s internal control environment will be difficult, given the crisis-driven nature of NTSB’s mission and the erosion of internal controls that has occurred in recent years. PwC further observed that to accomplish this change NTSB’s senior management must set a tone for change—one that recognizes the need for strong controls, supports the changes needed to establish these controls, and demonstrates that all staff will be held accountable for their control responsibilities. The National Transportation Safety Board Amendments Act of 2000, enacted November 1, 2000, includes various provisions designed to strengthen NTSB’s financial accountability. Specifically, the act provides for a statutory Chief Financial Officer reporting directly to the Chairman on matters of financial management and budget execution; a Board-approved budget for non-accident-related travel expenditures of Board members, the submission of the budget to congressional oversight committees, and an annual report detailing the non-accident-related travel and expenses by Board members; the establishment of comprehensive internal controls for its financial programs based on findings and recommendations resulting from a review of NTSB’s internal controls conducted by PwC; and review by the Inspector General of the Department of Transportation of the financial and property management and business operations of NTSB, including internal accounting and administrative controls systems, to determine whether they comply with applicable laws, rules, and regulations. 
According to NTSB officials, the agency has taken the following actions in response to this legislation: designated a Chief Financial Officer who reports directly to the Chairman; established a budget for non-accident-related travel for Board members and submitted it to the congressional oversight committees (and when due, plans to issue the required reports); developed, and is implementing, a corrective action plan based on the recommendations made by PwC; and arranged for the DOT IG to begin audit and review activities over NTSB business operations. Our review of the design and operating effectiveness of NTSB’s internal controls found that basic safeguards necessary to protect against fraud, waste, and mismanagement were lacking regarding the payments for travel, products and services, and nonroutine benefits that we selected. Certain control-related policies and procedures were poorly designed and NTSB staff often did not follow those that were properly designed. Also, the control design and compliance problems were compounded by the lack of effective review and approval functions, resulting in impaired organizational accountability for agency resources. NTSB management’s failure to monitor and report on the adequacy of its internal controls as required by FMFIA and OMB Circular A-123 for fiscal years 1999 and 2000 further evidenced the deterioration of NTSB’s internal control environment. Failure to monitor the adequacy of internal controls precluded NTSB management from having information it needed to assess whether control objectives were being achieved and whether it had proper accountability over agency resources and was in compliance with laws and regulations. 
While the scope and focus of our internal control–related review and those of PwC were different, the control weaknesses identified by these reviews were indicative of insufficient and/or ineffective management attention to building and maintaining a sound internal control environment at NTSB during the periods reviewed. With the results of these reviews and the requirements imposed by recent legislation, management has the opportunity and responsibility to fundamentally change NTSB’s organizational commitment to internal control and take the actions necessary to build an appropriate control environment—one in which the acceptance of and adherence to efficient and effective internal control represents an important element of NTSB’s management and operating culture. NTSB’s management has already taken some positive steps to this end and has expressed a commitment to address the problems identified by PwC and us. NTSB now needs to be vigilant in ensuring that all necessary actions for improving NTSB’s internal control are fully implemented. To aid NTSB in building an effective internal control environment and addressing the specific weaknesses identified during our and PwC’s reviews, we recommend that the Board and Managing Director or their designees take the following actions. Regularly monitor implementation of the corrective actions planned by NTSB in response to each PwC recommendation. Ensure that NTSB management fully and consistently carries out its responsibilities under OMB Circular A-123 and FMFIA to develop and implement effective controls, monitor and assess control adequacy, correct control deficiencies, and report annually on the adequacy of controls and the existence of material weaknesses in agency controls. Comprehensively review and, as necessary, revise administrative policies and procedures to ensure that they incorporate–in a clear and unambiguous manner–sufficient controls to ensure that management’s control objectives are being achieved. 
Specifically, ensure that policies and procedures clearly and unambiguously specify the nature and extent of supporting documentation required for each type of payment transaction and define the roles and responsibilities of individuals responsible for initiating, processing, reviewing, and approving transactions for payment. Ensure that management and staff are properly trained in the internal control–related provisions of all applicable policy guidance. This training should specifically cover each employee’s internal control–related responsibilities in initiating, processing and, to the extent applicable, reviewing and approving each type of transaction; safeguarding assets; and complying with applicable laws and regulations. Institute a regular and comprehensive process for monitoring the performance of those responsible for initiating, processing, reviewing, and approving each type of transaction and the adequacy of related supporting documentation. Ensure that those responsible for each function are held accountable for carrying out their responsibilities. Clarify travel policies to specifically prohibit the use of government travel cards and contract airfares for personal travel of any type. For all Alternative Home Base rule and indirect travel occurring since the start of fiscal year 1999, ensure that constructive cost analyses are prepared from appropriate supporting documentation and determine whether any reimbursements exceeded the amounts permitted by NTSB policy. Pursue recovery of any excess travel reimbursements or, if recovery of an excess reimbursement is not sought, document the authority, basis, and rationale for the decision. Minimize, to the extent practical, use of annual orders to authorize travel. 
Specifically, consider restricting the use of annual travel orders to those trips for which the exigencies of crash investigations or other emergencies make it impossible or impractical to obtain advance, trip-specific, supervisory review and approval of all pertinent travel provisions. Ensure that all annual travel orders include estimated costs of the travel being authorized by the annual travel order as required by the Federal Travel Regulation. Establish formal requirements for uniformly documenting the prepurchase determination of funds availability and approval to use available funds for the purchase of products or services costing $2,500 or less. Identify each item of NTSB’s existing accountable property, the accountable office, and the accountable employee through physical inventory and a review of accounting records, and ensure that information about each item of accountable property is entered in the property control record in accordance with NTSB policy. Ensure that all control information required by NTSB’s property management policy is recorded in the property control record for each new accountable property item that is acquired. Review and revise policy guidance applicable to performance awards to ensure that it clearly and accurately documents the basis or bases on which managers are permitted to determine employee performance awards. In so doing, ensure that the revised policy guidance complies with applicable OPM regulations. Establish specific procedures, in accordance with applicable OPM regulations, for determining estimated aggregate annual compensation, award and bonus amounts (both paid and deferred), and amounts eligible for retention allowances. Use these procedures to review calendar year 1999, 2000, and 2001 determinations of aggregate annual compensation, award and bonus payments and deferrals, and retention allowance payments for all applicable NTSB employees. 
Determine and document what action is to be taken to correct for any payments or deferred amounts that exceed amounts allowable under statute and OPM regulations (e.g., pursue repayment, reduce deferred amounts carried forward, and/or suspend retention allowance payments). Establish and document policies that ensure that all amounts owed to NTSB–including those associated with the payment of salary advances and insurance benefit premiums for student employees in LWOP status–are identified, tracked, controlled, and collected on a timely basis. Review past employee advance transactions to identify those that have not been repaid and pursue collection of the amounts owed. We provided a draft of our report to NTSB for its review and comment and met with the Chief Financial Officer and other NTSB officials to discuss the draft and obtain their oral comments. These officials generally agreed with our findings, conclusions, and recommendations. In commenting on the draft report, NTSB officials told us that NTSB has taken multiple actions, since the period covered by our report, to improve and strengthen internal controls, including strengthening requirements for the submission of required documentation needed to support awards and bonuses. In addition, with respect to NTSB’s reporting on the adequacy of its internal controls for 2000, NTSB officials said that they believe the Chairman’s letter to the President in December 2000 satisfied NTSB’s requirement, under FMFIA, to report on the adequacy of its internal controls. We do not agree with NTSB on this matter, as the Chairman’s letter did not include management’s assessment of fiscal year 2000 internal controls that is required by FMFIA. While the Chairman’s letter noted that an assessment by management would be premature because an internal control evaluation by an independent public accounting (IPA) firm was ongoing, NTSB has not reported on its assessment since the IPA report was issued in January 2001. 
NTSB officials also provided comments of a technical and/or editorial nature. As appropriate, we have revised our report to incorporate those comments. We are sending copies of this report to the Ranking Minority Member of the House Budget Committee. In addition, we are sending copies to Members of the National Transportation Safety Board, the Office of Management and Budget, the Inspector General of the Department of Transportation, and the public accounting firm of PricewaterhouseCoopers, LLP. This report will also be available on GAO’s home page at http://www.gao.gov. Please call me at (202) 512-9508 if you or your staffs have any questions. Major contributors to this report are listed in appendix II. To achieve our objectives related to the design of and compliance with key controls applicable to the three payment types, we used GAO’s sensitive payments framework. Key elements of the framework include understanding applicable internal controls, identifying and testing key controls, and, for those transactions selected for testing, assessing whether key controls were followed. As requested by your offices and consistent with the sensitive payments framework, we targeted our selection of transactions for testing by including, to the extent applicable, transactions involving payments to or for the benefit of NTSB’s Board members and their staff, senior management, and other NTSB staff. To assess the design of key internal controls, we considered the NTSB control environment, identified key controls and related laws and regulations, and assessed whether, if effectively implemented, key controls would achieve their intended objectives. 
Specifically, we reviewed applicable NTSB policy guidance (Board orders and office memorandums) for the three payment types, applicable provisions of the Federal Travel Regulation, the Federal Acquisition Regulation and regulations issued by the Office of Personnel Management, and Standards for Internal Control in the Federal Government. We also discussed with NTSB officials the agency’s policies and procedures pertaining to the three types of transactions that we reviewed and applicable laws and regulations. For each of the three payment types that we reviewed, we identified internal controls that we considered key to providing reasonable assurance that assets are safeguarded from unauthorized use, loss, or misappropriation and that expenditures are executed in accordance with management authority and applicable laws and regulations. For travel, the key controls we identified included travel authorization, supporting documentation, and voucher approval. For purchases of products and services, the key controls we identified included purchase authorization (funds control and approval to purchase), receipt and acceptance of goods, review and approval of payment vouchers and supporting documentation, and property controls. For nonroutine benefits, the key internal controls we identified included recommendation and justification of personnel action and review and approval. Because the underlying nature of individual transactions varied among the three payment types, different combinations of key controls were applicable to different transactions. Based on our understanding of the design and operation of key controls, we drew preliminary conclusions on the design of key controls. Following our review of compliance with key controls, we reconsidered our initial conclusions on the design of key controls. 
To determine, for those transactions selected for testing, whether NTSB complied with key internal controls, we reviewed supporting documentation; pursued, as necessary, any open issues by making follow-up inquiries and requesting additional information of NTSB officials; and concluded on whether the objectives of key controls were achieved and laws and regulations were followed. We selected payment transactions for testing of travel and purchases of products and services from an electronic database file of fiscal year 1999 payments that was provided to us by NTSB. We selected transactions for testing of nonroutine benefit payments from electronic database information that included all NTSB fiscal year 1999 bonus, award, and allowance payments by employee, and NTSB fiscal year 1999 annual compensation rates by employee, which were provided to us by NTSB. We did not perform an assessment of the accuracy and completeness of the electronic database information provided to us by NTSB. In selecting transactions for testing, we intended, to the extent applicable, to target our selection to transactions that were to or for the benefit of Board members and their staff, senior management, and other NTSB staff. Upon making initial test transaction selections, we reviewed all available supporting documentation for those transactions. When selecting additional transactions for testing, we considered internal control weaknesses identified during our review of the transactions initially selected. For travel, we selected a total of 149 payments for travel by 25 individuals. For purchases of products and services, we selected a total of 86 payments. The electronic database information provided to us by NTSB for purposes of identifying transactions for testing did not include information sufficient to allow us to target expenditures to third parties that might benefit any specific employee. 
For nonroutine benefits, we selected 36 employees having a total of 99 payments for performance and special act awards, recruitment and relocation bonuses, and retention allowances. For the selected test transactions, we reviewed all available supporting documentation provided by NTSB for evidence that NTSB complied with key internal controls. We followed up, as necessary, with NTSB officials to obtain further clarification and additional information on selected transactions. Specifically, for selected test transactions from each of the three transaction types, we reviewed all available documentation including the following. Travel--travel vouchers, receipts, itineraries, constructive cost analyses, travel orders, and payment instrument copies. Purchases of products and services--requisition forms, purchase orders, transaction correspondence, invoices, receipts, and payment instrument copies. Nonroutine benefits--Requests for Personnel Action and other documents supporting the recommendation, justification, and approval of specific award, bonus, and allowance transactions that were contained in the Official Personnel Folder (OPF) or Employee Performance Folder (EPF) maintained for each employee or were maintained elsewhere and were located by NTSB staff. Because certain nonroutine benefit payments for awards, bonuses, or allowances were subject to calendar year aggregate compensation limitations and possible deferral provisions, we also reviewed award, bonus, and allowance transactions made during calendar year 1999, the preceding year, and the succeeding year for those selected employees that, according to our calculation, had aggregate calendar year 1999 compensation that approached the annual aggregate limitation on pay. We concluded, based on the supporting documentation provided, whether the individual transactions tested complied with NTSB’s key internal controls. 
Because the nature of individual transactions varied within one payment type, not all payments of one type were subject to the same set of key controls. Test results were therefore limited to the key controls applicable to individual transactions, and the test universe differed between various key controls within one transaction type. The nature and scope of our procedures, including the targeted selection of transactions for testing, were not sufficient to provide an opinion on internal control related to the payment areas we reviewed, nor would they disclose all weaknesses. Because of the inherent limitation in any system of internal control, errors or irregularities may nevertheless occur and not be detected. Also, projection of any evaluation of controls to future periods is subject to the risk that control procedures may become inadequate because of changes in conditions or that effectiveness of the design and operation of control policies and procedures may deteriorate. To determine whether the results of PwC’s ongoing reviews should be incorporated into our report, we gained an understanding of the nature and scope of both reviews through discussions with NTSB officials and PwC representatives. We reviewed PwC’s reports and considered their findings, conclusions, and recommendations, and discussed with NTSB officials and PwC representatives the scope, methodology, and results of PwC’s work. We determined those findings and conclusions that were relevant to the nature and scope of our review, particularly those related to the adequacy of NTSB’s internal control environment. We conducted our review from June 2000 through May 2001 in accordance with generally accepted government auditing standards. In addition to the individual named above, Carol Browder, Marian Cebula, Dave Engstrom, Jeff Jacobson, Jack Warner, and Greg Ziombra made key contributions to this report. 
The National Transportation Safety Board (NTSB) promotes transportation safety through accident investigations, special studies, and recommendations intended to prevent accidents. Separate reviews at NTSB by PricewaterhouseCoopers, LLP (PwC) and GAO found significant shortcomings in the design and operation of NTSB's internal controls during 1999 and 2000. These deficiencies indicated insufficient or ineffective management attention to establishing and maintaining an effective system of internal control over financial management operations. The resulting weaknesses exposed the agency to waste, fraud, and mismanagement. Some basic controls were not always clearly and consistently incorporated into NTSB policies and procedures, and, in some cases, the written policies were ambiguous and contributed to possible improper transactions. Furthermore, NTSB's payment review and approval process--the last and best opportunity to detect and address inadequate documentation and other policy violations prior to payment--was often ineffective. Separate reviews of different aspects of NTSB's 1999 and 2000 financial activities and related internal controls, done by PwC at NTSB's request, documented various internal control weaknesses, including problems with the completeness and clarity of policies, the recording and review of transactions, and the tracking and reporting of its use of funds.
Responding to the massive and expensive failure of federally insured savings and loans in the mid-1980s, Congress and the administration examined government-sponsored enterprises to ensure that they were operating in a manner that protected the taxpayers’ interest and minimized the risk that these entities would fail. Prior GAO reports noted that the enterprises’ ties to the government had weakened private market discipline to the point that creditors believed the government would likely assist an enterprise through financial difficulties, even though the government has no legal obligation to do so. This expectation is not without foundation; in 1987, Congress approved $4 billion to help the ailing Farm Credit System. According to a report from the Senate Committee on Banking, Housing, and Urban Affairs, this perception is based on the structure and privileges of the enterprises and the special treatment of their securities. The enterprises were created by statute, are exempt from state and local income taxes, and each has a line of credit with the Treasury of $2.25 billion. Their securities are issued and paid through the facilities of the Federal Reserve Banks and are eligible for purchase by the Federal Reserve, for unlimited investment by Federal Reserve member banks and federally insured thrifts and as collateral for public deposits of the U.S. government and most state and local governments. As a result, the enterprises’ debt trades at yields only marginally above those on Treasury debt of comparable maturity. This implicit credit enhancement allows the enterprises to operate with relatively little capital. The enterprises’ federally established charters set out specific purposes that they are to address. They are to provide stability in the secondary market of residential mortgages, respond appropriately to the private capital market, and provide ongoing assistance to the secondary market for residential mortgages. 
They are also to promote access to mortgage credit throughout the nation by increasing the liquidity of mortgage investments and improving the distribution of investment capital available for residential mortgage financing. For these reasons, the government has an interest in seeing that these enterprises are managed and operated in a prudent and financially sound manner. Both enterprises have been highly profitable for several years, but Congress recognized that both may not always remain as profitable and well managed as they are today. Consequently, Congress reformed the enterprises’ regulatory structure and established capital requirements by passing the Federal Housing Enterprises Financial Safety and Soundness Act of 1992. According to the Senate report accompanying the bill, the capital provisions are designed to ensure that the enterprises are financially sound. The primary emphasis of these provisions is on a risk-based capital standard that reflects risks the enterprises assume. The new regulatory structure was designed to address concerns over HUD’s inadequate regulation and supervision of the enterprises. The act established OFHEO as an independent entity within HUD. The act reserved for the Secretary of HUD certain authorities relating to HUD’s responsibility to oversee the enterprises’ efforts to meet housing goals. But the act also clarified that the duty to ensure that Fannie Mae and Freddie Mac are adequately capitalized and operate in a safe and sound manner, consistent with the achievement of their public purposes, belongs to OFHEO. Consequently, the act specifically delegated other responsibilities and authorities to the Director of OFHEO. OFHEO was intended to operate separately from HUD and be staffed with experts in financial management or financial institution oversight. OFHEO is under the management of a presidentially appointed and Senate-confirmed Director. 
The act provided the Director with numerous exclusive authorities (i.e., without the review and approval of the Secretary of HUD), such as powers to examine the operations of the enterprises, determine capital levels, assign the enterprises capital classifications, and take certain enforcement actions. The act also gave the Director exclusive authority to manage OFHEO, including establishing and implementing annual budgets and hiring personnel. Thus, the Director leads and directs OFHEO’s activities by setting internal and external policy, managing overall operations, and serving as the chief spokesperson for the organization. As of February 1995, OFHEO’s six components reported to the Director and Deputy Director. Figure 1 illustrates OFHEO’s organizational structure. Three line functions are directly responsible for carrying out OFHEO’s mission: The Office of Examination and Oversight (OEO) is responsible for designing and conducting annual on-site examinations of Fannie Mae and Freddie Mac, as required by law, and performing additional examinations as determined by the Director. Research, Analysis and Capital Standards (RACS) is responsible for developing and implementing financial “stress tests,” which use interest rate and credit risk scenarios prescribed in the act to determine the enterprises’ risk-based capital requirements. This unit is responsible for conducting research and financial analysis on issues related to the enterprises’ activities, such as simulating Treasury yields and associated interest rate indices. The General Counsel has responsibility for preparing the regulations required by the act and advising the Director on legal issues, including financial institution regulatory issues, applicable securities and corporate law principles, and administrative and general legal matters. Three staff functions support OFHEO’s mission: Finance and Administration is responsible for ensuring that OFHEO has the infrastructure to function independently. 
This unit is to provide human resources management, budget formulation and execution, financial and strategic planning, contracting and purchasing, office automation, travel, records and document security, and related administrative support services. Finance and Administration is also responsible for developing annual budgets and serving as the liaison with the Office of Management and Budget. The Office of Chief Economist is responsible for providing and coordinating economic and policy advice to the Director on all issues related to the regulation and supervision of the enterprises. This office is also to direct and conduct research to assess the impact of issues and trends in the housing and mortgage markets on OFHEO’s regulatory responsibilities. The Office of Congressional and Public Affairs is responsible for handling public and press inquiries, briefing Members of Congress and staff on matters relating to OFHEO, monitoring legislative development, and bringing congressional concerns to the attention of the Director. OFHEO’s staffing has grown slowly but steadily since the Director, its first employee, was sworn in on June 1, 1993. At the end of fiscal year 1993, five other employees were on board. According to testimony from the Director in October 1993, recruiting specialized staff was slowed somewhat by OFHEO’s need to first do such fundamental things as obtain hiring authority, develop procedures, and obtain office space. By September 1994, however, OFHEO had 37 staff on board. And in February 1995, OFHEO’s staff had grown to 53. OFHEO’s budget projects filling 69 positions in fiscal year 1996. Table 1 compares the distribution of positions among OFHEO’s different units at the end of fiscal year 1994 with that projected for fiscal year 1996. Relative to other federal financial regulatory agencies, OFHEO is a small organization. 
In testimony before the House Banking Committee in October 1993, OFHEO’s Director spoke about the unique challenge facing OFHEO in regulating two entities as large, complex, and sophisticated as Fannie Mae and Freddie Mac. She compared OFHEO’s regulatory task with that of other federal financial regulatory agencies. For example, OFHEO’s fiscal year 1994 budget funded 45 positions to oversee the enterprises (1 employee for every $23.1 billion in regulated assets). As a point of comparison, 2,500 Office of Thrift Supervision (OTS) employees oversaw institutions with $800 billion in assets (1 employee per $0.3 billion in assets). To address that challenge, OFHEO’s philosophy was to create a lean and flat organization that would attract and retain a highly qualified, diverse staff with sophisticated financial, legal, and supervisory expertise. Most OFHEO expenses cover personnel and contractor services. For fiscal year 1996, OFHEO estimates that it will spend $7.9 million (53.3 percent of its $14.9 million total) on personnel services (i.e., expenses related to personnel compensation and benefits, but exclusive of contractors). According to OFHEO, it bases salaries and benefits on market rates for the required technical expertise comparably with those of other federal banking regulatory agencies. The second largest category of expenses (“Other services”) generally covers OFHEO’s contractor services. In fiscal year 1996, OFHEO expects to spend $4.4 million (29.7 percent) on specialized technical services associated with developing and maintaining its research capability and computer models, examination services, and specialized legal services. All other expenses constitute a smaller, but growing, proportion of OFHEO’s total obligations. These expenses cover such fundamental but crucial items as computer acquisition, travel, and rent, which fluctuate with changing numbers of staff and contractors on location. 
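The Director's comparison reduces to simple assets-per-employee ratios. As a quick back-of-the-envelope check of the figures cited above (the helper function and the implied total-asset figure for the enterprises are illustrative derivations from the numbers in the text, not figures stated in the report):

```python
# Oversight ratios from the Director's October 1993 comparison.
# Input figures come from the report text; everything derived below
# is a back-of-the-envelope illustration.

def assets_per_employee(total_assets_billions, employees):
    """Billions of dollars in regulated assets per oversight employee."""
    return total_assets_billions / employees

# OTS: 2,500 employees overseeing institutions with $800 billion in assets.
ots_ratio = assets_per_employee(800, 2500)   # ~0.3 ($ billions per employee)

# OFHEO: 45 funded positions, cited as 1 employee per $23.1 billion,
# implying roughly 45 * 23.1 = $1,039.5 billion in enterprise assets.
ofheo_implied_assets = 45 * 23.1

print(f"OTS: ${ots_ratio:.1f} billion in assets per employee")
print(f"OFHEO implied regulated assets: ${ofheo_implied_assets:,.1f} billion")
```

The ratios confirm the scale of the disparity the Director described: each OFHEO position covered roughly 70 times the asset base per employee that an OTS position did.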
Table 2 shows actual and estimated OFHEO obligations for fiscal years 1994 through 1996. Although OFHEO’s financial plans and forecasts are to be included in the budget of the United States and in HUD’s congressional justifications, it is not funded with tax dollars through the congressional appropriations process. Rather, the act requires the enterprises to pay annual assessments to cover OFHEO’s costs. OFHEO’s fiscal year 1996 budget estimated that it would obligate nearly $0.6 million less than was estimated for operations in fiscal year 1995. That estimate reflects a diminished reliance on other contractor services, especially in support areas, partially offset by increases in rent associated with the need for additional space and in personnel expenditures associated with four additional OFHEO positions. Because OFHEO continues to develop its major management systems, we limited the scope of our work for this report to the planning and initial implementation of those systems. As of February 1995, none of OFHEO’s major management systems had been completely implemented. Consequently, we did not evaluate their ultimate implementation or effectiveness. To develop our information, we interviewed OFHEO’s senior management and HUD officials and reviewed OFHEO management systems documentation, focusing on the policies and procedures relating to OFHEO’s human resource management and accounting and financial control systems. We reviewed changes in these plans and systems that OFHEO made over time. We also reviewed OFHEO’s plans for its organizational development, particularly its examinations and capital adequacy functions, along with its completed examinations of the enterprises and the advance notice of proposed rulemaking on the risk-based capital regulation. OFHEO provided written comments on a draft of this report. The comments are summarized on page 22 and reprinted in appendix I. 
We did our work at OFHEO and HUD in Washington, D.C., between September 1994 and February 1995. This review was done according to generally accepted government auditing standards. Before OFHEO’s Director was sworn in on June 1, 1993, OFHEO had no employees, no structure, no physical location, no policies or procedures, and no plan. Much has changed since then. OFHEO has made steady progress toward establishing administrative and financial management systems and controls designed to enable it to operate independently of HUD. OFHEO has nearly finished designing its human resource management system, worked with HUD to meet its initial financial and accounting needs, and begun exploring alternative arrangements for “cross-servicing,” through which OFHEO would contract with another agency to provide administrative and financial management services. According to its September 1994 operational workplan, OFHEO is basically on schedule developing and installing its infrastructure. OFHEO now faces the challenge of implementing its systems and resolving unforeseen difficulties. By February 1995, OFHEO had substantially completed the design of its human resource management system (called the Performance Evaluation Management System, or PEMS) and established milestones for pilot testing the system. OFHEO anticipates full implementation of PEMS at the beginning of fiscal year 1996. Rather than adopting HUD’s or another agency’s human resource management system, OFHEO developed its own. Based on recommendations of two consulting firms, OFHEO’s performance management system is intended to enable it to respond to changing regulatory needs and ease its developmental operations. 
PEMS’ broad pay band structure reflects three general design elements: (1) salaries to help OFHEO recruit technical expertise, (2) pay ranges to provide management flexibility to make pay decisions that reflect labor market pay while providing internal equity, and (3) occupational levels to be significantly and recognizably different. OFHEO classifies personnel by occupation and assigns them to one of seven pay bands. Occupations are based in part on the type of work done relative to OFHEO’s mission, the nature and subject matter of the work, and the fundamental qualifications required. Pay levels and ranges are based on comparisons with similar occupations in other federal financial regulatory agencies. An individual’s pay also depends on the complexity of the work, scope of responsibility, and supervisory content of the job. Changes in pay are to occur once a year, at the end of the review cycle. Individuals may receive pay increases within each pay band, unless they are at the band’s ceiling. Those staff meeting qualifications for other occupations and demonstrating the ability to perform the relevant duties and responsibilities may also be promoted. In 1994, pay could range from $15,000 to $135,000. In March 1995, OFHEO was preparing to implement PEMS. Having obtained official approval of PEMS from the Office of Personnel Management (OPM) on February 23, 1995, OFHEO planned to implement PEMS in April 1995. OFHEO will abbreviate PEMS’ initial performance cycle and begin regular, full-year cycles in fiscal year 1996. Compared to the progress made with PEMS, OFHEO has experienced greater difficulty in establishing its accounting and financial systems and controls. OFHEO has worked with HUD to meet its immediate needs, but OFHEO is still uncertain as to whether HUD can meet its accounting and financial systems requirements. 
As of February 1995, OFHEO was still considering either establishing its own systems or arranging for cross-servicing with another federal agency. OFHEO has established general requirements for its accounting and financial management system. According to OFHEO, the system must support all personnel and procurement components within an integrated automated accounting system and accommodate its classification and compensation system. In addition, the system must produce auditable financial statements. The system must maintain accounting records that accurately reflect expenses and income; provide monthly budget, accounting, and exception reports; and pay approved invoices. OFHEO has used HUD’s accounting and financial systems since its inception but experienced various difficulties. In the procurement area, for example, OFHEO used the services of HUD’s Office of Procurement and Contracts. According to OFHEO, HUD’s office was not staffed to provide the expedited procurement processing OFHEO’s start-up mode of operation required. OFHEO eventually hired its own procurement contracting officer and exercised independent contracting authority. To date, HUD has handled OFHEO’s payroll and other personnel processing actions. OFHEO inputs its time and attendance data directly but uses HUD’s contract with the National Finance Center in New Orleans to process its payroll. HUD has serviced OFHEO’s other personnel processing needs (such as processing tax forms, health benefits enrollments, and direct deposit forms), but not, according to OFHEO, without problems. During OFHEO’s early months of existence, HUD did not understand or appreciate the significance of OFHEO’s Schedule A hiring authority and its authority to make independent appointments. This added considerable time to the processing of personnel actions, since individuals unfamiliar with OFHEO’s exemptions subjected those actions to HUD’s own internal review procedures. 
According to OFHEO, this processing relationship smoothed out over time. Senior OFHEO officials expressed dissatisfaction with HUD’s accounting and financial management systems. For example, OFHEO experienced substantial delays in how HUD recorded OFHEO obligations and expenses. Additionally, HUD restricted access to its accounting system, limiting the ability of OFHEO’s finance and administration staff to retrieve their data to support budget formulation and financial management activities. HUD staff also confused OFHEO with another HUD agency, the Office of Fair Housing and Equal Opportunity (FHEO), causing them to mix up financial transactions and misdirect financial reports. OFHEO had procedures and controls in place that enabled it to detect such problems. Consequently, HUD’s errors were corrected and OFHEO ultimately was able to reconcile its fiscal year 1994 accounts. However, the lack of current accurate data during the year made it difficult for OFHEO to determine its unobligated balances on a monthly basis. In turn, this hampered OFHEO’s ability to notify the enterprises with certainty what the next semiannual assessment would be. Over time, OFHEO has recognized improvements in the data provided by HUD’s accounting system. Senior OFHEO officials told us that they have noted fewer errors and that HUD has been able to provide information in a more timely manner. According to the Director of Finance and Administration, OFHEO had a great degree of confidence in the accounting data on which it based its March 1, 1995, assessment of the enterprises. HUD officials acknowledged that some problems existed with the accounting system that processed OFHEO’s financial data. They told us that HUD is replacing the system that generated many of the problems, and OFHEO is to be switched to the new system later in fiscal year 1995. 
According to those officials, the new system should fully meet OFHEO’s needs and requirements, allowing OFHEO to access its financial data and generate its own unique reports. As of February 1995, OFHEO had not yet decided how it would meet its future administrative and accounting needs. OFHEO has not determined whether to continue using HUD systems, create and maintain its own, or contract those functions out to another federal agency. Initially, OFHEO had decided against devoting its own staff to these functions, believing that it would not be a cost-effective use of its limited resources. It approached nine other federal agencies, focusing on financial regulators because they have similar missions and compensation systems. OFHEO eventually determined that none of them would fully meet its needs. In some cases, OFHEO questioned the agencies’ capacity. In other cases, the agencies responded that entering into a cross-servicing arrangement would not be practicable or cost-effective. According to a senior OFHEO official, the biggest problem involved the particular systems OFHEO had implemented because of exemptions in its statute, such as that exempting OFHEO from certain provisions of Title 5 of the U.S. Code, relating to classification and General Schedule pay rates. Resolving the cross-servicing issue is a priority for OFHEO’s Director of Finance and Administration. Because OFHEO has been unable to identify another agency that has the resources to meet its unique accounting system needs, and because the quality of the accounting information from HUD has improved significantly, OFHEO is considering continued use of HUD accounting and financial management systems. OFHEO officials have also been encouraged by HUD’s willingness to work with them to ensure that the new accounting system meets OFHEO’s requirements and needs. 
OFHEO has made considerable progress in establishing its key functions—examining the enterprises’ financial condition to ensure their financial safety and soundness as well as developing and implementing the required risk-based test to determine how much capital the enterprises need (the “stress test”). After hiring its key professional staff and establishing working relationships with the enterprises, OFHEO conducted its first examinations, purchased and installed the computer system needed to run the stress test, and obtained the data needed for the financial modeling and research that underlie the test. Additionally, OFHEO established plans to guide its work at least through the end of fiscal year 1995. OFHEO has met most of its legislated requirements, including making determinations of the enterprises’ capital adequacy during the transition period prior to the release of the risk-based capital standards. Despite this progress, OFHEO has experienced delays, causing some important activities to fall behind schedule. For example, OFHEO did not meet the legislatively mandated deadline for issuing final regulations establishing the risk-based test of the enterprises’ capital adequacy. OFHEO has established intermediate milestones and now expects to publish proposed stress test regulations in late 1995 and final regulations in 1996. These delays, while unexpected, are not unusual; most, such as hiring qualified staff and creating internal processes, seem related to establishing a new organization. OFHEO has taken steps to address each problem encountered. OFHEO has established its examination function, adopting a general examination methodology and setting out a 2-year workplan. In 1994, OFHEO completed its first examinations and began its second, along with some off-site supervisory monitoring activities. OFHEO has experienced some unexpected delays that are related to establishing a new organization. 
As a result of these setbacks, some activities scheduled to be done by the end of 1994 are behind schedule. The act gave OFHEO broad authority to examine the enterprises. The act requires OFHEO to do an annual on-site examination of each enterprise to evaluate its financial condition for ensuring financial safety and soundness. In addition, the act authorizes OFHEO to do other examinations necessary to ensure the financial safety and soundness of the enterprises. OFHEO plans to meet its mandate by doing a combination of on-site examinations and off-site monitoring. Examinations are to assess the safety and soundness of the enterprises and the adequacy of their books and records. Supervisory monitoring is the ongoing assessment of the enterprises’ financial condition and performance. Monitoring projects include special studies of enterprise activities, quarterly company analyses, and the development of a call report. OFHEO uses a “top down” examination approach to identify the risks inherent in the enterprises’ activities and to determine whether risks are prudently managed and controlled. This approach has three levels. First, OFHEO examiners are to meet and evaluate senior management, analyze strategic and business plans, determine the quality of internal and external audits, and verify records and control systems. Second, the examiners are to expand the examination to include further testing, statistical sampling, and analyses of the enterprises’ systems. If the second-level review raises any questions, a team of examiners is to do a detailed third-level review of these concerns. By September 1994, OEO had planned its activities through calendar year 1995. According to this schedule, OEO would have seven or more projects ongoing at all times—at least two examinations at each of the enterprises, along with at least three supervisory monitoring projects. 
OEO’s director characterized this workplan as “ambitious.” Table 3 summarizes OEO’s workplan from July 1994 through the end of 1995. It shows, for example, that during the first half of 1995, OEO staff are to complete “general,” “specialty,” and books and records examinations of both enterprises. In the general examination, OEO staff will (1) examine each enterprise’s risk management by assessing the adequacy of management processes to identify, measure, monitor, and control the risk exposure of the enterprise; (2) examine a functional area of each enterprise’s secondary market operations, such as its marketing or mortgage securitization; or (3) assess the adequacy of some aspect of each enterprise’s business or product lines, such as its single- or multifamily guarantee programs. The specialty examination is to cover the enterprises’ electronic data processing (EDP). The objective of these examinations is to ensure that EDP systems used by the enterprises are adequate, reliable, accurate, and operated within a secure environment. OFHEO plans to test major systems to verify the data and report integrity. In the books and records examination, OEO staff are to evaluate the accounting and auditing to determine the reliability of the enterprises’ financial information. OEO’s small staff has not been able to keep up with its workplan. OFHEO completed its first on-site examinations, focusing on the enterprises’ use of off-balance sheet financial derivatives, in November 1994. OFHEO reported that the enterprises used derivatives for sound business purposes and that derivatives activities were consistent with the enterprises’ objectives. OEO began its second examination of both enterprises, the corporate examinations, in October 1994 and anticipates completing those reports in April 1995, 4 months behind its original schedule. OEO has completed one part of its planned supervisory monitoring activities (an analysis of the enterprises’ strategic and business plans). 
But two other activities that were scheduled to be done by the end of 1994—the development of OFHEO’s call report and the third quarter company analysis of the enterprises—are also behind schedule. All activities planned for 1995 are behind schedule as well. The reasons OFHEO has fallen behind seem related to complications in establishing a new organization. First, according to the director of OEO, OFHEO had trouble hiring qualified examination staff. Occasionally, OFHEO found it hard to attract qualified candidates because it was new and not well known. According to OFHEO’s Director, there are a limited number of individuals with the particular skills that OFHEO needs, forcing it to compete with other financial regulators for qualified staff. OFHEO also noted that it had difficulty competing with private salary offers. In addition, some employees who accepted employment with OFHEO were unable to begin as early as OFHEO management wanted. Second, OFHEO needed to establish its internal review processes to help ensure quality reporting. Upon completing its first examinations, key OFHEO management officials reviewed the draft report while developing a review procedure. OFHEO hopes to streamline its review process in the future. Finally, senior OFHEO officials told us that the amount of information from the enterprises that its staff needs to review is voluminous. OFHEO may have underestimated the time needed for its small examination staff (five staff examiners and regulatory specialists at the end of the calendar year) to review and understand such large amounts of information. On the other hand, according to the director of OEO, the initial examination activities have proven to be valuable exercises in building OFHEO’s overall knowledge and expertise, which should help focus subsequent work. The act requires OFHEO to establish a risk-based capital test for the enterprises (the stress test) to determine the amount of capital the enterprises must hold. 
That amount must be adequate to last during a 10-year period (the stress period), which the act defines with specific parameters relating to credit risk, interest rate risk, new business, and other activities. The act defines an enterprise’s required risk-based capital level as equal to the amount calculated by applying the stress test, along with an additional 30 percent to allow for management and operations risks. Using this test, OFHEO is then to determine the enterprises’ capital adequacy, using classifications established in the act. Further, the act required the Director to issue final regulations establishing the stress test within 18 months of the Director’s appointment (i.e., by December 1, 1994). The Research, Analysis and Capital Standards section (RACS) is responsible for developing and implementing the stress test. By the end of fiscal year 1994, OFHEO had hired the majority of its RACS staff and completed the design, purchase, and installation of the computer hardware and software (called the Research Systems Environment, or RSE) needed to establish the financial simulation modeling capability to run the stress test. OFHEO has also begun developing the computer programs and databases that will allow it to simulate the enterprises’ businesses for purposes of the stress test and other tasks, such as the analysis of various economic scenarios, new financial products, proposed policy initiatives, and new business scenarios. As of October 1994, RACS had received all historical data requested from the enterprises. Among other things, RACS staff have transformed the data into a form necessary for simulation modeling, created historical data sets, begun building the algorithms, and started drafting parts of the description of the stress test. Despite these accomplishments, OFHEO did not meet the December 1, 1994, deadline imposed by its authorizing legislation regarding the development of the stress test. 
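The act’s capital definition reduces to a one-line calculation: the stress-test amount plus a 30 percent add-on. The sketch below is illustrative only; the dollar figure is invented and the function name is ours, not OFHEO’s.

```python
def required_risk_based_capital(stress_test_amount: float) -> float:
    """Risk-based capital requirement under the act: the amount produced
    by the stress test plus a 30 percent add-on for management and
    operations risks."""
    management_operations_add_on = 0.30
    return stress_test_amount * (1 + management_operations_add_on)

# A hypothetical $10 billion stress-test result implies $13 billion required.
print(required_risk_based_capital(10_000_000_000))
```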
OFHEO published the Advance Notice of Proposed Rulemaking in the Federal Register on February 8, 1995, announcing its intention to develop the regulation and soliciting public comment on a variety of issues. Following a 90-day comment period, OFHEO will consider those comments and incorporate them as appropriate into a proposed rule to be published later in 1995. OFHEO expects to publish the final regulations in 1996. The enterprises will then have 1 year to bring their capital levels into compliance. Although OFHEO did not meet the legislatively mandated deadline for issuing final regulations establishing the stress test of the enterprises’ capital adequacy and is slightly behind on its internal examination workplan, it has met other important operational and reporting requirements. For example, OFHEO submitted annual written reports to the Senate and House Banking Committees as required by the act, covering such topics as actions taken by OFHEO and descriptions of the financial safety and soundness of each enterprise. In addition, OFHEO submitted its annual report to the President and Congress, as required by the Federal Managers’ Financial Integrity Act of 1982 (FIA), on December 20, 1994. The report noted that OFHEO as a whole complied with FIA. Perhaps most important, OFHEO made quarterly determinations of the enterprises’ capital adequacy. During the 18-month period following enactment (i.e., from October 28, 1992, through April 27, 1994), OFHEO was to classify the enterprises as “adequately capitalized” if they maintained an amount of core capital that equaled or exceeded certain minimum capital levels that were to apply only during that “transition” period. OFHEO determined the enterprises to be adequately capitalized for the second, third, and fourth quarters of 1993 and for the first quarter of 1994, applying those transitional minimum capital standards. 
For the second and third quarters of 1994, OFHEO determined the enterprises to be adequately capitalized, applying other minimum capital standards contained in the act. The transition minimum capital levels are lower than those effective following the transition period. Table 4 summarizes the act’s deadlines, other major requirements, and actions taken by OFHEO through the end of calendar year 1994. We requested comments on a draft of this report from the Director of OFHEO. In its written comments, OFHEO agreed with the content and conclusions. OFHEO’s comments are reprinted in appendix I. We are sending copies of this report to the Director, OFHEO; the Secretary, HUD; and other interested parties. We will also make copies available to others on request. This report was prepared under the direction of William J. Kruvant, Assistant Director, Financial Institutions and Markets Issues. Other major contributors are shown in Appendix II. Please contact either Mr. Kruvant on (202) 942-3837 or me on (202) 512-8678 if you have any questions about this report. Herbert I. Dunn, Senior Attorney
Pursuant to a legislative requirement, GAO reviewed the status of the Office of Federal Housing Enterprise Oversight's (OFHEO) development, focusing on OFHEO progress in designing and instituting key management systems. GAO found that: (1) OFHEO has made considerable progress toward setting up its key management systems; (2) as of February 1995, OFHEO had most of its administrative structure in place with 53 of its 65 authorized staff, and had prepared for the implementation of its human resource management system; (3) although OFHEO has defined its financial and accounting system requirements and has worked with the Department of Housing and Urban Development (HUD) to meet its immediate accounting and administrative needs, it is having difficulties using HUD systems; (4) OFHEO has not decided whether to continue to use HUD systems, create and maintain its own systems, or contract out its accounting and administrative functions to another federal agency; (5) OFHEO has begun its mission-related programs, established the fundamentals of its examination function, adopted an overall examination framework, and set a 2-year workplan; (6) OFHEO has completed its first on-site examination of the Federal National Mortgage Association and the Federal Home Loan Mortgage Corporation and the design, purchase, and installation of computer software to support its financial research; (7) OFHEO has not met its legislative deadline to complete final regulations establishing the enterprises' risk-based capital stress test, but it has adopted intermediate milestones and expects to publish the regulations in 1996; and (8) although OFHEO staff is having difficulties meeting its 2-year workplan, OFHEO has generally met its legislative reporting requirements.
The IRS Restructuring and Reform Act of 1998 established a goal of having 80 percent of individual returns filed electronically by 2007. IRS worked to promote e-filing, and although rates increased steadily between 1998 and 2007, IRS did not meet the 80 percent goal. The IRS Oversight Board recommended extending the goal’s time period to 2012 and expanding the scope of the goal to include all major individual, business, and exempt organization returns. Table 1 shows e-filing rates by type of return for fiscal year 2010. Except for individual returns, those rates were far from the 80 percent goal. IRS is implementing the e-file mandate in two phases. In 2011, paid preparers who reasonably expect to file 100 or more individual, estate, or trust income tax returns are required to e-file. In 2012, the e-file requirement will apply to paid preparers who reasonably expect to file more than 10 individual, estate, or trust tax returns. IRS decided to implement the mandate in two phases to give preparers time to make any necessary changes to their business practices and to help IRS prepare for the anticipated volume in e-file applications. (See app. II for possible business changes affecting preparers as a result of the mandate as well as a comparison of the paper filing and e-filing processes.) In recent years, taxpayers have relied heavily on third party software companies and preparers to get tax information and prepare their tax returns. Figure 1 illustrates how taxpayers avoid direct interaction with IRS by working with third parties. In 2010, 91 percent of all tax returns were prepared using tax software—some by preparers and some by taxpayers. Approximately 70 percent of all 2010 returns were e-filed while the remainder were printed and mailed in on paper (see more details of preparation and filing methods in app. III). Also, as reported in IRS’s Return Preparer Review, many taxpayers often rely on third parties to assist them with their tax law questions. 
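The two-phase thresholds described above reduce to a simple decision rule. The sketch below is our own paraphrase of the phase-in, not IRS code; the function name and the treatment of years after 2012 are assumptions.

```python
def subject_to_efile_mandate(year: int, expected_returns: int) -> bool:
    """Whether a paid preparer of individual, estate, or trust returns
    falls under the e-file mandate, per the phase-in thresholds."""
    if year == 2011:
        return expected_returns >= 100  # 100 or more returns in 2011
    if year >= 2012:
        return expected_returns > 10    # more than 10 returns from 2012 on
    return False                        # no mandate before 2011

print(subject_to_efile_mandate(2011, 99))   # a 99-return preparer is exempt in 2011
print(subject_to_efile_mandate(2012, 11))   # but an 11-return preparer is covered in 2012
```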
IRS accepts e-filed tax returns through two systems, the legacy Electronic Management System (EMS) and the new Modernized e-File (MeF) system. During filing season 2011, only certain forms were accepted through the MeF system, so not all tax returns could be processed with the new system. IRS plans to add all of the forms currently accepted in EMS to MeF and discontinue use of EMS in October 2012. IRS officials said that MeF provides several benefits; for example, MeF accepts or rejects individual tax returns faster than the EMS and provides better explanations for rejections. MeF will also allow taxpayers to attach Portable Document Format (PDF) files to their tax returns, which will be useful in instances where taxpayers are required to submit additional documentation, such as settlement statements when claiming the First Time Homebuyer Credit. Another benefit is that MeF accepts prior year returns starting with tax year 2009. As we previously reported, IRS tested MeF during the 2010 filing season, but use was low. Industry stakeholders, who are major users of the e-filing systems, said MeF was unstable (i.e., the system often had down time, time-outs, slow servers, and delayed acknowledgments). Use of MeF increased during the 2011 filing season and industry stakeholders reported improvements and positive experiences with the system. IRS officials said that plans to exclusively use MeF for the 2013 filing season are viable. E-filed tax returns provide IRS with digital information. IRS transcribes select data from paper returns to convert it to a digital format. IRS’s policy is to post the same information from electronic and paper returns to its databases. Only information posted to its databases is readily available for use in IRS’s enforcement programs and audit selections, which means that similar paper and electronic returns have equal chances of being selected for audit. 
In addition to e-filing or transcribing, IRS can potentially obtain digital tax data from two-dimensional (2-D) bar coding. A 2-D bar code is a black and white grid that encodes tax return data. Tax software would print bar codes on paper tax returns and high-speed scanners would scan them and import the data into IRS’s systems. IRS released a study in December 2010 that found that using 2-D bar codes would provide significant flexibility and generate cost savings. Paper returns that were prepared with pens or typewriters rather than software would still have to be transcribed. As of August 12, 2011, IRS processed 108 million returns electronically and 29 million returns on paper, for an e-filing rate of about 79 percent, according to IRS’s weekly processing data. IRS officials had estimated that the mandate would increase the individual e-filing rate to 75 percent in 2011 and 77 percent in 2012. The e-file rate for preparers increased this year, to about 89 percent, which is an increase of about 11 percentage points over last year’s rate, according to data as of July 2, 2011, from the Individual Return Transaction File. Although the e-file mandate did not apply to individual taxpayers who self-prepared their returns using commercial tax software, their e-file rate also increased to about 71 percent or about 7 percentage points over last year’s rate. (See fig. 2.) Based on IRS processing data and interviews with select software companies, IRS did not have any problems processing the 7.9 million additional returns e-filed in 2011. While IRS has not conducted an analysis to determine what factors influenced e-file growth, officials in the Return Preparer Office said that the mandate was one of the main contributors. 
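The e-file rate cited above is a straightforward ratio of processed volumes. A minimal sketch using the reported figures (in millions of returns); the function name is our own:

```python
def e_file_rate_pct(e_filed: float, paper: float) -> float:
    """Percentage of processed returns that were filed electronically."""
    return 100.0 * e_filed / (e_filed + paper)

# Volumes processed as of August 12, 2011, in millions of returns.
rate = e_file_rate_pct(108, 29)
print(round(rate))  # → 79, matching the "about 79 percent" reported above
```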
According to IRS Forecasting Office officials, the increased e-file rate for self-prepared returns may have been due, in part, to IRS no longer mailing the paper forms and instructions to taxpayers, and to taxpayers becoming more comfortable with technology. By far the most common reason preparers cited for not e-filing was that the taxpayer asked to file on paper, as shown in figure 3. Preparers subject to the mandate who did not e-file a tax return were required to complete Form 8948, “Preparer Explanation for Not Filing Electronically,” and submit it with the return. IRS has not analyzed why taxpayers chose to file on paper, but it plans to do so in the future. Other reports about e-filing suggest some taxpayers have security concerns with e-filing. The preparers we talked with who were new to e-filing said they experienced increased costs and administrative burdens as a result of the mandate. Other preparers, who previously e-filed returns, told us that the mandate had little effect on their practice. We interviewed 26 preparers who were members of national preparer groups. Their views are not representative of the entire population of preparers, but they provide some examples of preparer experiences. Five of the 26 preparers we interviewed had not previously e-filed, and they provided a variety of examples of how the mandate affected them. One preparer noted that her software provider charged an additional fee to e-file returns. Another preparer stated he hired an additional administrative employee to help manage requirements that were new to him when e-filing, such as Form 8879, “IRS e-file Signature Authorization.” Several preparers told us that it took them several hours to make changes to rejected returns—a step that may not occur until months later, if at all, when filing a paper return. 
Another preparer who had not previously e-filed reported that she experienced a learning curve for e-filing, but after e-filing the first 10 or so returns, the process went more smoothly. Several preparers who had been e-filing prior to the mandate said they experienced some of these same problems when they first e-filed, but do not any longer. Some preparers who previously e-filed said that e-filing helped their businesses and found that the mandate did not change their operations greatly from the previous years. These preparers liked the convenience of e-filing and told us that it reduced the time needed to file returns, ensured the receipt of returns at IRS, and did not cost them any additional money, but in fact saved them money. One preparer noted that he had e-filed some returns in previous years, but this year was the first time that all of his clients e-filed due to his encouragement because of the mandate. He said he saved money by placing PDF versions of his clients’ returns on a secure network space for them to review before he e-filed the returns with IRS, reducing printing costs. Some preparers we interviewed relied heavily on software companies. One preparer noted that he attends tax law training conducted by his software company to get annual updates. Three of the preparers we interviewed noted that their software automatically generated the new forms needed to comply with the mandate (e.g., Form 8948), and two preparers said that they heard about the mandate through their software companies. IRS officials believe there are potentially three types of preparers who may not comply with the e-file mandate: preparers who do not know about the mandate or do not fully understand the requirements; preparers who know about the mandate, sign the tax return, but intentionally choose not to e-file; or preparers who complete tax returns but do not sign the returns or submit them electronically. 
IRS refers to these preparers as “ghost preparers.” IRS has preliminary plans to identify each of these types of noncompliant preparers, but the plans are not yet fully developed. To identify preparers who may be unaware of or deliberately noncompliant with the mandate, IRS plans to review paper tax returns and identify ones that were completed by a preparer and appear eligible for e-filing but have no Form 8948, “Preparer Explanation for Not Filing Electronically.” To identify preparers who are willfully noncompliant with the mandate, officials plan to review preparers’ use of Form 8948, but have not yet determined how they will use this information to identify noncompliant preparers. Officials in IRS’s Return Preparer Office also stated they plan to modify the existing e-file monitoring program—currently focused on e-file security—to include compliance with the e-file mandate. However, officials have not yet developed the selection criteria to determine which preparers they will send notices to and visit, nor have they determined how compliance with the e-file mandate will fit into the scope of their visits. IRS has some plans regarding ghost preparers that were developed to enforce new regulations for preparers. Among other things, IRS plans to send letters to taxpayers who submitted returns without preparer signatures yet appear to have had assistance completing their returns. IRS officials said that one reason they have not completed their compliance plans is that they do not yet know the extent of noncompliance with the mandate. IRS officials said an extensive compliance strategy may not be needed unless noncompliance rates are high. They said that because this is the first year of the mandate’s 2-year implementation period, there is not sufficient data to know whether noncompliance with the e-file mandate is high or will be high in the future. 
In fact, they suspect that noncompliance may be low, because e-file rates exceeded projections in the first year of the mandate. Nonetheless, they acknowledged that some noncompliance likely exists, and that there could be more next year when the mandate is fully effective and the threshold for the e-file requirement drops to more than 10 returns (down from 100 in 2011). A return that appears eligible for e-filing could include a paper return with a preparer tax identification number (PTIN) but no Form 8948, “Preparer Explanation for Not Filing Electronically.” Starting in 2011, tax return preparers must obtain a PTIN and use it to sign all returns they prepare, paper and electronic. 26 C.F.R. § 1.6109-2. IRS does not have specific authority under the Internal Revenue Code to penalize preparers who fail to comply with the e-file mandate, but it does have some authority to discipline preparers under the Department of the Treasury’s Circular No. 230. Circular 230 governs the practice of practitioners, including tax return preparers, before IRS, and provides IRS with a limited ability to sanction preparers for failing to e-file. IRS’s Office of Professional Responsibility (OPR) administers and enforces Circular 230 standards. OPR officials said that they do not plan to frequently impose sanctions against preparers for noncompliance with the e-file mandate because the administrative process is resource-intensive and the sanctions may also be harsher than necessary. Prior to imposing sanctions, OPR must provide practitioners with notice and an opportunity for a hearing. Building a case against a preparer is time-consuming, often taking longer than a filing season. Sanctions can include censure, suspension from practice before IRS, or disbarment. IRS can also impose monetary sanctions under Circular 230, but officials said they likely would not because the agency does not have the authority to collect any unpaid amounts—such cases must be referred to the Department of Justice, adding to the time and costs of enforcement. 
Because imposing monetary sanctions under Circular 230 is time- consuming and costly, IRS could benefit from separate penalty authority under the IRC. IRS already has authority under the IRC to impose penalties in other, similar circumstances. For example, IRS may impose a $50 penalty per return if a preparer neglects to sign a tax return or include a Preparer Tax Identification Number (PTIN) on a return. According to IRS’s penalty handbook, penalties exist to encourage voluntary compliance by supporting the standards of behavior required by the IRC. Granting IRS the authority to penalize for failing to e-file would build upon IRS’s existing penalty regime and provide a more commensurate sanction than those which can be imposed under Circular 230. Without such penalty authority, IRS may be limited in its ability to deter noncompliance and enforce the e-file mandate. According to the Director of the Return Preparer Office, IRS intends to conduct a “lessons learned” review of steps taken to implement the e-file mandate, but does not have a plan or schedule for doing so. As discussed in our previous reports, lessons learned can be useful tools for an organization to identify areas of improvement or document things that worked well. Areas of focus could include a review of staffing levels, timeliness of management decision making, and communication with the public. One lesson learned during the first year of the e-file mandate may be that additional staff helped IRS process e-file applications in a timely manner. (See app. IV for more information on the e-file application process.) Performing a lessons learned analysis on the e-file mandate for preparers could have future benefits because the fiscal year 2012 budget request for IRS included five legislative proposals for additional e-file mandates. Without identifying and documenting lessons learned, the knowledge could be lost for reasons such as staff turnover. 
The scope and depth of a lessons learned study should, of course, balance the costs of the study against the potential benefits. Paper returns limit the effectiveness of IRS’s enforcement and service programs. To control costs, IRS does not transcribe all the information on paper tax returns into its computer databases. In addition, as previously noted, IRS has a policy of posting the same information from electronic and paper returns to its databases, so that similar paper and electronic returns have equal chances of being selected for audit. In part, IRS’s intention with this policy is to avoid disincentives to e-filing. Consequently, if a line is not digitized from paper returns, it is not posted from electronic returns either, which limits the amount of information readily available for enforcement and service purposes. For example, taxpayers’ telephone numbers are not digitized. When IRS wants to obtain a phone number from a paper tax return, the number must be retrieved from the originally filed return, which takes extra time. Digitizing data can benefit taxpayers. In addition to faster refunds and improved convenience, IRS officials said that having more tax return information available electronically would improve audit selection, thus reducing burdensome audits for compliant taxpayers. Additionally, digital information can help some taxpayers get larger refunds, or reduce their taxes due. For example, using automated error checking, IRS corrected 7.7 million returns from taxpayers claiming the Making Work Pay credit, about 60 percent of which were in the taxpayer’s favor. These automated corrections, which reduced taxes, may not have been possible without digitizing relevant data from paper returns. According to IRS officials, digitizing and posting more comprehensive information from individual income tax returns could also facilitate enforcement efforts, expedite contacts for faster resolution, reduce handling costs, and increase compliance revenue. 
For example, in fiscal year 2010, IRS increased the amount of data it transcribed from Form 5405, “First-Time Home Buyer Credit.” It used this additional information to conduct prerefund compliance checks to ensure that taxpayers do not claim the credit in multiple years. We calculated IRS’s increased enforcement efforts prevented about $95 million in erroneous refunds in fiscal year 2010. Options for digitizing more paper tax return data include optical character recognition (OCR), bar coding, and transcription. An OCR system would read text directly from paper returns using optical scanners and recognition software and convert the text to digital data. IRS is not currently considering implementing OCR to obtain more digital data because of the high expense of the additional equipment needed. Instead, IRS is pursuing what it considers to be a less costly bar coding system and transcribing more data from paper returns. Two-dimensional (2-D) bar coding technology would capture data on v-coded returns, which are those returns that are prepared using software but are printed and mailed to IRS. All of the information on a return can be coded in bar codes, and unlike transcription, bar codes transfer data with 100 percent accuracy (although poorly printed bar codes may be rejected by the scanners). On a limited basis, IRS already uses bar coding technology to digitize the Schedule K-1, “Partner’s Share of Income, Deductions, Credits, etc.” This system, implemented in the early 1990s, was used to digitize about 15 million Schedule K-1s in fiscal year 2010. IRS also recently started to use bar codes to mask Social Security numbers on communication letters to preparers. IRS’s Submission Processing officials told us that they have written a proposal for a bar coding system, and IRS may request funding for it in the near future. Officials believe bar coding will improve tax return processing and reduce costs. 
As we previously reported, bar coding could contribute to IRS’s agency modernization goals and produce some of the same efficiencies as e-filing by replacing the labor-intensive transcription process and eliminating transcription errors. However, returns that were prepared without software—for example, with a typewriter or pen—would still require manual transcription, and bar coded returns would require some paper processing such as receiving and opening mail. A cost-benefit analysis for bar coding could include costs for processing and transcribing remaining paper returns. In our prior report, we recommended that IRS determine actions needed to require bar coding and related costs. IRS’s 2012 Revenue Proposals included a legislative proposal that would require all taxpayers who prepare their returns electronically but print and file them on paper to print the returns with a 2-D bar code. As of September 1, 2011, there had been no action on this legislative proposal. Officials also told us that even if a potential request is funded, it will be another 18 to 24 months before IRS could begin scanning individual paper returns using the technology. IRS has not yet produced a cost-benefit analysis or a return on investment study related to the bar coding initiative. Without a cost-benefit study, IRS management and Congress will lack key information useful for deciding whether to fund bar coding. IRS officials told us that if IRS is able to implement bar coding, they plan to model the bar code system on systems that states have developed and standardized, working in collaboration with software companies. As of 2007, 24 states and the District of Columbia were using bar code scanning to process some or all of their tax returns. State tax agencies reported that bar coding is quicker, more accurate, and less expensive than manual transcription of paper tax return data. 
A 2007 survey by the Federation of Tax Administrators asked the states that bar coded various state tax returns that year about savings they realized by bar coding instead of transcribing data manually. Eleven states provided answers with quantitative cost information; each of them reported that bar coding cost less than manual transcription. Manual transcription of data is the primary method currently available to IRS to digitize data from paper returns. IRS has analyzed the cost of transcribing all remaining lines on individual paper tax forms, and estimates that it would require 1,714 full-time equivalent (FTE) staff, costing about $71 million, for fiscal year 2012. However, IRS officials told us that IRS does not have funding at this time to transcribe all the remaining data lines from paper returns. According to IRS officials, IRS may increase the digital data available to its programs by transcribing selected lines, although the amount of transcription needed would be reduced if bar coding were implemented. If the additional transcription is phased in, a cost-benefit analysis that prioritizes data on a line-by-line basis could be used to determine which lines would have the lowest costs and greatest benefits. IRS has not completed such an analysis. IRS has taken some steps, such as developing priority lists of additional lines to transcribe, but has not quantified the costs or benefits of transcribing each line. Also, these listings were developed in different business operating divisions and are not integrated across divisions, so there is no ranking of the agency’s transcription priorities as a whole. Ranking transcription priorities could have benefits because the cost of transcription varies by line. We used IRS data to develop calculations to illustrate the potential variability of transcription costs across different tax return lines that IRS included in its priority listing. 
We estimated the costs of transcribing different lines from all paper tax returns submitted during a filing season, and found that costs varied from less than $1,000 to more than $500,000. We illustrate the variability of transcription costs by presenting averages for all lines on selected forms. For example, if IRS were to transcribe an average line on Form 1040 Schedule C from all paper tax returns for a filing season, it would cost $123,400. Costs for different lines on Schedule C would vary substantially around this average. Table 2 illustrates some of the average costs for lines on high-volume forms and schedules. (More details on our calculations are in app. V.) Because an increasing percentage of returns are e-filed, IRS could be at the tipping point where the service and compliance benefits of digitizing additional data from paper returns are greater than costs. Given today’s tight budget environment, additional resources for transcription would likely have to be moved from other areas within IRS. In deciding whether to transcribe more lines, IRS would have to balance the benefits of additional transcription against the value of the work foregone. Preparers and taxpayers wanting to e-file may not be able to because some forms have not been added to IRS’s e-file systems. We identified two high-volume individual forms that currently cannot be e-filed and do not have a time line to be added to MeF.

- Form 1040-X, “Amended U.S. Individual Income Tax Return.” About 6.9 million taxpayers submitted a Form 1040-X in fiscal year 2010. Several tax preparers we spoke with said it was a burden not to be able to e-file this form. Electronic Tax Administration officials said eventually they would like to enable e-filing of the 1040-X, but they have not done so because the technology would need to be developed to check the amended data against what was originally filed.
- Form 1040-NR, “U.S.
Nonresident Alien Income Tax Return.” About 621,000 taxpayers submitted a Form 1040-NR in 2010. IRS officials said that Form 1040-NR was not included in the decision process when IRS decided on the sequence of adding forms to e-file, but it might be in the future. Having a time line to add these high-volume or other forms to the e-file system could help IRS further achieve its e-filing goals, consistent with IRS’s e-Strategy for Growth. Without a time line, these high-volume or other forms may not be added to the e-file system. One reason IRS does not have a complete time line is that it has not developed a complete list of forms that cannot currently be e-filed. IRS’s Strategic Plan calls for using data and research across the organization to make informed decisions and allocate resources. Adding forms to the e-file system requires one-time expenditures, but ultimately may have compliance benefits, save IRS money, and reduce the burden on preparers and taxpayers. Without a list of forms that are not currently e-filed, IRS would not be able to analyze the costs and benefits of adding different forms in order to prioritize which forms to add. Further, as with additional transcription, IRS would need to weigh the benefits of shifting resources to enable forms to be e-filed against the value of the work forgone. E-filing provides important benefits to taxpayers, including faster refunds and more accurate returns. It provides a low-cost option for IRS to improve enforcement operations and services to taxpayers. This is especially important in an era of tight budgets when federal agencies will be expected to do more with less. The increased e-filing rates from the first year of the mandate are helping IRS do this. IRS would benefit from having increased penalty authority to enforce the mandate and deter noncompliance. There are also several steps IRS could take to reduce costs, obtain more digital data, and facilitate further growth in e-filing.
Documenting lessons learned from the current mandate might help with the implementation of future e-file mandates. Analysis of the costs and benefits of bar coding technology and additional transcription could better inform decisions about whether to digitize more data from paper returns. Similarly, developing a list and scheduling the addition of more forms to the e-file system could inform resource allocation decisions. Congress should consider amending the Internal Revenue Code to authorize IRS to assess penalties on preparers for failure to comply with section 6011(e)(3). To help increase electronic filing and to better target IRS’s efforts, we recommend that the Commissioner of Internal Revenue direct the appropriate officials to take the following five actions:

- develop a plan for and schedule to conduct a study that identifies and documents lessons learned from the implementation of the e-file mandate;
- determine whether and to what extent the benefits of bar coding would outweigh the costs;
- determine the relative costs and benefits of transcribing different individual lines of tax return data;
- develop and prioritize a list of forms that still need to be added to the Modernized e-File system; and
- create a timetable to add additional forms to the Modernized e-File system, particularly for high-volume forms, such as the 1040-X and 1040-NR.

We provided a draft of this report to the Commissioner of Internal Revenue for his review and comment. The Deputy Commissioner for Services and Enforcement provided written comments, which are reprinted in appendix VI. IRS also provided us with technical comments, which we incorporated into the report as appropriate. In response to our draft report, the Deputy Commissioner expressed appreciation to GAO for recognizing the noticeable increase in e-file participation this year and agreed with all five of our recommendations. However, the steps IRS outlined in its comments may not fully address two of our recommendations.
Regarding our recommendation on the relative costs and benefits of transcribing different individual lines of tax return data, IRS stated that it has determined the relative costs of transcribing individual lines. To fully address this recommendation, however, IRS should also quantify the benefits of transcribing individual lines and compare them to the individual costs. This analysis could inform budget decisions by allowing IRS to compare the option of additional transcription against any work foregone. Regarding our recommendation to develop and prioritize a list of forms that still need to be added to MeF, IRS outlined the next three releases scheduled for MeF (through filing season 2014). While IRS has a list of forms it plans to transfer from EMS to MeF, there are still some forms that cannot be e-filed on either system. To fully address this recommendation, IRS should also develop a list of forms and schedules that cannot currently be e-filed. A complete list would enable IRS to analyze the costs and benefits of adding different forms to MeF as well as prioritizing which forms to add first. We plan to send copies of this report to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We are also sending copies to the Commissioner of Internal Revenue, the Secretary of the Treasury, the Chairman of the IRS Oversight Board, and the Director of the Office of Management and Budget. This report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions or wish to discuss the material in this report further, please contact me at (202) 512-9110 or WhiteJ@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report can be found in appendix VII. 
To describe electronic filing (e-file) rates, the Internal Revenue Service’s (IRS) processing capacity, reasons why preparers did not e-file, and their experiences implementing the mandate, we obtained and analyzed data from IRS’s weekly processing reports and the Individual Return Transaction File from Submission Processing and the Return Preparer Office, respectively. Data included numbers of returns that were completed by taxpayers and preparers as well as the filing method—e-filed, “v-coded,” and paper filed. In addition, we obtained and analyzed data about Form 8948, “Preparer Explanation for Not Filing Electronically,” and Form 8944, “Preparer e-file Hardship Waiver Request.” To determine the reliability of IRS’s data, we interviewed IRS officials who created the reports, reviewed related documentation, and reviewed the data for obvious errors. We found the data to be sufficiently reliable for our purposes. We interviewed Submission Processing officials and representatives from the National Association of Computerized Tax Processors about how well IRS processed additional e-filed returns this year. We also interviewed 9 tax software companies and 26 representatives from tax preparation firms to obtain their views about the mandate’s implementation. We chose a nonrepresentative sample of preparers affiliated with national preparer groups: American Institute of Certified Public Accountants, National Association of Tax Professionals, and National Association of Enrolled Agents. Each group gathered 6 to 9 preparers for group interviews, and two of the groups identified 4 preparers whom we spoke with individually. Additionally, we spoke with one preparer who contacted us in response to our previous report on e-filing. To assess IRS’s plans to enforce the mandate and determine lessons learned, we reviewed the 2011 planning documents for the integrated preparer compliance strategy and E-file Monitoring Program.
We also interviewed officials from the Return Preparer Office about their compliance and enforcement plans for ensuring preparers were following the mandate. We interviewed officials from the Chief Counsel’s Office and the Office of Professional Responsibility to determine current options available to sanction preparers noncompliant with the e-file mandate. We also interviewed IRS officials from the Return Preparer Office about IRS plans for conducting a lessons learned study on the mandate’s implementation. To assess IRS’s analysis of options for digitizing more data from paper returns, we reviewed IRS’s 2008 proposal, Modernized Submission Processing: Solution Concept Briefing, to add a bar coding system to process paper returns and analyzed IRS’s priority transcription list developed by IRS’s Deputy Commissioner’s Office for Service and Enforcement (Service and Enforcement). Using IRS data, such as staff cost per keystroke and volume of paper forms, we estimated costs to transcribe additional lines of data (see app. V). To determine the reliability of IRS’s data, we interviewed IRS officials who developed the transcription priority listing, reviewed related documentation, and reviewed the data for obvious errors. We found these data to be sufficiently reliable for our purposes. We shared our calculations with IRS officials in Service and Enforcement who agreed with our approach. Also, we interviewed IRS officials from Submission Processing and Service and Enforcement about their plans to implement bar coding technology and transcribe additional lines of data and any analysis of the costs and benefits of implementing such methods. 
To determine whether there are any tax forms IRS cannot accept electronically and assess IRS’s plans for adding them to the e-filing system, we compared a list and time line of all tax forms that Submission Processing planned to add to the e-file system to a list of all existing IRS forms obtained from the Forms and Publications division. We also interviewed officials from the Wage and Investment division and the offices of Electronic Tax Administration and Submission Processing about their plans to add more forms to the e-file system. For each objective, we also interviewed officials at IRS’s office of Electronic Tax Administration and Return Preparer Office. Our work was done primarily at IRS Headquarters in Washington, D.C., and its division offices in New Carrollton, Maryland, and Atlanta, Georgia, where the IRS officials who manage the e-file mandate implementation are located. We conducted this performance audit from March 2011 to October 2011, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Tax return preparers who have never electronically filed (e-filed) a tax return may need to make some changes in their business practices as a result of the e-file mandate. Some preparers may need to purchase a tax software package to enable them to e-file. Most preparers will also need to apply to become an Authorized E-file Provider with the Internal Revenue Service (IRS), which allows them to submit electronic tax returns to IRS.
New steps in preparing returns for those who have never e-filed could include obtaining a taxpayer’s signature on Form 8879, “IRS e-file Signature Authorization,” to document that the taxpayer has reviewed the return and that it is ready for transmission to IRS. Also, preparers who e-file receive an acknowledgement from IRS stating that the return was accepted or rejected into IRS’s e-file system. When a return is rejected, the preparer must correct the error, sometimes with more information from the taxpayer, in order to resubmit it to IRS. In instances when a preparer needs to file a return on paper, the preparer must submit Form 8948, “Preparer Explanation for Not Filing Electronically.” Figure 4 compares the processes preparers go through to submit a return electronically versus on paper. As of July 2, 2011, almost 60 percent (about 76 million) of all individual income tax returns were completed by a preparer and the remainder (54 million) were self-prepared by taxpayers (see fig. 5). Filing methods include electronic filing (e-filing) and paper filing. Returns that are prepared using software, but are printed and mailed to the Internal Revenue Service (IRS), are called “v-coded” returns. In order to e-file, a preparer must be an Authorized E-file Provider. The requirements to become an Authorized E-file Provider include submitting an application and passing background and suitability requirements. The Internal Revenue Service (IRS) issues an Electronic Filing Identification Number (EFIN) to firms or sole practitioners who meet these requirements. As of June 30, 2011, IRS processed 36,714 applications for preparers to become Authorized E-file Providers, 19 percent more than during the same time period in 2010—an increase that IRS officials said was due predominantly to the mandate.
Applications that were submitted electronically had an average processing time of 18 days, while those submitted on paper had an average processing time of 26 days—both within IRS’s normal 45-day processing time. Overall, 89 percent of e-file applications were processed in fewer than 45 days. For 2012, when the mandate threshold is lowered to more than 10 returns, IRS officials project that e-file applications will increase by 38 percent over the average annual applications based on prior years. Electronic Products and Support Services officials anticipate the preparers who apply to become Authorized E-file Providers in 2012 will require additional assistance resulting in longer calls or multiple calls. As shown in table 3, IRS officials told us they will need 11 additional full-time equivalents (FTE) to manage this workload. We used Internal Revenue Service (IRS) data to develop calculations to illustrate the potential variability of transcription costs across different tax return lines that IRS included in its priority listing. The formula we used to calculate the cost of transcribing a line of data is the following:

Cost of transcribing a line = cost/keystroke × average keystrokes per line × paper volume of the form × occurrence rate of the line

All of these elements can vary:

- Cost/keystroke is based on the hourly rate for transcription staff multiplied by the number of keystrokes per hour. The number of keystrokes per hour varies slightly for different forms.
- Average keystrokes per line varies for different data lines, from 1 to several hundred, with most under 10, as shown in figure 7.
- Paper volume of the form is the number of forms that are submitted to IRS on paper. For example, if 10,000 taxpayers submit paper returns that include a given form, the paper volume of that form is 10,000.
Paper volume is related to the total volume of the form and the e-file rate of the form: Paper volume = # taxpayers who submit the form × (1 − e-file rate of form)

- E-file rates vary significantly for different forms, as shown in figure 6; for example, 78 percent of Form 8863s were e-filed for tax year 2009, compared to 55 percent of Schedule C’s.
- Number of taxpayers who submit the form varies significantly for different forms, from under 10,000 to over 50 million.
- Occurrence rate of the line is the rate at which the line is filled in. Some lines are left blank most of the time, while others are filled in more often or always. Occurrence rates vary from 1 percent to 100 percent.

IRS has all of these data for over 500 lines identified by its Business Operating Divisions as high priorities for transcription. As an example of variations in these factors, different e-file rates for some high-volume forms are shown in figure 6. All other variables being equal, a line on a form such as Schedule C with a 55 percent e-file rate (45 percent paper file rate) would be about twice as expensive to transcribe as a line on a form such as Form 8863 with a 78 percent e-file rate (22 percent paper file rate). This is because there would be about twice as many returns from which to transcribe that line (45 ÷ 22). As another example, figure 7 is a frequency chart showing that most lines would require 1 to 15 keystrokes to transcribe, while some would require 46 or more. All other variables being equal, a line that required 46 keystrokes would be 46 times more expensive to transcribe than one requiring 1 keystroke. In addition to the contact named above, Libby Mixon, Assistant Director; Amy Bowser; Michele Fejfar; Cynthia Saunders; Robyn Trotter; and Meredith Trauner made key contributions to this report.
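The per-line cost formula and the Schedule C versus Form 8863 comparison can be sketched as a short calculation. The sketch below is illustrative only: every dollar figure and volume is a hypothetical example, not an IRS figure.

```python
# Illustrative sketch of the per-line transcription cost formula described
# above. All input values below are hypothetical, not IRS data.

def line_transcription_cost(cost_per_keystroke, keystrokes_per_line,
                            total_form_volume, efile_rate, occurrence_rate):
    """Estimated cost of transcribing one line from all paper filings of a form.

    paper volume = total form volume x (1 - e-file rate)
    cost = cost/keystroke x keystrokes/line x paper volume x occurrence rate
    """
    paper_volume = total_form_volume * (1 - efile_rate)
    return (cost_per_keystroke * keystrokes_per_line
            * paper_volume * occurrence_rate)

# Hypothetical inputs: $0.002 per keystroke, a 5-keystroke line,
# 5 million total filings, and a line filled in 60 percent of the time.
cost_55_pct_efiled = line_transcription_cost(0.002, 5, 5_000_000, 0.55, 0.60)
cost_78_pct_efiled = line_transcription_cost(0.002, 5, 5_000_000, 0.78, 0.60)

# With all other factors equal, the 55-percent-e-filed form costs about
# twice as much to transcribe as the 78-percent-e-filed form (45 / 22).
print(round(cost_55_pct_efiled / cost_78_pct_efiled, 2))  # → 2.05
```

Varying any one factor (keystrokes per line, occurrence rate, or paper volume) scales the cost proportionally, which is why, as the report notes, line costs can range from under $1,000 to over $500,000.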
The Internal Revenue Service's (IRS) goal is to receive 80 percent of all major types of tax returns electronically by 2012. Legislation passed in November 2009 supports the 80 percent goal for individual income tax returns by requiring tax return preparers who file more than 10 individual returns per year to file them electronically, or e-file. GAO was asked to review IRS's implementation of this e-file mandate. Specifically, GAO (1) described e-file rates and preparers' experiences implementing the mandate, (2) assessed IRS's plans to enforce the mandate, (3) assessed IRS's analysis of options for digitizing more data from paper returns, and (4) determined whether there are any tax forms IRS cannot accept electronically and assessed IRS's plans for adding them to the e-file system. To conduct these analyses, GAO reviewed IRS processing data and e-file planning documents, and interviewed IRS officials and 26 members of national preparer organizations. In 2011, 79 percent of all individual tax returns were e-filed, a noticeable increase over prior years. Both preparer and self-prepared e-file rates increased, which IRS officials attributed to different factors. They said the e-file mandate was one key factor in the growth of preparer e-filing. Preparers GAO interviewed who were new to e-filing said they experienced increased costs and administrative burdens due to the mandate. Several preparers who had been e-filing prior to the mandate said they experienced some of these same problems when they first e-filed, but they now find that e-filing helps their business--for example, by reducing the time needed to file returns. IRS's plans to identify preparers who are not complying with the mandate are not fully developed because IRS does not know the extent of noncompliance and it may be low. Nonetheless, officials stated some noncompliance likely exists and may increase in 2012 when the mandate applies to more preparers. 
Regardless of the extent, IRS does not have authority under the IRC to assess penalties on preparers who fail to comply. IRS may be able to impose sanctions under Department of Treasury regulations that govern practice before IRS. However, the process is costly and the penalties, which could include suspension of practice, may be harsher than needed. IRS is considering pursuing two options to digitize more data--bar coding and additional transcription. IRS does not transcribe all lines from paper returns. IRS's policy is to post the same information from electronic and paper returns to its databases, so that similar paper and electronic returns have equal chances of being audited. IRS has not analyzed the costs and benefits of these options, which could support informed funding decisions. Some forms cannot be e-filed, including two relatively high-volume forms for amended returns and nonresident aliens. IRS has not developed a complete list of forms that cannot currently be e-filed nor does it have a time line for adding them to the e-file system. Without adding forms such as these to the system, IRS will limit e-filing's growth potential. Congress should consider amending the Internal Revenue Code (IRC) to provide IRS with penalty authority for preparer noncompliance with the mandate. GAO also recommends, among other things, that IRS conduct analyses on the costs and benefits of implementing bar coding and additional transcription and create a time line and list of forms to be added to the e-file system. IRS agreed with the recommendations.
The United States has approximately 500 airports served by scheduled airlines. Airports have long served as places for planes to take off and land, consisting of runways, control towers, terminals and other facilities that directly served airlines’ passengers and cargo. Airlines play a key role in the functioning of airport systems because they make decisions about which airports to serve and how frequently to provide service. Airlines may consider a number of factors in making these decisions, such as the presence of regional businesses and residents who are potential customers, the market share that can be obtained, the effects on their service network, and the service provided by competing carriers. Over the last three decades some airports began providing a greater range of passenger and business services and increasing their concessions to increase their revenue stream. Airport operators began to view airports as a destination as well as a place from which to take off and land. By the early 2000s, many airports focused on upscale concessions, such as exclusive restaurants and designer boutiques, and other premium services, such as rental car facilities and parking facilities linked to the airport, to help maximize revenue generation. John Kasarda, an airport and development expert, along with other researchers, noted these changes occurring at airports, both nationally and internationally, and began researching commercial development on airport property—referred to as the airport city—more than 15 years ago. According to Kasarda and other researchers, development on the airport spills over to the surrounding region and results in a new urban growth form with the airport at its center. (See app. IV for a selected bibliography.) 
According to Kasarda, this new urban growth form, for which he coined the phrase, “aerotropolis,” is similar to the growth of the traditional metropolis, in which the central city is linked to the suburbs through a surface transportation system. Studies on growth around airports have found that businesses that require or benefit from air transport seek locations near the airport, extending as far as 20 miles. Regional corporate headquarters, information and communications technology complexes, retail, hotel and entertainment centers, manufacturing facilities, trade representative offices, big-box retail stores, health, wellness, and fitness centers, conference centers, and residential developments, for example, are increasingly being established near airports as a part of airport-centric development. Researchers have noted that plans for such development involve arrangements for targeted development to facilitate the efficient flow of surface traffic, attract complementary businesses, and mitigate environmental contaminants usually associated with airports to increase the speed at which airport-centric development occurs. Researchers have also noted that the benefits from the development on the airport property reach far beyond the airport into the surrounding region, which can, in turn, reciprocally benefit the airport. Globally, aviation and airport systems vary and, in practice, the approaches to airport-centric development have varied. In several European countries and in the United States, airports were established decades ago and commercial development around most of those airports evolved in a piecemeal or ad-hoc way, without centralized planning or the cooperative efforts of airport operators, local and regional planners, and business developers. In the United States, the last large airports serving scheduled airlines that have been newly constructed on previously undeveloped land were the Denver (1995) and Dallas/Fort Worth (1974) International Airports. 
Operators at airports like Hong Kong and Incheon in South Korea are beginning to incorporate elements of commercial development at their airports and some operators have introduced policies or incentives to encourage targeted airport-dependent land use and development at and around the airports. In countries in which the aviation and airport systems are much newer, such as China and the United Arab Emirates, officials are employing a centralized planning and cooperative approach to rapidly expand commercial development on and around airports. Airport operators at domestic airports with scheduled airline service rely on revenue from two types of activities: aeronautical and nonaeronautical. Aeronautical activities at an airport occur on the airfield or in the terminal areas where airlines operate. For purposes of this report, revenue generated from aeronautical activities includes fees airports charge airlines to operate within the airport and other fees paid or collected by aircraft operators, including Passenger Facility Charges (PFCs), and Airport Improvement Program (AIP) grants. The PFC was introduced by the Omnibus Budget Reconciliation Act of 1990, Pub. L. No. 101-508, § 9110, 104 Stat. 1388, codified as amended at 49 U.S.C. § 40117. AIP grants are financed through taxes on aviation fuel and passenger airline tickets. Airports may also receive capital funds through state and local sources in addition to federal funds. Airport operators may also borrow the funds needed to finance capital projects through municipal bond markets. Airport revenue, including PFCs, may be used to pay debt service on bonds issued for eligible projects. Because funds from bonds are issued based on projected airport revenue, they are not considered by FAA to be a separate source of airport revenue.
Nonaeronautical activities include food and beverage and retail concessions; parking; automobile rentals; and rent on land and non-terminal facilities, such as manufacturing, warehousing, and freight forwarding. Nonaeronautical revenue may be used to reduce payments by airlines and may also be used to maintain and improve commercial services. (See fig. 2.) At some airports, terminals used by specific airlines are also financed and built through agreements between the airlines and the airports. Some airport operators are pursuing public-private partnerships (P3s) to finance their commercial development efforts, outside of their financial agreements with airlines. P3s are negotiated contractual agreements between public entities, such as an airport, and a private entity such as a contractor or developer. P3 contractual arrangements can allow developers to build and operate a facility and then transfer the facility to the airport, although there are various types of P3 arrangements. Under one of these types of arrangements, the private developer provides all or part of the financing and intends to capture its development or management fees. This type of P3 arrangement can provide the airport with the most leverage for commercial development. Based on our research, we found that officials from airports and jurisdictions considered the following factors when pursuing airport-centric development: (1) development at the airport, (2) air and surface connectivity, (3) funding sources for development, (4) development in the region, and (5) collaboration among stakeholders. (See fig. 3.) “Development at the airport” refers to existing infrastructure already in place and the actions of airport operators to enhance the viability of their airport by focusing on commercial activities to increase airports’ aeronautical and nonaeronautical revenue.
“Air and surface connectivity” includes the routes taken by passengers or cargo to and from the airport to and from other destinations that may be enhanced by highway, rail, and port construction and additional airline routes. “Funding sources for development” include the funding for airport- centric developments as well as airport operations. “Development in the region” involves leveraging existing regional assets, expanding existing assets, or attracting new employment opportunities and business activity. “Collaboration among stakeholders” refers to the various actions that stakeholders can take to reach the goals and objectives that may further airport-centric development. An airport’s ability to generate revenue and contribute to the regional economy depends on its ability to attract airline service, passengers, and cargo shipments. Airport operators’ development efforts occur on airport property and involve: (1) providing services that directly support airline operations; (2) providing an expanded number and type of services within the airport terminals for passengers and visitors from the region; and (3) developing services for passengers and businesses, including airlines, on airport property but outside of the terminal areas. Officials at most of the airports in our review believed that their ability to attract and retain airlines was necessary to spur airport development. In particular, airline operators pay for their use of airport services ranging from the use of runways and cargo facilities, to the use of gates and ticket counters. Revenue from airlines for these services constitutes an important component of total airport revenue. In an effort to attract airlines and generate additional revenue, most airport operators we interviewed are expanding the number and type of services they offer, and some offer financial incentives. 
For example, some airport operators have begun offering services such as catering, maintenance, and warehousing that airlines or other third parties previously provided. Miami International Airport officials said that they waive landing fees for international and low-cost carriers for the first 2 years in which the airlines schedule flights to Miami—forgoing some current airport operational revenue to help increase future airline operations while capturing more nonaeronautical revenue. As airport operators seek to attract revenue from passengers and visitors, they are renovating their terminals or improving their physical designs to improve the flow of people to the shops, concessions, and gates. The operators are also increasing the number and quality of retail offerings and services—such as wine bars, massage spas, health care clinics, and high fashion shops—offered to passengers and, in some cases, visitors from the local area. (See fig. 4.) For example, the Miami International Airport was named one of the top 10 U.S. airports for dining and one of the world’s top 10 airports for retail shopping. The new $1.7 billion Tom Bradley International Terminal at Los Angeles International Airport is to contain 140,000 square feet for premier dining, retail shopping, and airline club lounges. Also, the Atlanta City Council approved a $3-billion concession contract for 126 food and beverage locations and 24 retail locations at the Hartsfield-Jackson Atlanta International Airport. Airport officials we spoke with at Miami and Los Angeles International Airports cited the importance to the regional economy of passengers who arrive at or depart from their airports, rather than those connecting to other flights. Tourism is an important draw in each of these regions, and airport officials have improved airport facilities to more efficiently process and admit international visitors and tourists through security and immigration and customs checkpoints.
Airport officials at these airports said that Customs and Border Protection (CBP) staffing can be insufficient at peak travel times and were concerned that international travelers might avoid their airports because of screening delays. These officials also believe that improvements to their airports could increase the rate of passenger and cargo processing if a sufficient number of CBP agents were available to staff inspection booths at peak travel times. However, many U.S. airports lack the space to expand their security facilities and, therefore, may need to identify innovative approaches to overcoming screening delays. Many airport operators are also developing airport property outside the terminal area to attract businesses and to use available land to generate revenue. Some have organized their management structures to include development or real estate offices to coordinate with airport management, developers, and public agencies. They are establishing commercial services and activities, such as hotels, parking facilities, and logistics parks, or leasing land for short- and medium-term use until it is needed in the future. Most airport officials we spoke with said that the amount of available land on the airport property was a factor in their ability to attract commercial activities to the airport. Officials varied in the type and extent of commercial activities or land uses they were pursuing. Airport officials from Hartsfield-Jackson Atlanta International Airport, Baltimore/Washington International Thurgood Marshall Airport, Los Angeles International Airport, and Miami International Airport said that the amount of land they had available limited their development options on airport property. According to officials in Miami, state law enabled the Florida Department of Transportation and airport officials to obtain land adjacent to the airport property for the development of an intermodal transportation center.
This center contains the rental car facility and connections to the Metrorail and TriRail commuter rail systems that serve Miami and nearby cities. Airport officials at Indianapolis International Airport partnered with a community college to develop a worker-training program in logistics and distribution at the airport to meet an anticipated growing need for this skill. Officials at Lambert-St. Louis International Airport would like to develop cargo services, including warehouses and cold storage facilities, to attract cargo operations that could generate revenue at the airport. Their goal is to use cargo revenue to lower the cost of passenger flights in an attempt to increase passenger traffic. In addition, airports with land not being used for operations have found ways to generate revenue through temporary or short-term leases of airport property while also reserving the land for future aeronautical needs. We found that Indianapolis and Denver International Airports plan to develop solar energy farms on airport property; Denver International Airport produces more solar energy than any other airport with scheduled airline service in the United States. Officials at Dallas/Fort Worth International Airport have leased a portion of the airport property for oil extraction. See figure 5 for examples of such airport land use and development. In two localities, airport and regional officials indicated that activities in the region were contributing to the commercial viability of the airport. Regional officials told us that, in part, because of the U.S. automotive industry’s presence in Detroit, several of the leading Asian automobile manufacturers have established research and development facilities in the Detroit metropolitan area.
In addition, these officials said that a vibrant Asian community, the availability of a highly skilled engineering workforce, and access to institutions of higher education offering degrees relevant to their careers attracted Asian automobile industry researchers. According to regional officials, these activities have increased the traffic at Detroit Metropolitan Wayne County Airport. Officials from Miami International Airport and the region discussed the symbiotic relationship between Miami International Airport and the Port of Miami. According to an official, the Port of Miami is the busiest cruise port in the world, with over 4 million passengers annually. Of these passengers, 60 percent arrived in Miami through Miami International Airport. The construction of rail connections between the port, downtown area, and airport is expected to facilitate connectivity. These officials noted that the expansion of the Panama Canal to accommodate larger ships in 2014 will benefit Miami because larger ships will be able to use the Port of Miami, which is expected to help promote trade with Asia. If this expected growth in cargo operations occurs, then, according to port officials, the Miami region will benefit and, in turn, contribute to the growth of Miami International Airport by expanding its potential passenger and cargo markets. Most stakeholders we spoke with believe that a region’s ability to connect to a variety of domestic and international locations by air is key to attracting businesses, tourists, and cargo to the region. Airport and regional officials sought to increase the number and frequency of flights to a variety of locations by establishing new relationships with foreign airports and business groups and offering incentives to airlines to serve additional destinations. For example, airport and regional officials in Atlanta and Paris have begun cooperating on ways to promote their airport areas for business exchanges.
Similarly, Miami International Airport officials visited and reciprocally hosted South African business groups to encourage business development and the flow of passengers and cargo between their respective regions. In July 2012, airport operators at the Memphis International Airport began a $1 million incentive program to attract new, non-stop domestic and international routes. Despite efforts by airports to maintain good air connectivity to many locations, it is airlines that make decisions about what routes to fly. Most airport officials noted that airlines’ decisions to change or eliminate routes can sometimes negatively affect the region’s level of air connectivity. In recent years, General Mitchell International Airport in Milwaukee experienced a period of growth followed by a decline because of airline business decisions. According to the airport operator, the presence of three low-cost carriers increased the number of flight offerings, but the subsequent merger of two of those airlines and the relocation of the third resulted in fewer flights. Similarly, officials in Memphis were concerned about the potential loss of air services after a major airline announced plans to decrease its passenger services at Memphis International Airport in response to low demand that made routes uneconomical for the airline. Among our selection of airports, international connectivity varied. For example, as of February 2013, 33 airlines served Dulles International Airport, offering direct flights to more than 40 destinations in Canada, Mexico, the Caribbean, and parts of Europe, South America, Asia, Africa, and the Middle East. By comparison, direct flights from General Mitchell International Airport in Milwaukee were limited to destinations in the United States, Mexico, and the Caribbean.
Because cargo may be transported below the passenger decks of airplanes, a decline in international passenger flight offerings may affect a region’s potential to directly provide cargo access to international markets for those businesses that rely on air services. In addition to air connectivity, officials we spoke with discussed the need to improve the connectivity of their surface transportation systems to attract businesses, especially those that handle time-sensitive or high-value goods such as perishable items or electronic components. These officials cited the importance of identifying and marketing the various transportation modes of a particular region. For example, as mentioned, officials at the Port of Miami estimated that 60 percent of its cruise ship passengers arrive by air. This, they said, highlights the importance of an efficient connection between the airport and the seaport for moving tourists to and from cruise ships. Miami International Airport officials also highlighted the importance of airport-to-highway connections for importing and distributing perishable items, including flowers and produce, from Latin America. They noted that many trucks transport cargo from the airport to the federal highway system daily, helping to distribute perishable food and produce imports throughout the United States. These officials also said that a viaduct dedicated to truck traffic was being built to stem a projected loss of $1 billion in revenue by 2015 because of congestion on the roads between the cargo area inside the airport and the warehouses and freight forwarders in the nearby city of Doral. At Indianapolis International Airport, officials cited the region’s rail and highway connectivity and the presence of FedEx facilities as important infrastructure to support a growing logistics, freight-forwarding, and distribution industry.
Airport officials and other regional stakeholders in Memphis market its “Four Rs”—road, river, rail, and runway—to appeal to businesses that may rely on intermodal transportation. Figure 6 illustrates intermodal transportation systems and surface transportation connectivity. With access to multiple modes of transport, businesses can determine shipping routes and methods that are cost-effective and meet customer requirements. (See table 1 for examples of multimodal transportation at airports.) FedEx, for example, selects the most cost-effective mode based on fuel prices, distance traveled, and travel time. Stakeholders in the Memphis region also noted that the airport’s geographic location provides companies with timely access to major U.S. markets and many places around the world. According to a FedEx official, the company’s ability to reach two-thirds of the U.S. population within 12 hours and most international locations overnight was a key factor in locating in Memphis. Many stakeholders told us that a region cannot fully benefit from an efficiently run airport if the surface transportation needed to access the airport is congested. Surface congestion can increase costs, contribute to system inefficiencies, and delay on-time freight delivery. These stakeholders also considered ways to increase public transportation options to relieve congestion on roads while providing alternate transportation options to travelers and airport workers. Most regions in our review offer local bus services to their airports, and many also offer local rail services or plan to offer new rail connections between the airport and the central business district, as in Miami, Washington, D.C., Los Angeles, and Denver. A well-integrated surface transportation network can also provide the basis for efficient logistics and distribution services within a region.
Many experts we spoke to agreed that intermodal networks consisting of highways, rail, or waterways linked to the airport may facilitate airport-centric development by improving mobility and allowing more people and cargo to access the airport. Some experts we spoke with, as well as literature sources, cited advancements in high-speed rail and the potential for code sharing across modes of transportation as a way to free up additional capacity at the airport in some congested regions of the country and extend the region served by an airport. Under a code sharing arrangement, integrated air-rail ticketing would allow a passenger to use both modes of travel through one purchase transaction. Some experts believe that high-speed rail development could help contribute to the commercial development of airports. California is considering options as it develops its high-speed rail capabilities, with one potential plan to link San Diego through Los Angeles to Sacramento and San Francisco, and airport officials in Miami told us that they believed a high-speed rail link between Miami and Orlando would increase the number of passengers at Miami International Airport. Other experts, however, believe that high-speed rail could divert demand from air transport and reduce the need for commercial development at airports, especially if high-speed rail is not directly linked to airports. Currently, roads and light rail affect airport development more than high-speed rail. Transportation improvements for airport-centric development may entail large capital-intensive projects that generally require pooling money from different sources. Federal funds are often sought, but airport and regional officials also seek other sources of funds for their development efforts, particularly intermodal funding and public-private partnership funding. The failure to obtain adequate funding can prevent or inhibit the growth of these airport-centric projects.
Officials from the City and County of St. Louis and the State of Missouri were unable to obtain the funding they needed for airport-centric development after airport and private sector representatives formed the Midwest-China Hub Commission in 2008. After establishing a freight and commercial logistics facility at the airport, the commission sought to attract regularly scheduled freight service to Asia and Latin America and obtain foreign direct investment. Members of the commission visited China, established an office in Beijing, and hosted visitors from China. A Chinese cargo airline began scheduled flights to St. Louis in 2011; however, its operations were not sustainable without financial assistance, according to airport officials at Lambert-St. Louis International Airport. Stakeholders sought to obtain $480 million in state funding to (1) subsidize the cost of initially flying goods out of the St. Louis region to China, (2) provide tax breaks to companies engaging in foreign trade at the airport, and (3) subsidize the cost of constructing millions of square feet of warehouse and factory space in locations across the region. The commission was unable to obtain state funding and is delaying its airport-centric development efforts while it seeks funding from other sources. According to an April 2011 evaluation of the Global TransPark logistics airport-centric effort in Kinston, North Carolina, the TransPark received a total of $248 million in funding from local, state, federal, and private sources—far short of the estimated $733 million total cost of the complex. Evaluators found that the TransPark Authority was unable to repay a $25 million loan that had been made in 1993 because operations at the TransPark did not generate sufficient funds to repay the loan. The balance of the loan—$39.9 million as of February 2011, because of interest accrual—is to be repaid by the state of North Carolina.
One expert attributed the Global TransPark’s failure to attract sufficient business activities to recover costs in a timely manner to an original project design that was too optimistic and to the financial risks surrounding large-scale projects. As shown in table 2, the federal government has a number of programs designed to support regional transportation infrastructure development, which some regions have leveraged as part of their airport-centric development efforts. Although federal sources of funding—such as those identified above—can sometimes be used to develop intermodal capabilities at U.S. airports, the primary planning and development responsibilities for these efforts rest with state and local government agencies. State and locally generated money—such as state transportation trust funds, dedicated sales taxes, and highway tolls—has been used to match federal funds. For example, contributions from the Commonwealth of Virginia, the Metropolitan Washington Airports Authority, Fairfax and Loudoun Counties, and toll revenues from the Dulles Toll Road will be used to pay for the Washington Metropolitan Area Transit Authority Metrorail connection to Dulles International Airport. A “transportation improvement district” was also established to help fund the Metrorail extension from downtown Washington, D.C., to the airport. States may also have their own credit assistance programs. For example, Florida used funds from its credit assistance bank to provide loans to help develop the Miami Intermodal Center at Miami International Airport. The Miami Intermodal Center has levied a customer facility charge on car rentals to pay for its consolidated rental car facility. Some airport operators and an expert with whom we spoke said that FAA’s grant assurances and obligations—that is, requirements on the use of federally administered funds—can limit an airport operator’s ability to fund certain types of intermodal projects.
For example, airport operators may use PFCs or AIP grants to fund rail access at airports if the project is owned by the airport, located on airport property, and used exclusively by airport passengers and employees. PFCs may be used to fund related activities when they are a necessary part of an eligible access road or facility. This requirement on the use of PFCs exists to avoid revenue diversion—the use of airport revenue for other than airport purposes. According to an expert we spoke with, the failure to meet these conditions may preclude an airport from using such funding for a transit line that connects communities on either side of the airport, because FAA would require that riders on the transit line begin or end their journeys at the airport rather than bypass it. There are also federal restrictions on the development and sale of airport-owned land and the use of revenues generated from an airport’s land because of the grant assurances an airport accepts as a condition of receiving federal land or funds. Other funding sources on which airport operators generally rely to improve or commercially develop their airports, such as state grants and bonds, also involve various assurances. A public-private partnership (P3) for airport-centric development usually refers to a contractual agreement formed between a public airport and private sector developers under which the developers renovate or construct and then operate or manage an airport’s facilities on airport land. Some airport operators view public-private partnership arrangements to commercially develop airports as an alternative or supplementary funding source when other funds may be limited by federal restrictions or grant assurances.
The particular arrangements of public-private partnerships vary considerably, but developers may finance, design, build, operate, and maintain an enterprise (including charging fees) for a specific time period, after which ownership of the enterprise reverts to the airport in most P3 arrangements. The Port Authority of New York and New Jersey partnered with the private sector for the $1.2-billion expansion of Terminal 4 at John F. Kennedy International Airport, which, when it opened in 2001, represented the largest P3 of its kind at a North American airport. In October 2012, the Port Authority issued a request for qualifications for a P3 to replace LaGuardia’s main terminal, in addition to new roads and taxiways, with anticipated construction beginning in 2014. We have previously reported on the benefits and trade-offs of P3s and have expressed concern about how the public interest is protected in these projects. The most recent surface transportation reauthorization, MAP-21, requires the Secretary of Transportation to develop standardized P3 agreements, identify best practices, and provide technical assistance to P3 project sponsors. Denver International Airport illustrates another possible type of public-private funding, although a type that is likely to be of limited use to most U.S. airports. Denver International Airport, as part of the Denver Department of Aviation, receives funds from the sale and development of Denver’s previous airport, Stapleton. This land is being zoned as mixed-use and being developed primarily as residential communities.
To develop the Stapleton site, a private non-profit corporation established by the city of Denver and the Denver Urban Renewal Authority had to regrade the site to provide adequate storm water drainage; install water, sewer, and other utility lines; develop roads and interchanges; plan and develop parks and trails; preserve wetlands; and install community facilities, such as fire stations, a recreation center, a branch library, and schools. Most local government and private sector officials with whom we spoke promoted their region’s existing assets and proximity to the airport to attract or expand businesses that benefit from air connectivity. Officials identified a variety of mechanisms to attract businesses, such as (1) linking airport development to commercial activities in the region, (2) identifying and leveraging unique cultural aspects of the region and promoting tourism or the general quality of life offered by the area, (3) developing industry clusters, and (4) designing policies and providing incentives to attract businesses to the region. Local government officials at many localities indicated that activities on airport grounds contributed to development in the region around the airport. Officials at Hartsfield-Jackson Atlanta International Airport told us that, based on a 2009 economic impact study of the airport, they expected a new international terminal to increase airport operations, attract businesses, and create jobs in the region surrounding the airport. These airport officials noted that regional stakeholders have already established two new hotels, an office building, and the Georgia International Convention Center on property near the airport. The Mayor of Denver has said that development at the Denver International Airport has the potential to spur commercial development in the Denver region for decades, including development along a planned commuter rail corridor that connects the airport with downtown Denver.
The Regional Transportation District is building electrified commuter rail to Denver International Airport; airport officials are planning to build the terminal station and, in conjunction with the city of Denver, one or two additional stations to encourage development in the resulting corridor between the city of Denver and the airport. Cargo service airports can also contribute to regional development. For example, Alliance Global Logistics Hub, an industrial cargo airport near Dallas/Fort Worth International Airport, developed in 1982, has attracted more than $7 billion in investments and 290 corporate residents, including 50 companies listed on the Fortune 500, Global 500, or Forbes’ Top List of Private Firms. Although the North Carolina Global TransPark in Kinston, North Carolina, has not attracted the investment or new jobs initially envisioned, some development has taken place. The TransPark has, as of February 2013, attracted 13 tenants. One of the tenants received a Job Development Incentive Grant from the North Carolina Department of Commerce and is expected to employ more than 1,000 workers by 2014. An official representing the TransPark said that new developments like the TransPark take time to fully install supporting transportation infrastructure and utilities to attract tenants, but expressed hope that additional companies will locate at the TransPark. Officials at Los Angeles and Miami International Airports cited cultural ties to other regions of the world and tourism as important drivers of passenger and cargo traffic. For example, airport officials in Los Angeles said that the city has large Korean and Iranian populations, and Miami airport officials spoke of the city’s close cultural ties with Latin America. Officials in Los Angeles said that the large population of Asians in the Los Angeles region has reinforced strong cultural ties to Asian countries and has helped to support trade with these countries.
Similarly, officials in Miami said that tourists from Latin America and the Caribbean visit Miami, in part, because of the cultural familiarity, the access to world-class tourist attractions and cruise ships, and the shopping options that may be unavailable in their countries of origin. In both locations, these regional characteristics helped to attract visitors who use the airport and spend money in the region. Officials in Memphis said that some of their development efforts, such as developing Elvis Presley Boulevard and the potential redevelopment of downtown Memphis, are intended to increase tourism and attract more passenger flights to the area. They also noted that by drawing visitors to the region, the airport would generate additional revenue and airlines might offer more flights to the region. In some regions, local officials told us that they were trying to attract complementary businesses to form industry clusters that might benefit from the availability of a skilled and interchangeable, or transferable, workforce. For example, officials in Miami have been fostering growth in the region’s banking, insurance, and legal services by promoting its multicultural and multilingual workforce and its direct air connectivity to Latin America and the Caribbean. Stakeholders involved with business development in the Baltimore region expected that the influx of military jobs at Fort Meade would result in the growth of defense contracting jobs in the region. These officials anticipate that defense contracting jobs will, in turn, lead to additional growth in the region. For example, one regional stakeholder noted that the growing number of government consultants rely heavily on air transportation and area hotels. Executives at a private corporation in Detroit told us that they are trying to attract compatible businesses that could leverage the region’s strength in research and development in automobile electronics.
Most local officials we spoke with have implemented state, regional, or local tax-based incentives and land use policies to attract businesses and developers to their regions. For example, airport officials in Indiana, Maryland, Missouri, North Carolina, Texas, and Virginia have applied to the U.S. Department of Commerce for foreign trade zone designation at and around their airports to support tax-free manufacturing. Stakeholders in Detroit leveraged state-approved tax incentives to attract businesses that rely on the airport for commerce to a 60,000-acre area around the Detroit Metropolitan Wayne County and Willow Run Airports. Local planning officials have affected particular land uses near airports through planning policies, including policies related to noise, environmental quality (air, water, wetland, and species protection), and zoning restrictions. This can help with airport-centric development because it prioritizes limited developable land for uses that are compatible with airport operations and compliant with local, state, and federal requirements. At the Hartsfield-Jackson Atlanta International Airport, a private developer cleaned up an abandoned industrial site east of the airport, sold a portion of the land to the City of Atlanta for the airport’s use, and sold another portion to a high-end auto manufacturer. The auto manufacturer expressed interest in purchasing more land from the developer to attract another high-end auto manufacturer, with which it would develop and share a track on the site to draw prospective buyers to fly into the region to test drive cars. Airport officials in three of the regions in our study said they have considered the potential to use one airport primarily for passengers and a nearby airport for cargo; however, those officials also identified potential challenges to splitting passenger and cargo operations.
For example, officials from Los Angeles International Airport told us that it would be inefficient to move their cargo operations to nearby Ontario Airport in the Los Angeles region because much of the cargo passing through their airport travels in the lower deck of passenger planes. Officials at Detroit Metropolitan Wayne County Airport said they use nearby Willow Run industrial airport for air cargo to complement the passenger and cargo services offered at Detroit Metropolitan Wayne County Airport, but cited limitations in Willow Run’s runway length and condition. An official at Alliance Global Logistics Hub, on the other hand, said that the passenger services offered at nearby Dallas/Fort Worth International Airport complemented the cargo-only services offered at Alliance Global Logistics Hub because Dallas/Fort Worth International Airport offers passenger service to most major cities in the United States, Mexico, and Canada within 4 hours. While the economic viability of all-cargo operations at Ontario and Willow Run Airports is not yet known, we have previously identified regional airport planning as one approach to addressing constrained capacity. Some metropolitan planning organizations (MPOs) conduct regional airport planning as a part of their activities. In 2010, we found that metropolitan planning organizations that conduct regional airport planning have no authority to determine the priorities of airport improvement projects in their regions; MPOs do have authority over surface transportation projects. As a result, the regional airport plans that MPOs produce have little direct influence over airport capital investment and other decisions. Support and funding of regional airports depend on the FAA’s assessment of the project. Thus, GAO recommended that FAA develop a review process for regional airport system planning.
According to FAA officials, FAA agreed to review its Airport System Planning guidance and revise or clarify it, if necessary, although the agency believed its current guidance was adequate. Airport-centric development efforts in the regions we studied span multiple jurisdictions and involve stakeholders from the airport, the private sector, and the government sector. Based on our review of literature, our previous work, and discussions with stakeholders in the regions we visited, collaboration among various stakeholders can help achieve specific goals. Consultation with residents near the airport and with city officials representing the interest of their constituents is an important step in the airport-centric development process. Without collaboration or agreement among stakeholders, development plans may be difficult to implement. Los Angeles International Airport officials, for example, would like to further expand the airport’s northern airfield to address safety and efficiency issues related to aircraft operations (including accommodating larger aircraft); however, given the proximity of the airport to residential areas and community opposition to potential noise issues, there has been little public or political support for the airport’s expansion. As our previous work has shown, early and continuous community involvement is critical to efficient and timely project implementation. For example, some airport operators are using AIP funds for long-term environmental planning; continuously self-monitoring their environmental footprints to help prepare for and address environmental issues; and soliciting community concerns to anticipate and address environmental issues. In addition, several FAA processes have been established to help airports address environmental concerns, such as a streamlined environmental review process for airport projects where expansion is critical for handling the growth of air traffic. 
In the future, as airports like Los Angeles International Airport learn to better manage their environmental impacts, airports may be better able to garner community support for airport expansion. The stakeholders we spoke with gave examples of the ways in which they collaborated with other stakeholders, such as establishing new groups to promote airport-centric development. Regional stakeholders in Baltimore, Detroit, Indianapolis, Memphis, Milwaukee, and St. Louis formed multilateral committees including stakeholders representing the airport, the public sector, and the private sector. While these committees all had a general focus on airport-centric development, they focused on different aspects of airport-centric development and functioned in different ways. For example:

The BWI Partnership is a business-development advocacy group, representing the airport and approximately 200 developers, hotels, law firms, banks, and local government members, and focused on supporting business development and efficient transportation in the airport region.

The Detroit Region Aerotropolis Development Corporation (ADC) is a public-private economic development agency that works to attract businesses that rely on air cargo and passenger services. It is comprised of and funded by stakeholders representing seven local communities, two counties, and the airport. The Next Michigan Development Act, a state incentive program, encourages inter-jurisdictional cooperation among local entities and allows them to create tax incentive zones targeted at businesses in the transportation and logistics sectors. According to the ADC, the act contains a clause to prevent job displacement from another region of the State of Michigan.

Airport officials led airport-centric development efforts at and around the Indianapolis International Airport. 
In their early planning stages, airport officials invited representatives from nine neighboring jurisdictions, including the City of Indianapolis, to sign a nonbinding memorandum of understanding to explore potential targeted airport-centric development opportunities, based on land availability and existing assets and infrastructure, such as warehouses and rail connections, that might be utilized to support airport-centric development; the officials increased the number of stakeholders with whom to collaborate by expanding the area of consideration from a 5-mile radius to an 8-mile radius; and they began monthly stakeholder meetings. Airport officials also partnered with a local community college to establish a supply-chain logistics and freight-forwarding technical school on airport property to meet the anticipated demand for skilled workers in this trade. In 2006, the Greater Memphis Chamber of Commerce created the Memphis Aerotropolis Steering Committee, comprised of public and private sector stakeholders, to coordinate development efforts in selected targeted development areas surrounding the airport. This group has established various work groups to focus on gateways and beautification, marketing and branding, corridor business development, and access and transportation. The City of Memphis was awarded $1.26 million from the U.S. Department of Housing and Urban Development to partner with the Greater Memphis Chamber of Commerce, the University of Memphis, and Shelby County to develop a master plan for airport-centered economic development efforts. The federal funds were matched with $900,000 in local funds and in-kind services. 
The Airport Gateway Business Association (AGBA) was created in 2005 to provide leadership in planning, promoting, and developing the vitality of the area around the General Mitchell International Airport in Milwaukee, marketed as the “Gateway to Milwaukee.” Funding is provided through an Airport Gateway Business Improvement District, managed by AGBA, and stakeholders represent the State of Wisconsin, the City and the County of Milwaukee, the Milwaukee 7 (seven counties united around an agenda to grow, expand and attract world-class businesses and talent), Visit Milwaukee, and the General Mitchell International Airport. Officials representing customs brokers and freight forwarders in Miami indicated the importance of collaboration between stakeholders. That is, the industry depends on well-established relationships between those responsible for importing and inspecting cargo and those routing it to its final destinations. Infrastructure is also necessary, including refrigerated storage, fumigation facilities, and information technology systems. Another official, representing floral importers of Florida, said that the established infrastructure and relationships needed to support a supply chain are not easy to replicate and help to ensure that Miami International Airport does not lose its position to another airport as the primary gateway for most of the flowers imported into the United States. Officials at Lambert-St. Louis International Airport also explained that freight forwarders serve as “gatekeepers,” determining what route freight takes to get from its origin to its destination. Based on our review of studies and discussion with regional stakeholders, we have developed an organizational framework that describes 5 factors—development at the airport, air and surface connectivity, funding mechanisms for development, development in the region, and stakeholder collaboration—to consider when approaching airport-centric development. 
We could not determine whether each of these factors is needed or whether one factor could be substituted for another. However, consideration of these factors may be helpful as government officials and private-sector developers prepare their plans and analyses when considering undertaking airport-centric development or projects supporting airport-centric development. Some countries enjoy the concurrent greenfield development of airports, the regions, and the facilities that comprise airport-centric development. In the United States, however, where there are many long-established airports, most airport-centric development is implemented through a series of targeted projects and activities to build upon what already exists. The success of these projects or activities does not ensure the success of an entire airport-centric development. Similarly, the presence of an economically viable airport in an economically successful region does not necessarily mean that targeted airport-centric development efforts were responsible for the success. We provided a draft of this report to the Federal Aviation Administration (FAA), the Department of Commerce (DOC), the Department of Housing and Urban Development (HUD), the Environmental Protection Agency (EPA), and representatives from the Airports Council International (ACI), Airlines for America (A4A), the Cargo Airline Association (CAA), and academic experts for review and comment. We invited airport and regional stakeholders to comment on the portions of this draft report that pertained to them. FAA, HUD, and EPA provided technical comments on the various programs under their purview, which we incorporated as appropriate. ACI, A4A, CAA, and the academic experts generally agreed with the approach and information in the draft. One expert indicated that an evaluative approach would have been more useful for policymakers. Some stakeholders provided technical information that we incorporated as appropriate. 
We are sending copies of this report to the Secretary of Transportation, the appropriate congressional committees, and others. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-2834 or dillinghamg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Airport-centric development is occurring in many countries of the world, including the United States. According to Kasarda, for example, Asia is second to North America (21 versus 40) in the number of airport-centric developments; Europe has 20 airport-centric developments. Globally, Kasarda identified 35 airport cities and 56 aerotropolises (see table 3). Some countries with developing economies, including China, India, South Korea, and the UAE, are building new airports in conjunction with planned cities on the airport property, called “airport cities,” and beyond airport property (“aerotropolises”) to provide services for travelers and shippers.

Appendix II: Profiles of U.S. Airport Regions

According to a regional business journal, the Maryland Board of Public Works approved and awarded a $10 million contract to design a new connection between two of the airport’s terminals. Airport officials said that the expansion, which will allow passengers to pass through security checkpoints to access shops and restaurants at the airport’s A, B, and C terminals without passing through any further checkpoints, is expected to cost about $100 million to complete. (Sernovitz, Daniel J., “BWI Airport awarded $10 million for terminal expansion,” Baltimore Business Journal, Baltimore, MD: April 12, 2011; accessed November 14, 2011, http://www.bizjournals.com/baltimore/print-edition/2011/08/12/bwi-airport-awarded-10-million.html.) 
According to the BWI Business Partnership website, the Partnership is a business development and transportation advocacy organization with nearly 175 business and government agency members representing local and regional businesses and local, state, and federal government agencies. Merchandise admitted to a foreign trade zone (FTZ) is not subject to customs tariffs until the goods leave the zone and are formally entered into U.S. Customs Territory. Merchandise that is shipped to foreign countries from FTZs is exempt from duty payments. This provision is especially useful to firms that import components in order to manufacture finished products for export.

This report describes the factors that our research and airport operators, government officials, developers, and other stakeholders identified as key considerations for airport-centric development. Specifically, this study describes the activities of stakeholders who are engaged in airport-centric development and the motivations and beliefs of those stakeholders with respect to their efforts. To determine the key characteristics of airport-centric development, in the United States and internationally, we first conducted a bibliographic search of relevant articles and books cited in the following databases. (See table 18.) We supplemented the citations we obtained from this search with those from the bibliographies of other studies we had obtained and recommendations from experts we interviewed. After screening the abstracts of these studies for relevance, we collected information from these studies for further analysis. To supplement the information obtained from our literature review, we spoke with federal officials at the Department of Transportation and its Federal Aviation Administration; the Economic Development and International Trade Administrations within the Department of Commerce; and the Environmental Protection Agency. We also spoke with experts in transportation, trade, logistics, and community development about airport-centric development issues. 
To obtain information about airport-centric activities, we selected a purposeful sample of airports based on the number of passengers and amount of cargo served; expert recommendations; and geographical representation. This selection procedure yielded the following 12 scheduled-airline and 2 industrial airports for closer study (see fig. 7). From this purposeful sample of 14 airports, we selected 7 sites to visit to understand the activities and perceptions of stakeholders; we conducted telephone interviews with representatives of, and stakeholders involved with, the other 7 airports. See table 19. To obtain a full range of relevant stakeholder perspectives on the airport-centric development efforts, we interviewed airport officials; executives from businesses located adjacent to or near airports; representatives of real estate development organizations; local and regional economic development specialists; and federal, state, and local government officials. We attempted to identify critics of airport-centric development in each airport region, but we were generally unable to do so. We conducted our interviews using a semi-structured approach that allowed our interviewees to provide the information that was most relevant for their airport and region in each of several broad areas. These areas included: challenges interviewees had experienced in their development efforts; ways they had addressed or might address those challenges; the likely success of their development efforts; factors that might facilitate or hinder development; any lessons learned or advice the interviewees identified for others interested in such development efforts; their assessment of the impact of the considerations for their initiative; and illustrative examples of how their development efforts had proceeded. 
This approach permitted stakeholders at each site to tailor information based on their own experiences, but it does not allow for generalizations about how the considerations may affect the progress of all airport-centric developments or whether such development should be considered at any given locality. To understand the level of development planned and efforts underway, we also reviewed available plans related to the airport-centric development efforts, including project plans and airport master plans. Based on our literature review and the interviews we conducted with experts, agency officials, and stakeholders, we identified the following factors considered by stakeholders at selected U.S. airports and regions when pursuing airport-centric development: (1) development at the airport, (2) air and surface connectivity, (3) funding sources for development, (4) development in the region, and (5) collaboration among stakeholders. In this report, we use these 5 factors to discuss how these considerations generally relate to airport-centric developments and provide our observations about how particular localities applied these considerations. Throughout the report we use the indefinite quantifiers “some,” “many,” and “most” to inform the reader of the approximate number of stakeholders or interviewees of a given type within the regions where we interviewed who agreed with a particular statement or idea, without actually stating the specific number of those in agreement in each case. To determine when to use each indefinite quantifier, we split the total of each type of stakeholder group into thirds, so that “some” refers to more than one but fewer than or equal to one-third of the group; “many” refers to more than one-third but fewer than or equal to two-thirds of the group; and “most” refers to more than two-thirds of the group but not the full group. The corresponding numeric range of values for each stakeholder group can be found in the table below. 
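The bucketing rule just described can be sketched in a few lines of code. This is our illustrative reconstruction, not GAO's actual procedure; the function name and the handling of counts of one or of the full group are our assumptions.

```python
def indefinite_quantifier(count, total):
    """Map a count of agreeing stakeholders to an indefinite quantifier.

    Illustrative reconstruction of the rule described in the text; the
    scale covers more than one but fewer than all members of the group.
    """
    if count <= 1 or count >= total:
        return None  # "one" and "all" fall outside the some/many/most scale
    third = total / 3
    if count <= third:
        return "some"  # more than one, up to one-third
    if count <= 2 * third:
        return "many"  # more than one-third, up to two-thirds
    return "most"      # more than two-thirds, short of the full group
```

For a group of 14 airport representatives, this rule maps counts of 10 through 13 to "most," consistent with the example given in the text.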
For example, most of the airport representatives would refer to between 10 and 13 (of the total 14).

In addition to the contact named above, Maria Edelstein, Assistant Director; Amy Abramowitz; Leia Dickerson; John Healey; William King; Kirsten Lauber; and Richard Scott, Ph.D., made key contributions to this report.

Appold, Stephen J., and John D. Kasarda. “The appropriate scale of US airport retail activities.” Journal of Air Transport Management, vol. 12, no. 6 (2006): 277-287.

Bogue, Donald Joseph, and Ernest Watson Burgess. Contributions to Urban Sociology. Chicago: University of Chicago Press, 1964.

Button, K. “Air Transportation Infrastructure in Developing Countries: Privatization and Deregulation.” In Aviation Infrastructure Performance: A Study in Comparative Political Economy, edited by Clifford Winston and Gines de Rus, 193-221. Washington, D.C.: Brookings Institution Press, 2008.

Chaudhuri, Sumana. “Impact of Privatization on Performance of Airport Infrastructure Projects in India: A Preliminary Study.” International Journal of Aviation Management, vol. 1, no. 1&2 (2011): 40-57.

Conway, H.M. The Airport City: Development Concepts for the 21st Century, Revised Edition. Atlanta, GA: Conway Publications, Inc., 1980.

Freestone, Robert. “Planning, Sustainability and Airport-Led Urban Development.” International Planning Studies, vol. 14, no. 2 (2009).

Greis, Noel P., and John D. Kasarda. “Enterprise Logistics in the Information Era.” California Management Review, vol. 39, no. 4 (1997): 55-78.

Ishutkina, M.A., and R. John Hansman. “Analysis of Interaction between Air Transportation and Economic Activity.” Cambridge, MA: International Center for Air Transportation, MIT, 2008.

Kasarda, John D. “Airport cities & the aerotropolis: New planning models.” Airport Innovation, (2007): 106-110.

Kasarda, John D. “Airport Cities.” Urban Land, April (2009): 56-60.

Kasarda, John D. “Airport-Related Industrial Development.” Urban Land, June (1996): 54-55.

Kasarda, John D.
“Asia’s emerging airport cities.” International Airport Review, vol. 10, no. 2 (2004): 63-66.

Kasarda, John D. “Shopping in the airport city and aerotropolis.” Research Review, vol. 15, no. 2 (2008): 50-56.

Kasarda, John D. “The aerotropolis and global competitiveness.” Diplomatic Courier, December (2011): 16-19.

Kasarda, John D. “The Global TransPark: Logistical Infrastructure for Industrial Advantage.” Reprint, Urban Land (1998).

Kasarda, John D. “The Rise of the Aerotropolis.” Next American City, Spring 2006. Accessed June 24, 2011.

Kasarda, John D. “Time-Based Competition & Industrial Location in the Fast Century.” Real Estate Issues, (1998/99): 24-29.

Kasarda, John D., and Greg Lindsay. Aerotropolis: The Way We’ll Live Next, 1st edition. New York, NY: Farrar, Straus and Giroux, 2011.

Kasarda, John D. “The Evolution of Airport Cities and the Aerotropolis.” Chapter 1 in Airport Cities: The Evolution. London: Insight Media, 2008.

Kasarda, John D., and David L. Sullivan. “Air Cargo, Liberalization, and Economic Development.” Annals of Air and Space Law, vol. XXXI (May 2006).

Kasarda, John D., and Jonathan Green. “Air cargo as an economic development engine: A note on opportunities and constraints.” Journal of Air Transport Management, vol. 11, no. 6 (2005): 459-462.

Morrison, Stephen A., and Clifford Winston. “Delayed: U.S. Aviation Infrastructure Policy at A Crossroads.” In Aviation Infrastructure Performance: A Study in Comparative Political Economy, edited by Clifford Winston and Gines de Rus, 7-35. Washington, D.C.: Brookings Institution Press, 2008.

Clark, Oliver, and John D. Kasarda, editors. Global Airport Cities. Twickenham, London: Insight Media, 2010.

Peneda, Mauro José Aguiar; Vasco Domingos Reis; Maria do Rosário M.R. Macário. “Critical Factors for Development of Airport Cities.”
Transportation Research Record: Journal of the Transportation Research Board, no. 2214 (2011): 1-9.

Sheffi, Yossi. Logistics Clusters: Delivering Value and Driving Growth. Boston, MA: MIT Press, 2012.

Stock, Gregory N.; Noel P. Greis; and John D. Kasarda. “Logistics, strategy and structure: A conceptual framework.” International Journal of Operations & Production Management, vol. 18, no. 1 (1998): 37-52.

Vastag, Gyula; John D. Kasarda; and Tonya Boone. “Logistical Support for Manufacturing Agility in Global Markets.” International Journal of Operations & Production Management, vol. 14, no. 11 (1994): 73-85.

Vespermann, Jan; Andreas Wald. “Long-term perspectives of intermodal integration at airports.” Journal of Airport Management, vol. 4, no. 3 (2010): 252-264.

Yeung, J.H.Y.; Waiman Cheung; Michael Ka-yiu Fung; Xiande Zhao; and Min Zhang. “The Air Cargo and Express Industry in Hong Kong: Economic Contribution and Competitiveness.” International Journal of Shipping and Transport Logistics, vol. 2, no. 3 (2010): 321-345.
GAO was asked to examine airport-centric development and the activities of airport operators and regional stakeholders to facilitate such development. In an effort to increase airports' efficiency in moving passengers and cargo while bolstering the economies of regions surrounding airports, some airport operators, government officials, and business owners are exploring opportunities to strategically develop airports and the regions around them. This report describes the factors considered and actions taken by airport operators, government officials, developers, and others to facilitate airport-centric development. To do this work, GAO identified five factors that facilitate airport-centric development from relevant literature, interviews with experts, and observations at selected U.S. airports and their surrounding regions. GAO examined these factors by reviewing relevant documents and interviewing stakeholders, including airport officials, business owners, representatives of development organizations, and federal, state, and local government officials. GAO selected 14 airports for more in-depth study. These airports were selected based on annual passenger enplanements and cargo amounts, and experts' recommendations. The findings from these 14 airports cannot be generalized but provide insights that may be of interest to stakeholders in other regions. GAO is not making recommendations in this report. The Department of Transportation, the Federal Aviation Administration, and others provided technical comments, which were incorporated as appropriate. GAO found that airport operators, government officials, real estate developers, and other regional stakeholders are taking actions consistent with five factors when pursuing airport-centric development (development on the airport property to enhance the airport's nonaeronautical revenue and development outside the airport that leverages a region's proximity to the airport). Development at the airport. 
Airport operators are developing or enhancing the number and types of services within airport terminals for passengers and visitors such as upscale shops and personal services; they are also developing services for passengers and businesses outside of the terminal areas but on airport property such as hotels and business centers. Air and surface connectivity. Most stakeholders GAO spoke with noted that a region's ability to connect to a variety of domestic and international destinations by air is important in attracting businesses, tourists, and cargo to the region. In addition to air connectivity, the routes taken by passengers or cargo to and from the airport may be enhanced by efficient highway, rail, and port connections. One example is the Metrorail extension, which will connect Dulles International Airport with downtown Washington DC. Funding sources. Transportation improvements for airport-centric development may entail large capital-intensive projects that generally require pooling money from different sources. The federal government has a number of programs, such as grants from the Economic Development Administration, designed to support regional transportation-infrastructure development. State and locally generated money--such as state transportation trust funds, dedicated sales taxes, and highway tolls--have been used to match federal funds. Stakeholders in Memphis, for example, were awarded a $1.26 million grant from the Department of Housing and Urban Development, matched with $900,000 in local funds and in-kind services, to develop a master plan for their airport-centric development efforts. The private sector may also provide funding through a public-private partnership agreement. Development in the region. 
Stakeholders GAO spoke with identified a variety of mechanisms to attract businesses, such as linking airport development to commercial activities in the region; identifying and leveraging unique cultural, tourist, or general qualities of the region; developing industry clusters (groups of complementary businesses); and designing policies or providing incentives to attract businesses to the region. Stakeholder collaboration. Collaboration among various stakeholders can help achieve specific airport-centric goals. Consultation with residents near the airport and with committees composed of representatives from the airport and the public and private sectors is important; the lack of such consultation can make it difficult to implement development plans. GAO found that multilateral committees representing airport, public-sector, and private-sector groups had been established to promote airport-centric development.
FPS was established in 1971 as the uniformed protection force for GSA government-occupied facilities. The mission of FPS is to render federal properties safe and secure for federal employees, officials, and visitors in a professional and cost-effective manner by deploying a highly trained and multi-disciplined police force. FPS was originally located within GSA’s Public Buildings Service (PBS). As part of PBS, FPS was responsible for providing law enforcement and security services to GSA’s tenants and the public at federal buildings nationwide. The Homeland Security Act of 2002 established DHS to prevent and mitigate the damage from terrorist attacks within the United States, which includes terrorism directed at federal facilities. Under the act, FPS was transferred from GSA to DHS. DHS later placed it within ICE. The President’s fiscal year 2010 budget requested the transfer of FPS from ICE to NPPD. Language in the budget request stated that FPS responsibilities, such as providing physical security and policing of federal buildings, establishing building security policy, and ensuring compliance, are outside the scope of ICE’s immigration and customs enforcement mission and are better aligned with NPPD’s mission. The transfer of FPS to NPPD became effective when the fiscal year 2010 DHS appropriations act was signed into law on October 28, 2009. Figure 1 shows FPS’s move within DHS from ICE to NPPD. To accomplish its mission, in 2011 FPS has a total budget authority of about $1 billion and currently employs 1,225 federal staff and about 13,000 contract guard staff to secure over 9,000 GSA-owned or -leased facilities. FPS conducts law enforcement activities as well as risk assessments to reduce facility vulnerability to criminal and terrorist threats and helps to ensure that facilities are secure and occupants are safe. 
For the transition, FPS, NPPD, ICE, and DHS headquarters components formed a Senior Working Group, co-chaired by the Senior Counselor to the Under Secretary of NPPD, the ICE Deputy Assistant Secretary for Management, and the FPS Director. DHS developed a transition plan, the August 2009 FPS-NPPD Transition Plan, which describes DHS’s overall transition planning process and milestones for completing the transition, among other things. The plan shifted FPS’s mission and responsibility for all of its mission-support functions, with the exception of financial accounting services and firearms and tactical training, from ICE to NPPD or other DHS components. While FPS has its own law enforcement personnel to perform its mission responsibilities, it does not perform all of its mission-support functions such as payroll, travel services, and contracting. For this reason, FPS has traditionally relied on GSA and ICE to carry out these functions. For example, while under GSA, FPS’s contracting functions were handled by the contracting component of GSA’s Public Buildings Service, and under ICE, by its Office of Acquisition. The transition plan noted that most transition tasks would be completed by October 2010. In addition, the transition plan noted staff-level working groups were formed that consisted of subject matter experts from each of the agencies, FPS, NPPD, and ICE, to plan in detail the transfer of FPS’s mission and each mission-support function. The working groups were tasked with planning, tracking issues related to the FPS transition, and reporting progress on the transition. Initially, 16 working groups were formed to carry out the transition in 18 mission-support functions, as reflected in figure 2. 
According to the transition plan, until the transition is complete, ICE is to continue to provide necessary management and operational services through continued agreements in support of FPS or until individual MOAs, MOUs, or SLAs are concluded with NPPD and other DHS headquarters components. For example, for fiscal year 2010, FPS and NPPD signed 12 SLAs with ICE, covering services such as training and development, security management, and IT services, 1 MOA for legal services, and 1 MOU for financial services. These agreements were meant to ensure that there were no lapses in services while mission-support functions were being transferred to either NPPD or DHS headquarters components. In October 2009, FPS’s facility protection mission transferred and its reporting channels were shifted from ICE to NPPD. The Under Secretary of NPPD—through delegation from the Secretary of Homeland Security— assumed operational control of FPS and its mission from ICE with the enactment of the fiscal year 2010 DHS appropriations act. Similarly, the Under Secretary delegated the authority and responsibility to the Director of FPS to continue FPS’s physical security and law enforcement services mission, consistent with the law enforcement authority for the protection of federal property. Upon its transition to NPPD, FPS became a component within the directorate. Figure 3 shows the location of FPS within NPPD’s organizational structure. According to FPS headquarters and regional officials we interviewed, the transition of FPS’s mission from ICE to NPPD occurred without degradation to the mission, and there has been minimal, if any, disruption to FPS’s field operations. Moreover, the regional officials said that the transition has not had an impact on the way FPS performs its mission on a daily basis. 
FPS officials stated that FPS continued to lead DHS’s security and law enforcement services at more than 9,000 GSA facilities nationwide, and its operational activities, such as conducting facility security assessments, conducting criminal investigations, and responding to critical incidents, continued uninterrupted during and after the transition. Since taking operational control of FPS in October 2009, NPPD and other DHS components have assumed responsibility for 13 of 18 FPS mission-support functions, but the transfer of the remaining 5 mission-support functions from ICE to NPPD or other DHS components has been delayed. In August 2009, DHS reported to Congress that the transition would be completed by October 2010 and estimated it would cost $14.6 million. However, DHS now reports that the transfer of 4 functions will not be completed until the end of fiscal year 2011 or start of fiscal year 2012 and that 1 of these functions will not be transferred until October 2012. For the delayed functions, ICE continues to provide mission support to FPS, and new or revised SLAs were developed to articulate the continuing time frames and services that ICE would provide to FPS. The 18 mission-support functions and their transfer status are presented in table 1. According to DHS officials responsible for executing the FPS transition, the transfer of the 5 mission-support functions will take longer than originally reported to Congress due to a number of factors, including unanticipated costs associated with building the infrastructure within NPPD and other DHS components to support areas such as IT services. As reflected in table 2, the delays in the transition schedule for the delayed mission-support functions range from almost 1 to 2 years. 
DHS officials explained that the transfer of four mission-support functions—business continuity and emergency preparedness, personnel security, facilities, and Equal Employment Opportunity (EEO)—is on track to occur by the end of fiscal year 2011 or start of fiscal year 2012. Specifically, DHS officials explained the following:

• All activities for the transfer of business continuity and emergency preparedness have been completed, but the transfer is waiting on NPPD to complete the building of a continuity of operations site, which, according to NPPD officials, will be complete by October 2011.

• NPPD has moved a Senior Executive Service (SES)-level director into position and is in the process of establishing an Office of Compliance and Security, which will provide compliance investigations, program review, personnel security, interior physical security, information security, and special security program services throughout NPPD. According to the Acting Director of the Office of Compliance and Security, the goal is to establish this office by October 2011.

• NPPD has filled three of the five positions that were created to support FPS facilities management. These personnel, according to the officials, are working with ICE to transfer projects, and all of them are expected to transfer by the end of fiscal year 2011.

• The only activity required for the transfer of EEO services is to hire the staff needed to support FPS within NPPD, which should be completed by the end of fiscal year 2011.

While DHS has successfully transferred FPS’s mission and the majority of its mission-support functions, deficiencies in the transition schedule for the transfer of IT services could limit DHS’s ability to ensure the timely transition of this important function. 
DHS’s transition plan called for working groups to develop comprehensive project management plans (i.e., detailed schedules) with detailed tasks and end dates for the individual mission-support functions to ensure critical path activities were identified, managed, and resourced. DHS did not develop these schedules for all the mission-support functions because, according to DHS officials, in some cases the transfer of a function—such as public affairs and legislative affairs—was relatively easy and did not need a schedule. However, the transfer of FPS’s nationwide IT infrastructure and field support is more complex, and because of this complexity, DHS developed a detailed schedule to manage the transfer of IT services, as called for in the transition plan. As we have previously reported, the success of fielding any program depends in part on having a reliable schedule that defines, among other things, when work activities will occur, how long they will take, and how they are related to one another. As such, the schedule not only provides a road map for systematic execution of a program, but also provides a means by which to gauge progress, identify and address potential problems, and promote accountability. Among other things, best practices and related federal guidance cited in our cost estimation guide call for a program schedule to be program-wide in scope, meaning that it should include the integrated breakdown of the work to be performed and expressly identify and define relationships and dependencies among work elements and the constraints affecting the start and completion of work elements. Table 3 presents a summary of best practices we have identified for applying a schedule as part of program management. Our analysis of the IT schedule found that it did not reflect our best practices for scheduling, as seen in table 4. 
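The critical path notion underlying these scheduling practices can be illustrated with a short sketch. The activities, durations, and dependencies below are hypothetical and are not drawn from DHS's actual IT transition schedule; the code simply computes the longest dependency chain through a toy activity network.

```python
from functools import lru_cache

# Hypothetical activity network for an IT transition: name -> (duration in
# weeks, tuple of predecessor activities). Illustrative only, not DHS data.
ACTIVITIES = {
    "design_network":   (6, ()),
    "procure_hardware": (8, ("design_network",)),
    "build_help_desk":  (5, ("design_network",)),
    "migrate_data":     (4, ("procure_hardware",)),
    "field_rollout":    (10, ("migrate_data", "build_help_desk")),
}

@lru_cache(maxsize=None)
def earliest_finish(name: str) -> int:
    """Earliest finish time: own duration plus the latest predecessor finish."""
    duration, preds = ACTIVITIES[name]
    return duration + max((earliest_finish(p) for p in preds), default=0)

# The project cannot finish before the longest dependency chain (critical path).
project_weeks = max(earliest_finish(a) for a in ACTIVITIES)
print(project_weeks)  # design (6) + procure (8) + migrate (4) + rollout (10) = 28
```

Any slip in an activity on this longest chain delays the entire project, which is why the best practices call for relationships and dependencies among work elements to be expressly identified in the schedule.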
We shared the results of our analysis with responsible DHS IT transition officials, who stated that they have taken note of the deficiencies and are taking steps to improve the schedule using the scheduling practices. According to these officials, they plan to work closely with staff in another NPPD component agency with the expertise necessary to improve the IT transition schedule. Nevertheless, if the schedule does not fully and accurately reflect the project, it will not serve as an appropriate basis for analysis and may result in unreliable completion dates, time extension requests, and delays. With regard to the transfer of the IT services function, it would be difficult for DHS to accurately predict the completion date for the IT transition without a more reliable schedule. Moreover, completing projects within projected time frames helps ensure agencies do not incur additional costs, which is especially important in a fiscally constrained environment. Ultimately, incorporating scheduling best practices into the IT transition schedule could help DHS better manage the completion of the transition and help provide reasonable assurance that the transfer is completed within its projected time frame. According to best practices for cost estimates, in addition to a reliable schedule, a reliable cost estimate is critical to the success of any program. A reliable cost estimate provides the basis for informed investment decision making, realistic budget formulation and program resourcing, meaningful progress measurement, proactive course correction when warranted, and accountability for results. Such an estimate is important for any agency, but especially an agency like FPS that is solely fee funded and has faced projected shortfalls in fee collections to cover operational costs. 
Federal financial accounting standards state that reliable information on the costs of federal programs and activities is crucial for effective management of government operations and recommend that full costs of programs or activities be reported so that decision makers have the information necessary to make informed decisions on resources for programs, activities, and outputs, and to help ensure that they get expected and efficient results. Drawing from federal cost-estimating organizations and industry, our cost estimation best practices list four characteristics of a high-quality and reliable cost estimate that management can use for making informed decisions—comprehensive, well-documented, accurate, and credible. In July 2008, the DHS Under Secretary for Management signed a memorandum stating DHS will standardize its cost-estimating process by using the best practices we identified. To implement the FPS transition, DHS, in 2009, estimated it would cost $14.6 million to complete the transition of FPS from ICE to NPPD. DHS’s estimate divided costs into three categories—personnel, financial management, and IT services. In 2011, the department revised the estimate for each of the three categories, which totaled $18.5 million. At the time of our review, FPS had spent about $1.9 million of its operating revenue for transition-related expenses. Table 5 reflects estimated and actual costs for personnel, financial management services, and IT services associated with the FPS transition. DHS has successfully transferred the majority of mission-support functions, which includes oversight of financial management services, and, according to DHS officials, is on track to hire most of the remaining new personnel by the beginning of fiscal year 2012 to provide services previously provided by ICE. However, DHS has not yet transferred IT services and does not expect to complete the transfer until October 2012. 
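The growth in the overall transition estimate cited above, from $14.6 million in 2009 to $18.5 million in 2011, and the roughly $1.9 million spent at the time of the review can be put in proportional terms with a quick calculation (a simple illustration, not GAO's analysis):

```python
# Figures from the report, in millions of dollars.
original_estimate = 14.6  # DHS's 2009 transition estimate
revised_estimate = 18.5   # DHS's 2011 revised estimate
actual_spent = 1.9        # FPS spending on transition expenses at review time

growth = revised_estimate - original_estimate        # $3.9 million increase
growth_pct = 100 * growth / original_estimate        # ~26.7% growth
share_spent = 100 * actual_spent / revised_estimate  # ~10.3% of revised estimate
print(f"{growth_pct:.1f}% estimate growth; {share_spent:.1f}% of revised estimate spent")
```

The roughly 27 percent growth in the estimate is the kind of variance that tracking actual costs against a reliable baseline estimate is meant to surface early.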
Having a reliable and valid cost estimate is important for enabling managers to make informed decisions and facilitate tracking progress against estimates to effectively manage the transfer of IT services. While DHS committed to using GAO’s best practices in preparing cost estimates in July 2008, our analysis of the cost estimate for the transfer of IT services found that it only partially met one of the four characteristics of a reliable cost estimate and minimally met the other three, as table 6 illustrates. DHS officials stated that there are no plans to revise the IT transition estimate. According to DHS officials, rather than revising the estimate, the department plans to report actual costs once the transition is complete. However, incorporating cost estimating best practices into the IT transition cost estimate could provide an improved basis for remaining IT transition investment decisions and could facilitate tracking of actual costs against estimates, both of which are fundamental to effectively managing the transfer of IT services. Since 2007, we have reported that FPS faces significant challenges with protecting federal facilities, and in response, FPS has started to take steps to address some of them. For example, our July 2009 and April 2010 reports on FPS’s contract guard program identified a number of challenges that the agency faces in managing the program, including ensuring that the 15,000 guards who are responsible for helping to protect federal facilities have the required training and certification to be deployed at a federal facility. In response to our July 2009 report, FPS took a number of immediate actions with respect to contract guard management, including increasing the number of guard inspections it conducts at federal facilities in some metropolitan areas and revising its guard training. 
Further, in our April 2010 report, we recommended, among other things, that the Secretary of Homeland Security direct the Under Secretary of NPPD and the Director of FPS to develop a mechanism to routinely monitor guards at federal facilities outside metropolitan areas and provide building-specific and scenario-based training and guidance to its contract guards. As of August 2010, FPS was in the process of implementing this recommendation. Additionally, in July 2009 we reported that FPS did not have a strategic human capital plan to guide its current and future workforce planning efforts. Among other things, we recommended that FPS develop and implement a long-term strategic human capital plan that will enable the agency to recruit, develop, and retain a qualified workforce. DHS concurred with our recommendation and is taking action to address it. In June 2008, we reported on FPS’s funding challenges and the adverse effects that the actions FPS took to address them had on its staff, such as low morale, increased attrition, and the loss of institutional knowledge. We recommended that FPS evaluate whether its use of a fee-based system or an alternative funding mechanism was the most appropriate manner to fund the agency. FPS concurred with our recommendation; however, as of May 2011, FPS had not begun such an analysis. Finally, in our 2009 High-Risk Series, and again in 2011, we designated federal real property as a high-risk area, in part, because FPS has made limited progress and continues to face challenges in securing real property. If successfully managed, the transfer of FPS to NPPD could provide DHS the opportunity to better advance progress towards addressing FPS’s challenges. 
The Under Secretary of NPPD and the former FPS Director, in written statements for the November 2009 congressional hearing on the FPS transfer, noted that the transition to NPPD would better leverage and align infrastructure protection resources and competencies to maximize their value. Further, the transition plan noted that the transfer would improve the mission effectiveness of both FPS and NPPD. According to NPPD officials, the agency has undertaken actions that serve as a foundation for integrating FPS into NPPD. First, NPPD officials explained that efforts undertaken by the senior working group and the staff working groups have served to move the transition forward and integrate the FPS organization into the larger NPPD structure. These officials explained that FPS has been established as a component within NPPD, thereby aligning FPS’s infrastructure protection mission with NPPD’s critical infrastructure protection mission. As noted in the transition plan, NPPD chairs the operations of the Interagency Security Committee, a group that includes the physical security leads for all major federal agencies and whose key responsibility is the establishment of governmentwide security policies for federal facilities. As further noted in the transition plan, these missions are complementary and mutually supportive, and the alignment resulting from the transfer improves and advances the mission effectiveness of both FPS and NPPD. Second, NPPD officials stated that FPS has begun to develop a new strategic plan to align FPS’s activities and resources to support NPPD mission-related outcomes. Our work has shown that in successful organizations, strategic planning is used to determine and reach agreement on the fundamental results the organization seeks to achieve, the goals and measures it will set to assess programs, and the resources and strategies needed to achieve its goals. 
Third, NPPD officials noted that NPPD has monthly meetings with FPS to review open GAO recommendations and is assisting FPS in closing out these recommendations. For example, in consultation with NPPD, FPS is developing a human capital strategic plan. A human capital strategic plan, flowing out of a new strategic plan, could help facilitate efforts to address previously identified challenges. Further, as we have previously reported, strategic human capital planning that is integrated with broader organizational strategic planning is critical to ensuring agencies have the talent they need for future challenges. Finally, according to the Senior Counselor to the Under Secretary of NPPD, NPPD has established a Field Force Integration Working Group, along with five other integration working groups, to pursue integration activities across the new and larger NPPD and across DHS as a whole. In addition, the Senior Counselor noted that the purpose of the group is to examine capabilities and resources from across the NPPD components to gain efficiencies and economies of scale in support of all NPPD field operations. The official further noted that FPS’s workforce and regional structure is by far the largest and most established of the NPPD components. FPS’s field structure and capabilities will be used as comparative models and resources as NPPD works toward continued integration of its operating entities. While these are encouraging steps, it is too early to tell if these planned actions will help address the challenges we have previously identified. With its critical role in protecting federal facilities against the threat of terrorism and other criminal activity, it is important that FPS’s transfer to NPPD and its related integration are successful. DHS has implemented a number of scheduling and cost estimating best practices in the FPS transition and has successfully transferred 13 of the 18 mission-support functions. 
Nevertheless, DHS could better manage the transfer of the IT services mission-support function and better inform DHS, NPPD, FPS, and congressional investment decision making. Establishing a reliable schedule and incorporating cost estimation best practices in the estimate for the transfer of IT services could provide DHS enhanced assurance that this delayed function will be transferred in accordance with its projected time frames. To help ensure that DHS and Congress have reliable, accurate information on the time frames and costs of transferring FPS from ICE to NPPD, we recommend that the Secretary of Homeland Security direct the Under Secretary for NPPD, in consultation with the Director of FPS and the Director of ICE, to

• improve the schedule for transferring IT services, in accordance with the transition plan, to reflect scheduling best practices, and

• update the IT transition cost estimate, in accordance with cost-estimating best practices.

We received written comments on a draft of this report from DHS. DHS concurred with our recommendations and stated that it is currently taking actions to implement them. With respect to improving the schedule for transferring IT services, DHS indicated that NPPD held working sessions with subject matter experts from DHS, ICE, and FPS Chief Information Officer (CIO) teams to capture all transition activities in greater detail and identify areas for implementation of best practices into schedule updates. DHS also noted that NPPD consulted with NPPD/United States Visitor and Immigrant Status Indicator Technology (US-VISIT) and adopted recommendations for schedule improvements, leveraging US-VISIT’s lessons learned toward better alignment with GAO best practices, acquisition of scheduling expertise, and acquisition of specific software tools, among other things. 
Regarding updating the IT transition cost estimate, DHS noted that NPPD is researching and resolving cost-estimating deficiencies identified in the GAO report in collaboration with the DHS CIO. The department also noted that NPPD plans to identify an alternative network design solution that may reduce transition cost, and will refine the cost estimate after discussing network design options with subject matter experts and incorporating cost-estimating best practices. Written comments from DHS are reprinted in appendix II. As agreed with your office, unless you publicly announce the contents of the report, we plan no further distribution for 30 days from the report date. At that time, we will send copies of this report to the Secretary of Homeland Security, the Under Secretary of the National Protection and Programs Directorate, the Director of the Federal Protective Service, the Director of Immigration and Customs Enforcement, and appropriate congressional committees. In addition, this report will be available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact David C. Maurer at (202) 512-9627 or maurerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. We examined the transition of the Federal Protective Service (FPS) from Immigration and Customs Enforcement (ICE) to the National Protection and Programs Directorate (NPPD). We address the following questions: (1) to what extent has the FPS transition been implemented and what related challenges, if any, did FPS and NPPD face in implementing the transition, and (2) to what extent will the transition help address previously identified challenges to protecting federal facilities? 
To determine the extent to which the FPS transition has been implemented and what challenges, if any, FPS and NPPD faced in implementing the transition, we reviewed documents related to the transition, including the August 2009 FPS-NPPD Transition Plan, all transition plan updates, DHS delegations of authority related to the execution and administration of FPS, and the memorandum of agreement, memorandum of understanding, and all service-level agreements signed among FPS, NPPD, and ICE. We interviewed FPS officials directly affected by the transition—including the FPS Deputy Director and Chief of Staff headquartered in Washington, D.C., and, in each of 6 of FPS’s 11 regional offices, the Regional Director, Deputy Director for Operations, and Mission Support Chief. We chose these offices on the basis of geographical dispersion. They included the Northwest/Arctic Region (Federal Way, Washington); the Greater Southwest Region (Grand Prairie, Texas); the Heartland Region (Kansas City, Missouri); the Great Lakes Region (Chicago, Illinois); the National Capital Region (Washington, D.C.); and the New England Region (Boston, Massachusetts). Among other things, we asked questions about their experiences regarding the transition of FPS’s mission and mission-support functions from ICE to NPPD. While the results of these interviews provided examples of FPS officials’ experiences and perspectives, they cannot be generalized beyond those we interviewed because we did not use statistical sampling techniques in selecting the regional offices, headquarters officials, and regional staff. Additionally, we met with members of the transition senior working group, including the NPPD Senior Counselor to the Under Secretary and the FPS Director, as well as interviewed members of all 16 staff-level working groups to discuss the extent to which FPS’s 18 mission-support functions had transferred from ICE to NPPD. 
The working groups included officials from FPS, NPPD, ICE, and in some groups, DHS headquarters. We compared the FPS information technology (IT) transition schedule, the IT transition cost estimate, and related documents to the practices in our Cost Estimating and Assessment Guide. We focused on the IT mission-support function because it required a significant commitment of resources, oversight, and time by DHS to complete the transition. For the IT transition schedule and the cost estimate, we scored each best practice as either being Not met—DHS provided no evidence that satisfies any portion of the criterion; Minimally met—DHS provided evidence that satisfies a small portion of the criterion; Partially met—DHS provided evidence that satisfies about half of the criterion; Substantially met—DHS provided evidence that satisfies a large portion of the criterion; or Met—DHS provided complete evidence that satisfies the entire criterion. We provided the results of our schedule and cost analyses to DHS officials and met with them to confirm the results. Based on the interviews and additional documentation provided by DHS officials, we updated the results of our analyses, as needed. We reviewed financial documentation provided by all three components reflecting transition costs such as salaries, benefits, and expenses for new personnel hired to support the FPS transition, financial management services provided by ICE, and IT deployment. 
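As a rough illustration, the five-level scale described above can be read as a mapping from the share of a criterion that the evidence satisfies to a rating. The numeric cut points below are our own approximations of the qualitative wording ("a small portion," "about half," "a large portion"), not thresholds GAO defines:

```python
def rate_criterion(fraction_satisfied: float) -> str:
    """Map the share of a criterion supported by evidence to a rating.

    Cut points are illustrative approximations of the qualitative scale,
    not published GAO thresholds.
    """
    if fraction_satisfied <= 0.0:
        return "Not met"
    if fraction_satisfied < 0.35:      # "a small portion"
        return "Minimally met"
    if fraction_satisfied < 0.65:      # "about half"
        return "Partially met"
    if fraction_satisfied < 1.0:       # "a large portion"
        return "Substantially met"
    return "Met"                       # "the entire criterion"

print(rate_criterion(0.5))   # Partially met
print(rate_criterion(1.0))   # Met
```

The value of fixing a scale like this in advance is consistency: each best practice is scored against the same evidence standard across both the schedule and the cost estimate.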
To assess the reliability of this documentation, we (1) performed electronic testing for obvious errors in accuracy and completeness; (2) compared the data with other sources of information, such as comparing payroll reports to payroll data and comparing cost data from the ICE Office of Financial Management with documentation from the Intra-Governmental Payment and Collection (IPAC) system; and (3) interviewed agency officials knowledgeable about financial management and budgeting at all three agencies to discuss transition-related expenses incurred at the time of our review and to identify any data problems. When we found discrepancies (such as data entry errors), we brought them to the officials’ attention and worked with them to correct the discrepancies before concluding our analysis. We found the cost data to be sufficiently reliable for the purposes of this review. To determine the extent to which the transition will help address previously identified challenges to protecting federal facilities, we reviewed prior GAO reports and testimonies related to FPS’s facility protection efforts and spoke with NPPD officials about FPS’s ongoing challenges in this regard. We also reviewed and analyzed documentation, such as the transition plan, testimony from key senior leaders in NPPD and FPS provided for a hearing on the FPS transition, FPS’s strategic plan, and NPPD’s strategic activities report. Finally, we interviewed the Senior Counselor to the Under Secretary of NPPD and the FPS Deputy Director for Operations and Chief of Staff, and discussed actions underway or planned to further integrate FPS into NPPD. We conducted this performance audit from October 2010 through July 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
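The cross-source comparison step described above, matching one system's figures against another's and flagging differences for follow-up with officials, can be sketched as a simple reconciliation. The periods and dollar amounts below are hypothetical, not actual FPS payroll figures:

```python
# Hypothetical monthly totals (in $ thousands) for the same expenses,
# drawn from two separate systems being cross-checked.
payroll_report = {"2010-01": 152.3, "2010-02": 148.9, "2010-03": 160.0}
payroll_data   = {"2010-01": 152.3, "2010-02": 149.4, "2010-03": 160.0}

def reconcile(source_a, source_b, tolerance=0.0):
    """Return (period, value_a, value_b) for every period where the two
    sources are missing data or disagree beyond the tolerance."""
    periods = sorted(set(source_a) | set(source_b))
    return [
        (p, source_a.get(p), source_b.get(p))
        for p in periods
        if source_a.get(p) is None
        or source_b.get(p) is None
        or abs(source_a[p] - source_b[p]) > tolerance
    ]

discrepancies = reconcile(payroll_report, payroll_data)
print(discrepancies)  # [('2010-02', 148.9, 149.4)]
```

Flagged periods would then be raised with the officials responsible for the data, mirroring the report's step of bringing data entry errors to their attention before concluding the analysis.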
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Sandra Burrell, Assistant Director, and Valerie Kasindi, Analyst-in-Charge, managed this assignment. Don Kiggins made significant contributions to the work. Gary Mountjoy provided expertise on IT issues and Jack Warner provided expertise on financial management issues. Tracey King provided legal support. Michele Fejfar assisted with design and methodology and Karen Richey provided expertise on cost estimation and scheduling best practices. Katherine Davis provided assistance in report preparation and Robert Robinson developed the report’s graphics.
Events such as the February 2010 attack on the Internal Revenue Service offices in Texas and the shooting in the lobby of the Nevada federal courthouse demonstrate the vulnerability of federal facilities and the risks to the safety of the federal employees who occupy them. The Federal Protective Service (FPS) is the primary agency responsible for the security of over 9,000 federal government facilities across the country. The fiscal year 2010 DHS appropriations act transferred FPS from Immigration and Customs Enforcement (ICE) to the National Protection and Programs Directorate (NPPD), within the Department of Homeland Security (DHS). This report addresses (1) the extent to which the FPS transition has been implemented and any remaining related challenges, and (2) the extent to which the transition will help address previously identified challenges to protecting federal facilities. GAO reviewed the 2009 FPS-NPPD transition plan; agreements among FPS, NPPD, and ICE; and best practices for scheduling and cost estimating, and interviewed DHS officials. Since October 2009, FPS's facility protection mission and 13 of 18 mission-support functions have transferred from ICE to NPPD; however, the transition schedule for the 5 remaining mission-support functions has been delayed. For example, while functions such as human capital and budget formulation have been transferred, information technology (IT) services, business continuity and emergency preparedness, facilities, personnel security, and equal employment opportunity have not. In August 2009, DHS reported to Congress that the transition of these functions would be completed by October 2010. DHS now reports that it plans to complete the transfer of 4 of the 5 remaining mission-support functions by September or October 2011, and estimates that the transfer of IT services will not be complete until October 2012. DHS developed a transition plan to guide the planning and execution of the transfer. 
Among other things, the plan called for schedules with detailed tasks and end dates to be developed for all mission-support functions to ensure critical path activities were identified, managed, and resourced. DHS also developed a detailed schedule to manage the transfer of IT services, as called for in the transition plan. However, GAO's analysis of the schedule found that it did not reflect GAO's best practices for scheduling, such as capturing, sequencing, and assigning resources to all activities necessary to accomplish the work. When a schedule does not accurately reflect the project, it will not serve as an appropriate basis for analysis and may result in unreliable completion dates and delays. As of May 2011, DHS estimated that it would cost $6.2 million to complete the IT transition. GAO's analysis of this cost estimate found that it did not meet all the characteristics of a reliable cost estimate. For example, the estimate was not well documented because it was not supported by a detailed explanation describing how the estimate was derived and did not include sufficient detail for GAO to corroborate it. By incorporating cost estimation best practices for the IT transition cost estimate, DHS could enhance the estimate's reliability and better inform decisions about the cost to complete the transition. The transfer of FPS to NPPD could provide DHS the opportunity to better advance progress towards addressing FPS's challenges to protecting federal facilities that have been previously identified by GAO. Since 2007, GAO has reported that FPS faces significant challenges with protecting federal facilities. The transition plan noted that the transfer of FPS to NPPD would improve the mission effectiveness of both agencies. NPPD officials explained that the agency has undertaken actions that serve as a foundation for integrating FPS into NPPD. 
For example, FPS has begun to develop a new strategic plan to align FPS's activities and resources to support NPPD mission-related outcomes. Additionally, NPPD is assisting FPS in developing a human capital strategic plan, as recommended by GAO in July 2009. These steps are encouraging, but it is too early to tell if these planned actions will help address challenges previously identified by GAO. GAO recommends that DHS improve the schedule for transferring IT services to reflect scheduling best practices, and update the IT transition cost estimate, in accordance with cost-estimating best practices. DHS concurred with GAO's recommendations.
DOD’s S&T community—including research laboratories, test facilities, industry, and academia—conducts initial research, development, and testing of new technologies to improve military operations and ensure technological superiority over potential adversaries. Key expectations DOD places on its S&T community include the following:

• expand scientific knowledge and investigate technologies that may provide new warfighting capabilities,

• anticipate technological needs for an uncertain future, and

• produce relevant and feasible technologies that can transition into weapon system programs or go directly to the warfighter in the field.

As a result, some investments focus on conducting research to generate scientific knowledge, exploring new technologies, demonstrating the feasibility of a technology concept, and pursuing other science and technology endeavors. We have previously reported that the challenge is finding the right balance between developing breakthrough or “disruptive” technologies—those considered to be innovative—and investing in moderate, “incremental” technology enhancements. Figure 1 below provides a notional picture of how DOD’s S&T community manages technology investment, development, and transition to a user. Following technology development, DOD’s acquisition community manages the next phase, product development, in which technologies are further advanced and system development begins. DOD has long reported the existence of a chasm between its S&T community and the acquisition community, which often precludes effective transitioning of technologies out of the S&T environment into weapon systems. In a series of reports, we found that technologies may not leave the lab because their potential has not been adequately demonstrated or recognized, acquisition programs may be unwilling to fund final stages of development, or private industry chooses to develop the technologies itself. 
Further, we found that the acquisition community frequently integrates technologies too early and takes on the task of maturing technologies—an activity that is the primary responsibility of the S&T community—at the start of an acquisition program. These challenges, in part, contribute to cost growth, schedule delays, and performance shortfalls that we have frequently found and reported on in DOD weapon programs. DOD funds technology and product development activities under its research, development, test, and evaluation (RDT&E) budget, which DOD groups into seven budget activity categories for its annual budget estimates. The categories follow a mostly sequential path for developing technologies from basic research to operational system development, as is shown in figure 2. The first three budget activity categories generally represent activities undertaken by DOD’s S&T enterprise to advance research and develop technology, while the remaining budget activity categories are typically associated with product development for acquisition programs. See Appendix II for a description of each budget activity. Selected leading companies that we reviewed follow six key practices that together reflect a disciplined approach to managing their R&D activities— those akin to DOD’s S&T activities. First, they define their corporate strategy by identifying desired markets. Next, they invest in technology programs to penetrate those desired markets. Effective management of these portfolios requires balancing investments between two types of R&D efforts: incremental R&D, which is tied to near-term products; and disruptive R&D, which is intended to deliver innovative technologies that can provide longer-term growth. According to company representatives, this balance is driven by the business imperative of sustaining current markets while also developing future ones. 
Leading companies align their goals for incremental technology development with product development, while also providing independent paths for developing disruptive technologies not tied to product development. In addition, these companies identify stakeholders outside the scientific realm and collaborate extensively with them to ensure that technologies are relevant and can be efficiently integrated into marketable products. Among the key R&D stakeholders are representatives from the business units who are responsible for identifying customer needs and getting products to market. They also scale the rigor of project oversight based on the amount of time and money invested. Regardless of scale, however, leading companies expect all R&D projects to include prototyping or other demonstrations to prove out the technology before it is integrated into a product for the company to sell. Figure 3 below summarizes the general management process these leading companies use to plan and execute their R&D investments. Among the eight leading companies we reviewed, each manages R&D investments that are underpinned by defined strategies, markets, and financial goals. In addition, each company sets aside a percentage of company revenues to fund R&D. Each company’s strategic direction is set by the Chief Executive Officer (CEO), in coordination with the company’s top executives, including the Chief Technology Officer (CTO) or other senior R&D executives. These corporate strategies balance near-term profitability with long-term growth potential and market expansion. Companies stay competitive by dividing their collection of R&D projects, also known as their R&D portfolios, into two categories: 1. Incremental R&D: lower-risk projects to be integrated quickly into near-term products. 2. Disruptive R&D: projects that carry a higher risk of failure, but offer significant rewards for the company in the long term. 
These investments may lead to non-incremental innovations that become an important piece of their portfolio. In some cases, these technologies render competing products obsolete by creating new markets or displacing existing product lines. According to representatives of leading companies, around 80 percent of R&D funding is spent on incremental development, while the balance is spent on disruptive projects. Corporate leadership determines this percentage based on tolerance for risk and the company’s financial standing. In addition to these two portfolio types, leading companies provide scientists with the flexibility to work on lower-cost exploratory projects. Such work is conducted by a few scientists or researchers and is not part of the annual process for approving projects within each portfolio. However, the work derived from these efforts could eventually become part of incremental or disruptive R&D portfolios. In determining an appropriate balance between incremental and disruptive R&D investments, company leaders consider long-term scenarios based on current trends and technologies in the market. For example, R&D leaders at Siemens reported that they conduct an annual “Innovation Review” of the company’s entire technology development portfolio for the purpose of informing top leadership’s strategic decisions. These reviews evaluate Siemens’ technological competitiveness, strategic resource allocation, and long-term corporate strategy. During this review, Siemens asks a number of questions, including the following: Are the overall resource allocations for R&D investments appropriate? Is the business unit’s technology position competitive, and will planned investments safeguard the business unit’s technological competitiveness? Is there a convincing long-term strategy for how to translate these investments into sustainable business success? Is there an adequate strategy for translating new technologies into winning offerings? 
Siemens executives reported that they understand that these factors directly impact their ability to grow and profit as a company, which is why the desire to remain technologically competitive drives corporate strategy decisions. Siemens also creates forecasting tools called “Pictures of the Future” that provide graphical representations of how future technologies could be used by customers 7 to 15 years in the future. Figure 4 provides an example of the elements that Siemens includes in a Picture of the Future. According to Siemens, it uses Pictures of the Future to assess societal, technological, and other trends to guide visionary concepts for potential new markets and customer needs; consider existing product lines, technologies, and customer needs; analyze the opportunities and risks for the company’s core business; identify what is required to allow the company to act upon potential future scenarios. Most importantly, company representatives stated that these pictures help develop consensus within Siemens regarding the technologies the company needs to develop to drive innovation and remain a market leader. After leading companies settle upon their corporate strategies for R&D investment, different units are charged with sponsoring—approving and funding—incremental and disruptive R&D projects. Incremental R&D projects are typically sponsored by business units, who are also responsible for product development. Disruptive R&D is often sponsored by a corporate research organization, which makes project investment decisions independently from the business units. Figure 5 shows this division of R&D that we observed in the private sector. Selected leading companies we reviewed align plans for developing new incremental technologies with plans for developing future products, which companies sometimes refer to as roadmapping. 
Individual business units are responsible for product sales in a company and have their own executive management teams that are charged with generating profits from their product lines. Business units sponsor incremental development projects intended to yield technologies to meet identified customer needs. Incremental R&D generally adds new capabilities to current products or next-generation versions of existing products; therefore, companies expect these projects to carry a lower risk of failure. Depending on the industry, business units generally do not look beyond a 5-year timeframe when making decisions about these new technologies due to the unpredictable nature of the markets in which they operate. These leading companies document planned future products in product roadmaps, while the technologies that are to be integrated into those products are documented in technology roadmaps. By aligning these plans, business units can better identify and prioritize technology development investments. To develop these plans, companies solicit ideas and information from people across the organization to determine the composition of incremental R&D portfolios. Ultimately, however, technology development decisions come down to management’s qualitative judgments regarding the merits of individual R&D projects, as well as quantitative metrics such as potential return on investment. Once a project is approved, it may be immediately funded and executed. The process and the number of people involved in these R&D investment decisions vary depending on the company. In general, these leading companies solicit input from top leadership responsible for setting the company’s overall strategy and funding, including the CEO, CTO, and other corporate leaders; representatives from business units responsible for getting relevant products to market; and scientists and technologists who plan for future technology development and identify when technologies are ready for integration into products. 
Technology and product development teams at Honeywell Aerospace, for example, complete an annual roadmapping process to align incremental technology development activities with the company’s product plans. As part of this process, business units identify customer needs; marketing and product management staff review market trend information, and determine future products and when they must be completed; and Honeywell’s corporate research organization reviews external technology trends and creates technology roadmaps. Honeywell Aerospace’s roadmapping process is illustrated in figure 6. Honeywell’s roadmapping process focuses on needs and trends projected for the next 10 years, with less emphasis on the more distant future, due to the difficulty in making reliable predictions that far in advance. In general, the leading companies we met with consider this timeframe to be realistic and manageable for planning incremental R&D investments. To assist with management decisions regarding what incremental R&D projects to start or change, Honeywell considers potential revenues in the next 5 years. However, certain technologies, such as those associated with jet propulsion engines, require longer-term plans because technology development takes many years to complete. Although the company’s technology development plans do not extend beyond the next 10 years, company officials reported they do review industry trends potentially leading to new developments further in the future and develop concept ideas for new technologies based on these trends. Honeywell obtains input from a variety of sources to ensure technologies will be feasible for integration into future products and relevant to customer needs. Technology development plans, or roadmaps, are periodically revised based on changes in customer needs, prototyping results not meeting expectations, and other changes in circumstances requiring additional consideration. 
According to company representatives, Honeywell stays in close communication with customers to ensure the company’s understanding of market needs remains accurate, which helps the company avoid wasting time and money on projects that have lost relevance. Selected leading companies we reviewed also ensure that a portion of their R&D is independently focused on futuristic concepts, which are intended to keep the companies competitive in the long term. Disruptive R&D includes significant technology development efforts addressing the anticipated customer needs of the future, potentially leading to products that render the competition’s products irrelevant in the marketplace. The disruptive R&D portfolio is initiated separately from incremental portfolios and often managed by a corporate research organization. Corporate research looks for solutions that provide customers with capabilities they may not realize they need or want. By organizing disruptive R&D separately from incremental R&D, companies are able to protect funds from near-term-focused business unit managers. Generally speaking, companies in our review ensure that disruptive R&D is planned and executed by management not averse to taking risks when significant long-term rewards are possible. Due to their near-term focus, business unit managers usually begin having significant influence over disruptive technologies only after they have been demonstrated and are ready to begin transitioning into products. Allowing exploratory and disruptive technology development to occur without requiring product development approval helps prevent the company’s products from becoming obsolete and gives the company the potential to capture new markets. These companies use various approaches to leverage the ideas of their own staff and external partners to innovate for futuristic technologies that look beyond product roadmaps. 
These approaches include challenging R&D staff to come up with feasible ideas to create disruptive technologies leading to entirely new product lines, and investing in external startup companies or leveraging externally developed technologies. Companies seek to ensure longer-term competitiveness by challenging their R&D staff to make scientific advances that will make existing products irrelevant in the future. IBM, for example, issues grand challenges to R&D staff to develop these kinds of technologies. Company representatives stated that about 15 to 20 percent of IBM’s R&D funding goes toward development of disruptive new technologies not aligned with known customer needs. While all of IBM’s disruptive technology development is executed by its corporate research organization, IBM business units provide funding to sponsor these efforts. To generate ideas for disruptive R&D projects, IBM management issues “Grand Challenges” asking for project proposals. Figure 7 below provides information on how IBM’s corporate R&D organization independently developed the IBM Watson supercomputer as a result of one such Grand Challenge. Amazon also provides a number of avenues for individual staff members to submit innovative R&D project proposals that are not necessarily tied to defined customer needs or market trends. For example, company representatives explained that Amazon holds week-long innovation forums, where R&D staff collaborate to develop new project ideas. R&D staff then vote on the best ideas, which are submitted to top company leadership for approval. Individual scientists or technicians can also propose new projects or ideas directly to their supervisors, who may help them develop formal proposals. GM initiates disruptive projects through its corporate research organization by funding “Internal Startup” projects. These are disruptive technology development projects initiated solely at the discretion of GM’s CTO and the director of GM’s R&D labs. 
Aside from the project teams, these are the only GM employees who know the details of these projects. This level of confidentiality allows GM to take risks. GM expects these projects to have features that deliver phenomenal value to customers, make existing technology obsolete, and make the competition irrelevant. Any R&D staff member can propose this kind of project, although its potential value to the company must be defined in the proposal. GM also provides these projects with more funding than typical projects so that R&D work can progress about three times faster than usual. As these projects are generally high in technical risk, GM officials stated that they have about a 50 percent success rate, which they considered acceptable given the value associated with successful projects. Leading companies also encourage their scientists to explore and initiate low-cost research projects, either as unfunded side-projects or using limited resources following approval from a supervisor. Providing this flexibility allows scientists to be creative, while also affording them access to the company’s laboratory or other resources. These less intensive, inexpensive projects do not require senior management approval, which spares scientists unnecessary administrative burden during exploratory development. If scientists and management deem a technology to be feasible, it will be referred to the company’s R&D project approval process for additional funding. Sometimes leading companies develop technologies that may be useful in products beyond their preferred markets. In such instances, these companies seek to maximize the value of their R&D investments through external partnerships. This may occur through investments in startup companies or licensing arrangements. 
For example, Siemens provides alternate paths for innovative technologies to move into products outside of its own product lines by co-founding start-up companies that could turn them into products, thereby allowing Siemens to benefit financially when technologies it developed are used in other companies’ products. Siemens also licenses some of its technologies to other companies so they can be used in their products. Conversely, sometimes leading companies seek technologies from outside firms that show promise for use in the company’s own products. The Dow Chemical Company uses a Corporate Venture Fund to make investments in companies that have formed to commercialize new technologies. In many cases, Dow assumes a minority position, although in some cases, Dow may elect to acquire the company or partner without financial investment to mutually develop technologies. Dow scientists may work alongside R&D staff from these companies or Dow may just choose to be an investor. Dow employs technology scouts around the world responsible for finding opportunities to bring innovative technologies into Dow. These scouts are responsible for learning everything they can about a company before Dow proceeds with the investment or partnership. At the selected leading companies we reviewed, once an R&D project is initiated, the research team in charge of the project actively collaborates with stakeholders outside the R&D office to help inform project execution efforts. These stakeholders typically include product development staff and engineers who understand technical requirements for technologies to eventually transition beyond R&D; marketing staff familiar with how products might fit into the marketplace; business unit staff who interface directly with customers; and potential users of the technology. The level and timing of stakeholder involvement can vary based on the type of project. 
Stakeholders, such as product development staff, typically become involved early in incremental R&D projects. On the other hand, those same stakeholders might not get involved in disruptive projects until later phases of development. Collaboration between stakeholders continues even after the technology development effort concludes and a business unit begins product development. R&D staff may continue to assist product development efforts after technology development is completed as products are customized for different types of customers. For example, figure 8 describes how a cross-functional team at Dow Chemical collaborated to develop a new polymer. Leading companies also look outside the company when undertaking an R&D project to gain insights from potential customers. These customers provide input and perspectives that help inform refinements to technologies. Figure 9 details the process Honeywell Aerospace used to seek customer input when developing its Synthetic Vision System. Company representatives explained that input from both internal stakeholders and potential customers helps R&D staff to transition emerging technologies into product roadmaps. This collaboration also helps the R&D project team obtain the requisite information and resources it needs to develop a technology that is feasible for use as part of future products, while also being relevant to future customer needs so it is accepted in the marketplace. At the selected leading companies we reviewed, R&D projects are more rigorously reviewed by higher levels of management as their needs for staff and funding grow. This leads to a subset of these projects continuing into later stages of development while others are ended. The Dow Chemical Company, for example, uses a stage-gate process to oversee R&D projects requiring significant investments. At each stage of development the project’s funding increases and the hurdles for moving forward become greater. 
Dow requires only minimal oversight for low-cost exploratory research. Once a scientist proves a technology’s feasibility, the project enters Dow’s normal processes for integrated project oversight and portfolio management involving all of the relevant Dow stakeholders. Later-stage development projects generally have higher budgets and are more closely monitored to ensure they meet specific technical criteria and time-based milestones, according to company representatives. While specific review processes vary among leading companies, figure 10 outlines the key principles that all these different processes embody. IBM also emphasizes relevant stakeholder participation in reviews for R&D projects progressing beyond exploratory or disruptive corporate research. After projects are initiated, IBM leadership reviews R&D projects at least quarterly during business reviews, although company representatives noted they may be reviewed more frequently in some cases when more attention is warranted. IBM leadership believes—according to representatives we interviewed—that it must have agile review processes that facilitate timely adjustments or, in certain cases, terminations of projects. The selected leading companies we reviewed consider technology demonstrations, or prototyping, an inherent part of R&D and use demonstrations for a variety of purposes, including to create demand by convincing stakeholders or customers of the potential value of a future technology; and to obtain feedback from potential end-users to add knowledge and improve technologies. Both incremental and disruptive R&D projects receive funds to demonstrate technologies. Figure 11 depicts the technology demonstration process of the leading companies. Leading companies demonstrate concepts for new disruptive technologies to stakeholders to generate demand so business units will contribute to technology development. 
Specifically, once corporate research organizations believe technology components are sufficiently mature for product development, they offer demonstrations to show their potential value to stakeholders in the business units. For example, Siemens uses prototypes once a technology reaches the point that its components can be validated in a laboratory or relevant environment. Siemens representatives explained that should the demonstration prove successful, a business unit assumes development responsibilities. Leading companies also demonstrate these concepts for disruptive technologies to potential customers if business units are hesitant to invest in further development. For example, Qualcomm corporate research representatives told us they must sometimes overcome internal resistance to accepting new technologies by demonstrating their value to both internal and external customers. Barriers to transition of disruptive technologies may be even more prevalent when developing technologies that could lead to dropping an existing product or feature. In its role as an R&D component, not a business unit focused on product lines, corporate research works with both internal and external customers without committing to future products but with a clear pathway to adoption if successful. Qualcomm representatives explained that if these early demonstrations and advocacy for disruptive technologies prove successful, then customers may ask product developers to use them in future products. Once a concept is proven, leading companies use technology demonstrations to inform developers how a technology needs to be improved. Company representatives explained that rapidly developing and demonstrating a series of iterations of a new technology provides early opportunities for improving the technology, rather than spending long periods developing technologies without testing them. 
Figure 12 depicts the iterative technology development and demonstration process that leading companies use. When Amazon pursues a product or technology, the company already has an idea of its customers’ needs, but does not consider this information to be complete in the absence of user feedback from prototype demonstrations. To ensure technologies address these needs, Amazon representatives stated that the company builds iterations of prototypes during technology development in a facility specifically designed for doing so quickly and at minimal cost. Figure 13 illustrates how these early prototypes are generally used by Amazon employees in real-world settings to help inform technology development. Similar to Amazon, IBM’s development method uses prototyping during technology and product development. IBM developers show early iterations of new technology to customers, improve the design based on feedback, and then produce another prototype after that. This process repeats until a releasable product is completed. IBM representatives found that development models with longer sequential steps and a single deliverable are less useful than faster-paced models with smaller deliverables under shorter time frames. Companies may also use external demonstrations of maturing technologies to prove they work in realistic environments. For example, Valvoline produces and tests small volumes of new motor oil formulas in its own laboratory or at a few select customers’ facilities. To obtain customer insights from prototyping, Valvoline develops data and uses cameras on test engines to demonstrate how new formulas perform; uses customer test engines to conduct tests and provide specific data to customers on how new formulas perform in their engines; and provides test formulas to outside customers and obtains feedback through sales staff and focus groups. Offering external customers an opportunity to test new technologies provides important feedback for companies such as Valvoline. 
By taking steps like these to involve outside customers in prototype testing, representatives of leading companies stated they are able to identify a new technology’s tangible benefits, while also encouraging eventual customer acceptance of new products. While some DOD S&T practices closely mirror those of the selected leading companies we reviewed, DOD’s funding policies and culture limit its ability to adopt other practices for managing its S&T investments. Unlike the companies we reviewed, DOD does not organize and fund incremental and disruptive innovation separately. Nor does its leadership provide guidance on or assess how these innovation investments should be or are mixed. Instead, S&T officials explained that DOD labs face pressures to prioritize near-term requirements at the expense of potentially disruptive technologies. As a result of DOD funding policies, projects are planned 2 years in advance, which can slow innovation and limit lab directors’ autonomy to initiate work. While Congress has provided authority that, as implemented, has enabled the military department lab directors to initiate work outside of the normal lengthy process, DOD has not fully utilized these flexibilities. Additionally, we found that divided responsibilities for technology versus product development contribute to a culture that does not encourage collaboration between DOD’s S&T and acquisition communities and limits the S&T community’s ability to conduct advanced prototyping. These issues are not insurmountable, however, as demonstrated in pockets of each military department. In recognition of these and other issues, Congress has required that DOD create a new Under Secretary of Defense for Research and Engineering charged with advancing defense technology and innovation and establishing policies on technology development, prototyping, and experimentation, among other responsibilities, by February 2018. 
Some of DOD’s practices for managing and executing S&T investments closely resemble those employed by the selected leading companies we reviewed. DOD has a corporate research organization for disruptive innovation, and its leadership defines S&T strategies to guide investments, practices consistent with those the companies we reviewed used to manage their incremental and disruptive portfolios. DOD project oversight is scaled based on the scope of investment, which aligns closely with leading company practices. While differences exist between how these practices are implemented in DOD and at the companies we reviewed, we found that the outcomes are the same. The Defense Advanced Research Projects Agency (DARPA), for example, closely resembles the corporate research organization that many leading companies employ to foster disruptive innovation. In the President’s budget submission for fiscal year 2017, DARPA requested $2.9 billion, about 23 percent of DOD’s S&T budget request. Similar to a company’s corporate research organization, DARPA’s projects are generally not tied to existing DOD weapon systems or a specific military department requirement. Instead, its mission is to produce disruptive innovation that could support any military department. The DARPA Director makes all funding decisions and project prioritizations. DOD’s market research, which informs its S&T strategy, is based on near- and far-term adversarial threats, capability needs, and warfighter requirements. While these inputs may differ from those of the companies we reviewed, they are likewise used to prioritize S&T projects. DOD does not necessarily use the same metrics as companies to evaluate projects, but its labs similarly scale the scope of their project reviews based on the maturity of the technology and scope of investment. 
For example, a lab typically reviews basic research projects once a year, while officials said more mature, larger investments are reviewed multiple times per year by the lab, its customers, and military department leadership. Similarly, both DOD and the companies we reviewed assess projects based on their cost, schedule, and technical performance requirements. One inherent difference is that leading companies are concerned with potential financial returns on investment, whereas DOD prioritizes for other reasons, such as whether the technology carries potential to reduce risk to the warfighter. Although the selected leading companies we reviewed define in their strategies the annual mix of investments in incremental and disruptive innovation, the military departments neither define nor assess such a mix. The office of the ASD(R&E)—the Office of the Secretary of Defense (OSD) organization responsible for establishing DOD S&T policy and guidance—does not provide guidance to the military departments on the mix of incremental and disruptive S&T investments. The military departments are responsible for defining their own S&T strategies, formulating and managing budgets, and developing technologies. Their S&T strategies, however, do not define the mix of incremental and disruptive investments each department should make annually. Instead, DOD S&T investments are organized and funded based on budget activities (BAs) that reflect stages of technology maturity. DOD uses this approach, in part, because the Financial Management Regulation dictates how R&D activities are identified for the purposes of budgeting. We found that one limitation of funding under BAs is a lack of visibility into whether individual projects that labs and research centers invest in are geared toward disruptive or incremental innovation. The Financial Management Regulation, however, does not preclude DOD from developing investment targets for both incremental and disruptive R&D. 
Military department lab and center officials we interviewed, however, identified certain projects they were working on that could lead to disruptive technologies. Officials from these labs and centers acknowledged that they struggle to determine the right balance between disruptive and incremental innovation projects. They expressed concern that military department leadership responsible for setting requirements for and approving S&T spending is, at times, more focused on near-term, less risky, more incremental types of innovation investments at the expense of long-term, disruptive innovation. The Navy is one military department that has taken steps to ensure funding for some investments in disruptive innovation. The Navy organizes S&T investments around “strategic buckets” to ensure it maintains investments in both near- and long-term projects and protects funding for potentially disruptive projects. The distribution of resources is determined by senior leadership based on the Navy’s S&T strategy. The strategy maps out roughly the minimum percentage of funding that the Navy plans to request for high-priority, disruptive projects within its S&T portfolio, as reflected in figure 14 below. In fiscal year 2017, the Navy plans to invest more than $313 million in Leap Ahead Innovations, which are intended to be disruptive technologies that deliver transformational warfighting capabilities. These are in addition to investments in other disruptive technologies that are categorized, but not quantified, under its other strategic buckets. In a June 2017 report, we recommended that DOD take steps—such as the Navy’s—to help ensure adequate investments in innovation that align with DOD-wide strategy to overcome the department’s risk-averse culture and pressures to focus on near-term projects. 
In comparison to the practice at selected leading companies we reviewed, which annually align their investments to product goals, DOD’s process for prioritizing and funding projects takes longer—almost 2 years to complete—which we found can slow innovation. Like every other good and service DOD acquires, all S&T investments must follow DOD’s planning and budgeting policy. This policy is underpinned by DOD’s Planning, Programming, Budgeting, and Execution (PPBE) process. The PPBE process for S&T investments includes the following stages:

Planning: DOD leadership, in guidance and planning documents, identifies strategic priorities, weapon system requirements, and adversarial threats. Collectively, these serve as DOD’s broad requirements for technology development.

Programming: S&T organizations give consideration to those requirements and propose technology development projects to address them. Proposed projects and associated costs are documented in Program Objective Memorandums (POM). Each organization is tasked with determining which projects to propose in the POM, while maintaining balance across its portfolio of investments, as well as maintaining an appropriate mix of funding based on BA. POM documents are reviewed by senior officials across DOD—including those responsible for setting requirements and the budget—who also have a role in prioritizing S&T investments.

Budgeting: Each S&T organization’s POM is used to formulate its respective military department’s Budget Estimate Submission (BES), which outlines the total funding needed, including how much will be needed by budget activity. After the President’s budget is submitted, Congress enacts an appropriation. Once funds are appropriated, each S&T organization is provided funding for the projects approved in the POM and BES.

Execution: S&T organizations carry out funded projects.

Figure 15 illustrates the notional time frames for DOD’s PPBE process. 
In total, it can take almost 2 years from the time a project is proposed in the POM to the time it is funded. In contrast, the companies we reviewed reported that they planned projects in the same year they were executed, which helped them quickly respond to leaps in technology development. S&T officials we met with stated that the 2-year project planning process reduces their ability to be as nimble as the companies with whom we met. For example, if an unexpected technology breakthrough is identified through 6.1 or 6.2 research, the labs may have to wait up to 2 years before they can begin work on a follow-on project. DOD S&T executives expressed the need for greater flexibility with initiating new projects because the pace of technology development can be rapid and planning for S&T spending 2 years in advance can hinder innovation. They stated, however, that the PPBE process provides Congress with the information it needs to maintain oversight and ensure DOD meets its fiduciary responsibilities to the taxpayer. We found that laboratory and research center directors in the military departments have less authority to initiate S&T work that is not directly linked to defined near- or far-term capability needs as compared to the leading companies we reviewed. While the Director of DARPA approves every project the agency undertakes and is not beholden to address defined requirements, the military departments’ labs and centers do not control all of the S&T-related R&D work they perform annually. These labs and centers regularly undertake work on behalf of acquisition community customers, such as a major defense acquisition program, that provide funding in support of the project. This work comes in addition to “direct funded” projects, approved and funded to the lab or center through PPBE, that are intended to address S&T requirements outlined in strategy. 
As a result, both the direct funded and customer-funded projects compete for lab resources, such as staff, and must be balanced. In fiscal year 2015, for example, direct funded projects accounted for 19 percent of the Naval Research Laboratory’s (NRL) $1.2 billion of funding. The other 81 percent was customer-funded work from Navy, other DOD, or other governmental sources. Despite having direct funding, the POM review and approval process may constrain which projects are ultimately funded. For example, we found that the projects a lab or center proposes in its POM submission may be reviewed by as many as four different organizations before they are submitted to OSD. Lab officials explained that it is during this review process that the culture within DOD, at times, favors near-term, customer-driven projects at the expense of far-term disruptive projects. Regardless of the source of the funding, a senior ASD(R&E) official explained that S&T investments are intended to address some defined capability need. This means that the military departments’ disruptive technology projects are roadmapped to requirements, which differs from the practice at the leading companies we reviewed. This may limit the labs’ ability to address undefined customer needs through other potentially disruptive technologies. Section 219 of the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009, as implemented, has provided defense lab directors with some limited flexibility to initiate S&T projects, including those that are not roadmapped to defined requirements, outside of the normal 2-year planning process. 
Specifically, as amended, the law directs the Secretary of Defense, in consultation with the Secretaries of the military departments, to establish mechanisms under which the director of a defense laboratory may use an amount of funds equal to a certain percentage of all funds available to the laboratory for the following purposes:

innovative basic and applied research that is conducted at the defense laboratory and supports military missions;

developing programs that support the transition of technologies developed by the defense laboratory into operational use;

workforce development activities that improve the capacity of the defense laboratory to recruit and retain personnel with needed scientific and engineering expertise; and

revitalization, recapitalization, or minor military construction of laboratory infrastructure.

While this authority directed the creation of a mechanism that may provide lab directors with the means to fund projects they consider to be a priority, the military departments have not maximized their use of these authorities. Until the passage of the National Defense Authorization Act for Fiscal Year 2017, the director of a defense lab could use funds equal to not more than 3 percent of all funds available to the defense laboratory for S&T activities. Each of the services has unique strategies for executing section 219 authorities, but DOD reported that the full 3 percent of funds available to the labs has not been used. DOD officials told us that the full 3 percent available to each defense laboratory has not been used for a number of reasons, including competing S&T funding priorities. Additionally, DOD officials indicated that labs had concerns about charging customers a fee to fund such S&T activities, which was a factor in the different amounts of funds available for section 219 purposes, as shown in figure 16. 
During our review, Congress amended the Section 219 authority in the National Defense Authorization Act for Fiscal Year 2017 to permit lab directors to use an amount of funds not less than 2 percent but not more than 4 percent of all funding available to the lab. As a result of the change, the military departments can increase the amount of section 219 funds that the labs may obtain. Using these funds for new projects could help the labs initiate projects, including those that may be “off-roadmap,” faster than through the PPBE process. While the selected leading companies we reviewed ensure close collaboration between stakeholders in technology and product development, cultural barriers have limited such collaboration within DOD. DOD’s funding policies reflect cultural barriers to collaboration between the S&T community and its product development stakeholders. Under DOD’s funding model, the labs are responsible for technology development associated with BAs 6.1 through 6.3, while stakeholders in the acquisition community are traditionally responsible for product development, which begins with prototype activities under BA 6.4. Although we found that S&T projects funded by the acquisition community obtain collaborative input from those same eventual customers, this was not the case for direct-funded projects—those initiated by the lab. Lab officials explained that for direct-funded projects, they may consult with potential customers in the acquisition community to gauge interest before starting a project or to present results after it is complete, but those customers are not part of the development team. This approach is, in part, attributable to these organizations being separate—both in mission and in the type of funding they receive. For example, unlike the companies we reviewed, S&T officials explained that they do not transition scientists and engineers along with the technologies they developed to the acquisition programs. 
The companies we reviewed reported that they set the expectation that both technology and product development staff work together. DOD, however, has not established a formal policy on how these two communities should collaborate on projects to overcome these cultural barriers. DOD has processes to help its research labs and centers collaborate on S&T work, but these processes do not emphasize collaboration between the S&T and acquisition communities. In 2014, ASD(R&E) revitalized its “Reliance 21” framework—a joint planning and coordination process that is intended to ensure DOD’s S&T community provides solutions and advice to the department’s senior-level decision makers, warfighters, Congress, and other stakeholders. This is to be accomplished, in part, through groups of technical experts organized around 17 technical areas referred to as Communities of Interest (COI). Originally formed in 2009, COIs provide DOD with a mechanism for experts in technical areas, such as cyber or space, to coordinate and communicate what S&T-related R&D each military department is working on and identify areas for collaboration. Each COI is supposed to conduct a portfolio review every 2 to 3 years—depending on the technical area—to assess technology gaps and their impacts and to make recommendations to the S&T Executive Committee. We found that each COI has documented at least one such assessment since 2014. According to an ASD(R&E) official, each COI also provides updates to the S&T Executive Committee during its annual S&T strategy meeting. DOD’s S&T executives stated that there is a need to improve how and when the acquisition community is brought in to contribute. One S&T executive pointed out that bringing in external stakeholders earlier in the process is a way to facilitate disruptive innovation. 
They described one instance in which the Air Force overcame existing cultural barriers to collaboration by funding technology maturation efforts with both S&T and acquisition community support, as described in figure 17. This different approach was the result of a deliberate decision by the Air Force to foster more collaboration between the S&T community and other stakeholders to increase the odds that new technologies end up in the hands of the warfighter, according to officials. While selected leading companies we reviewed provided funding for prototypes during technology development, DOD has only recently begun to fund advanced prototyping efforts within the labs. The acquisition community, as opposed to the S&T labs, traditionally bears the responsibility of maturing technology through advanced prototyping, which is in contrast to how the companies with whom we met operate. DOD’s S&T labs and centers typically do not control the BA 6.4 funding that would allow them to conduct such prototyping. We found that the S&T community typically matures technologies to, at most, a prototype that is close to final form, fit, and function and has been tested in a relevant environment. This creates strong incentives for S&T project teams to identify technology transition partners in the form of major acquisition programs early in development, which may ultimately restrict disruptive innovation and push S&T projects to be more incremental to satisfy potential customers’ near-term needs. For example, applied research project proposals at the Naval Research Lab and the Army’s Engineer Research and Development Center both identify and consider potential transition partners as part of the selection criteria, regardless of whether the proposed technology is incremental or disruptive. As we previously stated, it is difficult to transition a technology or identify a partner when the technology is disruptive. 
Companies recognized this and funded disruptive technology development projects through demonstration to help obtain a customer. In June 2017, we reported that DOD’s approach to prototyping contributes, in part, to DOD’s broad challenges with transitioning technology from the labs into the hands of the warfighter. We further reported that prototyping that is not directly tied to acquisition programs can be seen as a way to “test the waters” because it does not require the level of commitment associated with starting acquisitions. Each military department has recently undertaken efforts to fund more advanced prototypes for incremental and disruptive technologies in S&T labs. For example, since 2012, the Army has used funding typically associated with acquisition programs to conduct higher-fidelity prototyping and further mature technology outside of those programs through its Technology Maturation Initiative. In the President’s fiscal year 2017 budget submission, the Army requested approximately $70 million for these efforts. In May 2016, the Air Force established the Strategic Development Planning and Experimentation Office, in part, to run Air Force experimentation initiatives to achieve specific technology development objectives. Air Force officials explained that the intent of these initiatives is to develop more agile approaches to innovation by creating a learning organization that can rapidly adopt new innovative approaches; can quickly initiate new projects and is not hampered by the traditional 2-year planning process; is composed of acquisition and S&T representatives to promote collaboration; and conducts prototyping and demonstrations without a direct requirement from an acquisition community stakeholder to reduce risk and mature technologies. In the President’s fiscal year 2017 budget request, the Air Force requested $62 million to fund these experimentation efforts. 
Currently, two technology development areas are addressed through experimentation initiatives and two more are planned. According to Air Force officials, these initiatives reflect its leadership’s desire to embrace a culture of encouraging and formulating innovative strategic choices independent of major weapon system acquisition programs. The Department of the Navy is pursuing similar efforts through its Rapid Prototyping, Experimentation, and Demonstration (RPED) initiative. RPED projects use prototyping to rapidly develop and assess new technologies and engineering innovations to address priority naval warfighting needs. The Navy expects that RPED projects will assist in developing new capability concepts, informing and refining requirements, addressing priority needs through demonstrations, and enabling quicker transition of technologies to naval programs. The Navy developed its policy for RPED projects in December 2016 and requested $40 million to fund projects in the President’s fiscal year 2017 budget submission. In our June 2017 report, we recommended that DOD develop a strategy to better coordinate and communicate the goals of these and other prototyping efforts to ensure these efforts gain traction and achieve success. An October 2013 Defense Science Board report reinforces the military departments’ focus on experimentation as an innovation enabler. The Defense Science Board found that DOD cannot continue to rely on technological superiority unless it adopts methods that allow it to anticipate, assess, and gain experience with new technological capabilities before its potential adversaries do. DOD’s organizational structure and incentives contribute to why it does not fully implement the S&T management practices that the selected leading companies we reviewed follow. This includes DOD’s budget environment, funding model, and the manner in which DOD is organized to execute technology development. 
As we have previously reported, the critical differences between the environments and cultures of private companies and DOD must be recognized before tangible progress can be made in establishing more efficient practices in S&T management. Further, we concluded that changing the mechanics of the processes, without changing the environment that determines incentives, may not produce better outcomes. Specifically:

Companies operate in an environment where profitability is a constant business imperative. As such, leading companies we reviewed devoted a portion of their R&D investments toward futuristic concepts, which are intended to keep them competitive in the long term, instead of just near-term products. Disruptive R&D includes significant technology development efforts addressing the anticipated customer needs of the future, potentially leading to products that render competitive products irrelevant. By organizing disruptive R&D separately from incremental R&D, companies are able to protect funds from near-term focused business unit managers. In the DOD environment, budget pressures and urgent requirements often drive military departments to focus on near-term needs over long-term innovation. For instance, a 2016 Air Force Studies Board report found that in much of the Air Force, little or no space for innovation exists. Because innovation is focused on future needs, the report found that Air Force organizations decide they can wait on addressing the needs of tomorrow. The report found that across the Air Force as a whole, insufficient processes existed to support “rapid-cycle” innovation with the same intensity and pace Air Force personnel regularly bring to bear to fulfill other missions.

Companies fiercely compete with one another for customers and in ever-changing market conditions. This environment requires agility in how they direct their technology and product investments, which includes rapidly initiating new projects and truncating underperforming ones. 
DOD, on the other hand, operates under different conditions. Its budget environment may incentivize starting and sustaining programs rather than discontinuing underperforming ones. In April 2014, we found that budgets to support major acquisition program commitments must be approved well ahead of when the information needed to support the decision is available. DOD’s S&T community operates under similar pressures and incentives as its acquisition community. In this environment, we found that it is easier to sustain a program until its funding expires, even if technical performance is lacking. According to DOD S&T officials, current budgeting and funding processes restrict, rather than encourage, innovation. DOD S&T executives told us that they want more flexibility outside of the cumbersome 2-year PPBE process to initiate and discontinue projects.

Companies set their own budgets internally for various activities, including R&D. Conversely, as a government agency, DOD can influence, but not set, its annual budget. Ultimately, Congress determines what level of funding to appropriate to DOD, including for S&T-related activities.

Overcoming many of these challenges may ultimately be the responsibility of the yet-to-be-created Office of the Under Secretary of Defense for Research and Engineering (USD(R&E)). The National Defense Authorization Act (NDAA) for Fiscal Year 2017 calls for the establishment of the position of USD(R&E) to serve as the CTO and elevate and enhance the mission of defense technological innovation. This office will have greater responsibilities than the current ASD(R&E) and will focus on innovation, oversight, and policy for defense research and engineering, technology development and transition, prototyping and experimentation, and testing activities. 
Specifically, where ASD(R&E) has taken a more hands-off approach to developing S&T policy, Congress legislated that the new office take a larger role in establishing policies to overcome the challenges DOD currently faces with promoting innovation. The USD(R&E) will also be responsible for the allocation of resources for defense research and engineering, and for unifying these efforts across DOD. The fiscal year 2017 NDAA requires this position to be created by February 2018. In March 2017, DOD reported that it would submit final plans for creating this position to Congress no later than August 1, 2017, as required by law. DOD’s S&T investments are key to maintaining our nation’s technological superiority over our adversaries. Congress has raised questions about whether DOD is innovative enough to maintain future technology superiority. The business imperatives that world-class technology companies operate under force them to manage their S&T portfolios and projects to produce better outcomes for evolving current products, as well as to develop disruptive technologies for the future. Leading companies have shown they do this by organizing and funding R&D to avoid the pressures to focus on incremental innovation at the expense of maintaining their technological edge in the future. With its focus on meeting warfighter needs, DOD does not operate under similar business imperatives; it has not, at a department-wide level, emphasized the need to invest in disruptive technologies. Instead, each military department’s S&T organizational construct and funding processes increase emphasis on investing in technologies that will support the near-term requirements of a major weapon system acquisition program at the expense of investing in innovative technologies that are not linked to a requirement. 
As DOD determines the roles and responsibilities for its new Under Secretary of Defense for Research and Engineering, it is uniquely positioned to rethink its policies that govern technology development. While it may not be practical for each military department to organize its technology development as leading companies do, there are pockets within each department that are implementing some aspects of leading company practices. However, more needs to be done to facilitate more systematic adoption of these practices across DOD. Doing so can position DOD to develop more innovative, disruptive technologies. By not taking steps to ensure the right balance of incremental and disruptive technology investments, DOD lacks visibility into whether the technologies it is developing will provide superior capabilities to counter future and emerging adversarial threats. Additionally, the limited collaboration with product developers, limited use of existing flexible approaches to fund S&T projects outside of the 2-year planning process, and limited advanced prototyping of new technologies by the labs create added barriers to innovation. To ensure that DOD is positioned to counter both near- and far-term threats, consistent with its S&T framework, we recommend that the Secretary of Defense direct the new Under Secretary of Defense for Research and Engineering to annually take the following two actions: define the mix of incremental and disruptive innovation investments for each military department, and assess whether that mix is achieved. 
To ensure that DOD is positioned to more comprehensively implement leading practices for managing science and technology programs, we recommend that the Secretary of Defense direct the new Under Secretary of Defense for Research and Engineering to define, in policy or guidance, an S&T management framework that:

emphasizes greater use of existing flexibilities to more quickly initiate and discontinue projects to respond to the rapid pace of innovation;

incorporates acquisition stakeholders into technology development programs to ensure they are relevant to customers; and

promotes advanced prototyping of disruptive technologies within the labs so the S&T community can prove these technologies work to generate demand from future acquisition programs.

We provided a draft of this report to DOD for review and comment. DOD’s written comments are reprinted in appendix III of this report and summarized below. In its comments, DOD did not concur with each of our recommendations, citing that it is premature to get ahead of the Secretary of Defense’s final decisions on the role of the new Under Secretary of Defense for Research and Engineering (USD(R&E)) until that position is established, which is required by no later than February 1, 2018. We believe, however, as the roles and responsibilities of the USD(R&E) are being deliberated, that it is appropriate and timely for the Secretary of Defense to ensure that the USD(R&E) be responsible for implementing our recommendations. Although it did not concur, DOD identified actions that it could take that are generally responsive to our recommendations. 
Specifically, in response to our recommendations that the Secretary of Defense direct the USD(R&E) to define and assess the mix of incremental and disruptive innovation investments for each military department, DOD stated that it would need to coordinate with each military department to establish appropriate goals for those investments. DOD further noted that it could assess whether that mix is achieved during its annual S&T Strategic Overview meeting. We continue to believe that such actions are necessary to ensure that DOD is positioned to counter both near- and far-term threats. In response to our recommendation that the USD(R&E) define an S&T framework that emphasizes greater use of flexibilities to more quickly initiate and discontinue projects to respond to the rapid pace of innovation, DOD identified the Laboratory Quality Enhancement Program as an activity to leverage existing flexibilities. This program—which DOD implemented in response to the National Defense Authorization Act for Fiscal Year 2017—requires DOD to create panels of experts to make recommendations to the Secretary of Defense on matters related to S&T policy and practices. DOD, however, did not explain how this program would help the labs make greater use of existing flexibilities to initiate projects, such as those granted under Section 219 of the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009. We continue to believe that greater use of existing authorities, such as those provided under Section 219, could help labs more quickly initiate projects outside of the normal planning cycle, under which it can take nearly two years for a project to be funded. In response to our recommendation that the USD(R&E) define an S&T framework that incorporates acquisition stakeholders into technology development programs, DOD indicated that it expects the USD(R&E) to provide policy and guidance that will include increased engagement with acquisition stakeholders. 
We continue to believe that enhancing collaboration between the S&T and acquisition communities is critical to ensuring that technologies in development will be relevant to potential customers. In response to our recommendation that the USD(R&E) define an S&T framework that promotes advanced prototyping of disruptive technologies within the labs, DOD noted the benefits of prototyping and that it is a critical piece of the larger research and engineering strategy. It did not, however, identify whether any such strategy would be revised to promote earlier prototyping so the S&T community can prove technologies work and generate demand from future acquisition programs. We continue to believe that establishing an S&T framework that emphasizes prototyping outside of acquisition programs is needed. Additionally, in response to our recommendations that the Secretary of Defense direct the USD(R&E) to define the three elements above in an S&T management framework, DOD also noted that Reliance 21 is expected to continue serving as the overarching framework for the S&T joint planning and coordination process. We continue to believe, however, that this framework does not fully address our recommendations and that further actions, such as those outlined above, are necessary for DOD to ensure it is positioned to more comprehensively implement leading practices for managing S&T. We are sending copies of the report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology and Logistics; the Secretaries of the Army, Navy, and Air Force; and the eight leading companies we interviewed about their practices for this report. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-4841 or sullivanm@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. We used a case study approach to identify leading commercial companies’ research and development (R&D) practices. We selected and visited eight large (Fortune 500-listed) companies that were U.S. companies or equivalent foreign companies, or were owned by those companies. Our primary goal was to select large companies—those more comparable with the size of the Department of Defense (DOD) than smaller ones—from a range of different industries that were profitable and that had received two or more industry awards or other recognition for technology innovation since 2014. We used the following sources to identify companies that have received awards or other recognition: Boston Consulting Group, PwC, MIT Technology Review, American Business Awards Gold Stevie awards, and Thomson-Reuters. These organizations either provide annual lists of leading innovator companies or select top innovator companies for awards on an annual basis. These groups are positioned to be knowledgeable regarding who the “leading companies” are for the purposes of the case study selection for our review. We used available corporate stock information at Morningstar.com to determine whether a company has been profitable, which is an indicator of its degree of success in its science and technology development efforts. Below are descriptions of the eight companies featured in this report. Amazon.com sells consumer electronics, operates retail websites serving over 100 countries, and provides cloud-computing services to hundreds of thousands of organizations in 190 countries around the world. Lab126 is Amazon’s inventive research and development company that designs and engineers high-profile consumer electronics, including Kindle Fire tablets, Fire TV, and Amazon Echo. 
Amazon’s recent recognitions include being included among the Boston Consulting Group’s (BCG) “Most Innovative Companies,” PwC’s “10 Most Innovative Companies,” Thomson-Reuters’s “Top 100 Global Innovators” and MIT Technology Review’s “50 Smartest Companies.” Ashland Global Holdings, Inc. is a global chemicals company serving customers in a wide range of consumer and industrial markets, including adhesives, architectural coatings, automotive, construction, energy, food and beverage, personal care, and pharmaceutical. Valvoline Inc., a leading producer and retailer of automotive lubricants that made the first trademarked American motor oil, was an Ashland subsidiary at the time we met with Valvoline company representatives. Ashland currently owns a controlling interest in Valvoline after it became a separate public company in 2016. Ashland’s recent recognitions include a Bronze Innovation Zone Functional Ingredient Award, a Ringier Coatings Technology Innovation award, and a Composites and Advanced Materials Exposition award for “Unsurpassed Innovation.” The Dow Chemical Company delivers a broad range of technology-based products and solutions to customers in 175 countries. Dow drives innovations that extract value from material, polymer, chemical and biological science to help address many of the world’s most challenging problems, such as the need for fresh food, safer and more sustainable transportation, clean water, energy efficiency, more durable infrastructure, and increased agricultural productivity. Dow’s recent recognitions include receiving multiple R&D 100 awards, and being included among BCG’s “Most Innovative Companies” and Clarivate Analytics (formerly Thomson-Reuters’s) “Top 100 Global Innovators.” Honeywell International, Inc. invents and commercializes technologies that address some of the world’s most critical challenges around energy, safety, security, productivity and global urbanization. 
Honeywell’s Aerospace division, which is discussed in this report, is a leading provider of aircraft engines, integrated avionics, systems and service solutions, and related products and services for aircraft manufacturers, and turbochargers to improve the performance and efficiency of passenger cars and commercial vehicles. Honeywell’s recent recognitions include receiving American Business Awards for “Most Innovative Company” and “Most Innovative Technology Company,” and being included among Thomson-Reuters’s “Top 100 Global Innovators.” General Motors Co. and its partners produce vehicles in 30 countries, including the Chevrolet, Cadillac, Baojun, Buick, GMC, Holden, Jiefang, Opel, Vauxhall and Wuling brands. GM develops innovative new technologies offering vehicle electrification, autonomous driving, vehicle health management, and alternative fuel usage. GM’s recent recognitions include an Edison Award for Automotive Computing, an Automotive News “Pace Award,” and being included among Fast Company’s “World’s 10 Most Innovative Companies of Automotive.” International Business Machines Corporation (IBM) develops and markets cognitive systems, or computers that learn through interactions with people and data, as well as enterprise systems and software. IBM also provides cloud computing, consulting and information technology implementation services. IBM’s recent recognitions include receiving an R&D 100 award, and being included among BCG’s “Most Innovative Companies” and MIT Technology Review’s “50 Smartest Companies.” Qualcomm is a leader in the commercialization of digital communication technologies, including Code Division Multiple Access (CDMA) and Long Term Evolution (LTE), for cellular wireless communication applications. It also develops and commercializes numerous technologies used in handsets and tablets. 
It also owns intellectual property contributing to other commercial technologies such as wireless local area networks, global positioning systems, near field communication, and Bluetooth. Qualcomm’s recent recognitions include receiving an R&D 100 award, and being included among MIT Technology Review’s “50 Smartest Companies” and Thomson-Reuters’s “Top 100 Global Innovators.” Siemens AG is one of the world’s largest producers of electrification, automation, and digitalization technologies. Siemens’s products include gas, steam, and wind turbines, integrated power plant solutions, power grid systems, building technologies, rail technologies, medical imaging and diagnostics, and other systems for industrial use. The company’s recent recognitions include receiving two R&D 100 awards and being included among BCG’s “Most Innovative Companies.” Siemens is also a category leader in the Dow Jones Sustainability Index ranking, with 100 out of 100 points for innovation management. For each of the companies, we conducted semi-structured interviews with senior management officials and other company representatives knowledgeable about research and development activities to gather consistent information about processes and practices companies use to manage technology development. In particular, we discussed their (1) organizational structure and management culture, (2) R&D portfolio management and investment strategy, (3) R&D project management practices, and (4) technology transition process, including when transition occurs, the organizations involved, and how technology is funded throughout the transition phase. We synthesized each company’s processes and created summary documents, which the companies then reviewed for accuracy and completeness to validate our assessment of their specific practices. 
Using this validated information, we identified the practices that were consistent among the selected companies and which company representatives considered key to promoting innovation. We also presented our analysis of these leading practices to senior DOD Science and Technology (S&T) executives within the Office of the Secretary of Defense, the military services, and other defense research organizations to obtain their views on the practices. To identify the extent to which DOD can employ these leading commercial practices, we interviewed officials responsible for the management, execution, and oversight of DOD’s S&T enterprise. At the Office of the Secretary of Defense and military department headquarters level, those responsible for the management and oversight of S&T activities, we met with officials from the Office of the Assistant Secretary of Defense for Research and Engineering; Office of the Deputy Assistant Secretary of the Army for Research and Technology; Office of the Deputy Assistant Secretary of the Air Force for Science, Technology, and Engineering; Office of the Deputy Assistant Secretary of the Navy for Research, Development, Test, and Evaluation; and Office of Naval Research. We also met with military department laboratory officials responsible for the management and execution of S&T activities from the Army Research Laboratory; Army Armament Research, Development, and Engineering Center; Army Engineer Research and Development Center; Air Force Research Laboratory; Naval Research Laboratory; and Naval Undersea Warfare Center—Division Newport. Finally, we met with officials from the Defense Advanced Research Projects Agency (DARPA) responsible for the planning and oversight of their S&T activities. We conducted semi-structured interviews at each laboratory and DARPA to gather consistent information about processes and practices these organizations used to manage S&T activities. 
In particular, we discussed their (1) organizational structure and management culture, (2) S&T portfolio management and investment strategy, (3) S&T project management practices, and (4) technology transition process. We compared and contrasted those practices with the practices identified through our meetings with leading commercial companies to determine the extent to which DOD is employing these practices. Where appropriate, we reviewed relevant regulations, policies, and guidance that establish the framework for how DOD S&T organizations plan, budget, and execute S&T activities, including the Assistant Secretary of Defense for Research and Engineering’s Reliance 21 Operating Principles and DOD’s Financial Management Regulation. To further our understanding of the S&T management practices being used at the military department labs, we reviewed at least two recent S&T projects at each of these labs. These projects were identified by lab officials and included projects deemed successful as well as ones identified as unsuccessful. Finally, we hosted a forum of DOD S&T executives in December 2016 to identify potential opportunities for DOD to adopt leading commercial practices in S&T management, as well as any barriers to adopting these practices. Forum participants included the following: Ms. Mary Miller, Principal Deputy to the Assistant Secretary of Defense for Research and Engineering; Dr. Melissa Flagg, Deputy Assistant Secretary of Defense for Research; Mr. Michael Holthe, Acting Director of Technology, Office of the Deputy Assistant Secretary of the Army for Research and Technology; Dr. David Walker, Deputy Assistant Secretary of the Air Force for Science, Technology, and Engineering; Dr. Phil Perconti, Acting Director, Army Research Laboratory; Dr. Jeff Holland, Director, Army Engineer Research and Development Center; Mr. Jyuji Hewitt, Executive Deputy, Army Research, Development and Engineering Command; Mr. 
John Uscilowicz, Director, Plans, Programs, Analysis, and Evaluation, Army Medical Research and Materiel Command; Dr. Morley Stone, Chief Technology Officer, Air Force Research Laboratory; Dr. Edward Franchi, Acting Director, Naval Research Laboratory; Dr. Stephen Russell, Director of Science and Technology, Space and Naval Warfare Systems Command; and Mr. Ellison Urban, Special Assistant to the Director, Defense Advanced Research Projects Agency. In addition to the contact named above, Christopher R. Durbin, Assistant Director; Marie Ahearn; Emily Bond; Jared Dmello; Lorraine Ettaro; Rich Hung; Justin Jaynes; Ron La Due Lake; Sean Seales; Brian Smith; and Robin Wilson made significant contributions to this report.
DOD relies on innovative technologies to ensure the superiority of its weapon systems and planned to invest about $12.5 billion in fiscal year 2017 to achieve this aim. Recently, DOD's leadership role in fostering innovation has been supplanted by the commercial sector, which has led DOD to rely more on commercial innovation in its approach to technology development. Conference Report 112-329 included a provision for GAO to review DOD's S&T enterprise. This report assesses (1) the practices leading companies employ to manage technology development and (2) the extent to which DOD can incorporate these practices into its own S&T management. GAO interviewed eight large, profitable, leading technology companies (Amazon, Dow Chemical, Honeywell, General Motors, IBM, Qualcomm, Siemens AG, and Valvoline) to identify practices they used to manage, prioritize, and assess their technology portfolios. GAO also met with DOD organizations that manage and execute S&T funds to identify their practices. The eight leading companies whose practices GAO assessed take a disciplined approach to organizing and executing their technology development activities by grouping them into two portfolios: incremental and disruptive, as shown in the figure. Incremental development improves existing product lines, whereas disruptive development pursues riskier, innovative, and potentially market-shifting technologies. By separating these two portfolios, companies reported that they could promote existing product lines in the short term while exploring opportunities to remain competitive in the long term, and mitigate the financial risk associated with disruptive technology development. Moreover, GAO found that leading companies also ensure technologies will be relevant in the marketplace by engaging a wide range of internal stakeholders. These companies also reported that they gain leadership buy-in by prototyping technologies before committing to further development and product integration. 
While some Department of Defense (DOD) practices closely mirror those of the companies GAO reviewed, DOD's ability to adopt leading commercial practices in its approach to managing science and technology (S&T) investments is limited by its funding policies and culture. Unlike the companies GAO reviewed, DOD leadership does not provide guidance on or assess the mix of incremental and disruptive innovation. As a result, officials reported that DOD labs struggle to find the right balance between these investment areas. Under DOD's budget policy, projects are planned up to 2 years in advance, which can slow innovation and limit lab directors' autonomy as compared to companies. Congress has provided a means for lab directors to initiate work outside of this lengthy process, but it has not been fully utilized. Additionally, responsibilities for technology versus product development also contribute to a culture that discourages collaboration and limits labs' ability to prototype. Yet these issues are not insurmountable, as pockets of each military department have demonstrated, such as through recent efforts to expand advanced prototyping in the labs. Further, Congress has required that by February 2018 DOD create a new Under Secretary of Defense for Research and Engineering (USD(R&E)), which will be charged with developing policies to improve innovation. This position creates an opportunity to develop policies that further promote adoption of leading commercial practices. GAO recommends that DOD annually define and assess the mix of innovation investments and define, in policy or guidance, an S&T management framework that comprehensively employs leading commercial practices. DOD did not agree with the recommendations, citing its ongoing deliberations on the new USD(R&E)'s role, but did identify some planned actions. GAO believes its recommendations are valid as discussed in the report.
To help protect against threats to federal systems, FISMA 2002 set forth a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. This framework created a cycle of risk management activities necessary for an effective security program. It was also intended to provide a mechanism for improved oversight of federal agency information security programs. To ensure the implementation of this framework, FISMA 2002 assigned specific responsibilities to agencies, their inspectors general, OMB, and NIST. FISMA 2002 required each agency in the executive branch to develop, document, and implement an information security program that includes the following components: periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems; policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements; subordinate plans for providing adequate information security for networks, facilities, and systems or a group of information systems, as appropriate; security awareness training to inform personnel of information security risks and of their responsibilities in complying with agency policies and procedures, as well as training personnel with significant security responsibilities for information security; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency depending on risk, but no less than annually, and that includes testing of management, operational, and technical controls for every system identified in the agency’s required inventory 
of major information systems; a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in the information security policies, procedures, and practices of the agency; procedures for detecting, reporting, and responding to security incidents; and plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. In addition, each agency in the executive branch was to report annually to OMB, certain congressional committees, and the Comptroller General on the adequacy and effectiveness of information security policies, procedures, and practices, and its compliance with the act. FISMA 2002 also required each agency inspector general, or other independent auditor, to annually evaluate and report on the information security program and practices of the agency. OMB’s responsibilities included developing and overseeing the implementation of policies, principles, standards, and guidelines on information security in federal agencies, except with regard to national security systems. FISMA 2002 also assigned responsibility to OMB for ensuring the operation of a federal information security incident center. The required functions of this center are performed by DHS’s United States Computer Emergency Readiness Team (US-CERT), which was established to aggregate and disseminate cybersecurity information to improve warning and response to incidents, increase coordination of response information, reduce vulnerabilities, and enhance prevention and protection. OMB is also responsible for reviewing, at least annually, and approving or disapproving agencies’ information security programs. Since it began issuing guidance to agencies in 2003, OMB has instructed agency chief information officers and inspectors general to report on a variety of metrics in order to satisfy reporting requirements established by FISMA 2002. 
Over time, these metrics have evolved to include administration priorities and baseline metrics meant to allow for measurement of agency progress in implementing information security-related priorities and controls. OMB requires agencies and inspectors general to use an interactive data collection tool called CyberScope to respond to these metrics. The metrics are used by OMB to summarize agencies’ progress in meeting FISMA 2002 requirements and report this progress to Congress in an annual report, as required by FISMA 2002. NIST’s responsibilities under FISMA 2002 included the development of security standards and guidelines for agencies that include standards for categorizing information and information systems according to ranges of impact levels (see Federal Information Processing Standards 199 and 200), minimum security requirements for information and information systems in risk categories, guidelines for detection and handling of information security incidents, and guidelines for identifying an information system as a national security system. In the 12 years between the enactment of FISMA 2002 and its replacement, in large part, by FISMA 2014, executive branch oversight of agency information security evolved. As part of its FISMA 2002 oversight responsibilities, OMB has issued annual instructions for agencies and inspectors general to meet FISMA 2002 reporting requirements. In July 2010, the Director of OMB and the White House Cybersecurity Coordinator issued a joint memorandum that gave DHS primary responsibility within the executive branch for the operational aspects of cybersecurity for federal information systems that fall within the scope of FISMA 2002. 
This memo stated that DHS would have these five responsibilities: overseeing implementation of and reporting on government cybersecurity policies and guidance; overseeing and assisting government efforts to provide adequate, risk-based, and cost-effective cybersecurity; overseeing agencies’ compliance with FISMA 2002; overseeing agencies’ cybersecurity operations and incident response; and annually reviewing agencies’ cybersecurity programs. The OMB memo further stated that, in carrying out these responsibilities, DHS was to be subject to general OMB oversight in accordance with the provisions of FISMA 2002. In addition, the Cybersecurity Coordinator would lead the interagency process for cybersecurity strategy and policy development. In accordance with guidance contained in the memo, DHS, instead of OMB, issued guidance to agencies and inspectors general on metrics used for reporting agency performance of cybersecurity activities and privacy requirements, while OMB continued to provide more general reporting guidance. Specifically, DHS provided guidance to agencies for reporting on the implementation of security requirements in areas such as continuous monitoring, configuration management, incident response, security training, and contingency planning, among others. The guidance also instructs inspectors general on reporting the results of their annual evaluations and instructs senior agency officials for privacy on reporting their agencies’ implementation of privacy requirements. As previously mentioned, DHS is also responsible for ensuring the operation of a federal information security incident center to improve warning and response to incidents, increase coordination of response information, reduce vulnerabilities, and enhance prevention and protection. 
Within DHS, the Federal Network Resilience division’s Cybersecurity Performance Management Branch is responsible for (1) developing and disseminating FISMA 2002 reporting metrics, (2) managing the CyberScope web-based application, and (3) collecting and reviewing federal agencies’ cybersecurity data submissions and monthly data feeds to CyberScope. In addition, the Cybersecurity Assurance Program Branch is responsible for conducting cybersecurity reviews and assessments at federal agencies to evaluate the effectiveness of agencies’ information security programs. To further improve cybersecurity and clarify oversight responsibilities, Congress passed FISMA 2014. FISMA 2014 is intended to address the increasing sophistication of cybersecurity attacks, promote the use of automated security tools with the ability to continuously monitor and diagnose the security posture of federal agencies, and provide for improved oversight of federal agencies’ information security programs. Specifically, the act clarifies and assigns additional responsibilities to OMB, DHS, and federal agencies in the executive branch. Among other things, the act does the following: Preserves OMB’s oversight responsibilities, but removes the requirement for OMB to annually review and approve agencies’ information security programs. Requires OMB to include in its annual report to Congress a summary of major agency information security incidents, an assessment of agency compliance with NIST standards, and an assessment of agency compliance with breach notification requirements. For two years after enactment, OMB is to include in its annual report an assessment of agencies’ adoption of continuous diagnostic technologies and other advanced security tools. Requires OMB to update data breach notification policies and guidelines periodically and require notice to congressional committees and affected individuals. Expands exemptions from OMB oversight for certain national security-related systems. 
States that OMB shall, in consultation with DHS, the Chief Information Officers Council, the Council of Inspectors General on Integrity and Efficiency, and other interested parties as appropriate, ensure the development of guidance for evaluating the effectiveness of an information security program and practices. Establishes DHS responsibility, in consultation with OMB, to administer the implementation of agency information security policies and practices for information systems other than national security systems and Department of Defense and intelligence community “debilitating impact” systems. Requires DHS to develop, issue, and oversee implementation of binding operational directives to agencies. Such directives include those for incident reporting, contents of annual agency reports, and other operational requirements. Gives DHS responsibility to operate the federal information security incident center, deploy technology to continuously diagnose and mitigate threats, compile and analyze data, and develop and conduct targeted operational evaluations, including threat and vulnerability assessments of systems. Requires agencies to comply with DHS operational directives in addition to OMB policies and procedures and NIST standards. Requires agencies to ensure that senior officials carry out assigned responsibilities and that all personnel are held accountable for complying with the agency’s information security program. Requires agencies to use automated tools in periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices. Requires agencies to report major security incidents to Congress within 7 days. Agencies are also to include a description of major incidents in their annual report to Congress. FISMA 2014 also requires that the annual independent evaluation include an assessment of the effectiveness of the information security policies, procedures, and practices of the agency. 
This replaces the previous FISMA 2002 requirement that the independent annual evaluation include an assessment of agency compliance with the requirements of the act and related policies, procedures, standards, and guidelines. In addition, FISMA 2014 reiterates the previous requirement for federal agencies to develop, document, and implement an agency-wide information security program. Each agency and its Office of Inspector General are still required to report annually to OMB, selected congressional committees, and the Comptroller General on the adequacy of the agency’s information security policies, procedures, practices, and compliance with requirements. During fiscal years 2013 and 2014, federal agencies continued to experience weaknesses in protecting their information and information systems. These systems remain at risk, as illustrated in part by the evolving array of cyber-based threats and the increasing numbers of incidents reported by federal agencies. (See app. II for additional information on cyber threats and exploits.) At the same time, weaknesses in their information security policies and practices hinder their efforts to protect against threats. Furthermore, our work and reviews by inspectors general highlight information security control deficiencies at agencies that expose information and information systems supporting federal operations and assets to elevated risk of unauthorized use, disclosure, modification, and disruption. Accordingly, we and agency inspectors general have made hundreds of recommendations to agencies to address these security control deficiencies. The number of information security incidents affecting systems supporting the federal government has continued to increase, rising from 5,503 in fiscal year 2006 to 67,168 in fiscal year 2014, an increase of 1,121 percent. Figure 1 illustrates the increasing number of security incidents at federal agencies from 2006 through 2014. 
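The 1,121 percent figure follows from the standard percent-increase calculation; as a quick arithmetic check (an illustrative sketch, not part of the report's methodology):

```python
# Percent increase in security incidents reported by federal agencies,
# from fiscal year 2006 (5,503 incidents) to fiscal year 2014 (67,168 incidents).
fy2006_incidents = 5_503
fy2014_incidents = 67_168

percent_increase = (fy2014_incidents - fy2006_incidents) / fy2006_incidents * 100
print(round(percent_increase))  # -> 1121, matching the 1,121 percent cited
```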
Similarly, the number of information security incidents involving PII reported by federal agencies has more than doubled in recent years, from 10,481 in 2009 to 27,624 in 2014. Of the incidents occurring in 2014 (not including those reported as non-cyber incidents), scans/probes/attempted access was the most widely reported type of incident across the federal government. This type of incident can involve identifying a federal agency computer, open ports, protocols, services, or any combination of these for later exploit. As shown in figure 2, these incidents represented 19 percent of the various incidents reported to US-CERT in fiscal year 2014. These incidents and others like them can pose a serious challenge to economic security, national security, and personal privacy. Recent examples highlight the impact of such incidents: In June 2015, OPM reported that an intrusion into its systems affected the personnel records of about 4.2 million current and former federal employees. The Director of OPM also stated that a separate but related incident affected background investigation files and compromised OPM systems related to background investigations for 21.5 million individuals. In June 2015, the Commissioner of the Internal Revenue Service testified that unauthorized third parties had gained access to taxpayer information from its “Get Transcript” application. According to officials, criminals used taxpayer-specific data acquired from non-department sources to gain unauthorized access to information on approximately 100,000 tax accounts. These data included Social Security information, dates of birth, and street addresses. In an August 2015 update, the Internal Revenue Service reported this number to be about 114,000, and that an additional 220,000 accounts had been inappropriately accessed, which brings the total to about 334,000 accounts. 
In April 2015, the Department of Veterans Affairs’ Office of Inspector General reported that two contractors had improperly accessed the agency’s network from foreign countries using personally owned equipment. In February 2015, the Director of National Intelligence stated that unauthorized computer intrusions were detected in 2014 on the networks of the Office of Personnel Management and two of its contractors. The two contractors were involved in processing sensitive PII related to national security clearances for federal employees. In September 2014, a cyber intrusion into the United States Postal Service’s information systems may have compromised PII for more than 800,000 of its employees. In October 2013, a wide-scale cybersecurity breach of a U.S. Food and Drug Administration system exposed the PII of 14,000 user accounts. Our work at federal agencies continues to highlight information security deficiencies in both financial and nonfinancial systems. We have made hundreds of recommendations to agencies to address these security control deficiencies, but many have not yet been fully implemented. The following examples describe the risks we found at federal agencies, our recommendations, and the agencies’ responses to our recommended actions. In March 2015, we reported that the Internal Revenue Service had not installed appropriate security updates on all of its databases and servers, and had not sufficiently monitored control activities that support its financial reporting and protect taxpayer data. Also, the agency had not effectively maintained secure settings or separation of duties by allowing a developer unnecessary access to a key application. In addition to 51 recommendations made in prior years that remain unimplemented, we made 19 additional recommendations to help the agency more effectively implement elements of its information security program and address newly identified control weaknesses. 
The Internal Revenue Service agreed to develop corrective action plans, as appropriate, to address these recommendations. In January 2015, we reported that the Federal Aviation Administration had significant security control weaknesses in the five air traffic control systems we reviewed. These systems perform functions such as determining and sharing precise aircraft location, streaming flight information to aircraft cockpits, and providing telecommunications infrastructure for NextGen, and they are necessary for ensuring the safe and uninterrupted operation of the national airspace system. We identified numerous weaknesses in controls intended to prevent, limit, and detect unauthorized access to computer resources, such as controls for protecting system boundaries, identifying and authenticating users, authorizing users to access systems, encrypting sensitive data, and auditing and monitoring activity on its systems. The agency also had not fully implemented an agency-wide information security program, in part due to not having fully established an integrated, organization-wide approach to managing information security risk. We made 168 recommendations to the agency to mitigate control deficiencies and 17 recommendations to fully implement its information security program and establish an integrated approach to managing information security risk. The Federal Aviation Administration concurred with our recommendations, described actions that it was taking to improve its information security, and indicated that it would address the recommendations. In November 2014, we reported that the Department of Veterans Affairs had not taken effective actions to contain and eradicate a significant incident detected in 2012 involving a network intrusion. Further, the department’s actions to address vulnerabilities identified in two key web applications were insufficient. Additionally, vulnerabilities identified in workstations (e.g., laptop computers) had not been corrected. 
We made eight recommendations to address identified weaknesses in incident response, web applications, and patch management. The department concurred with our recommendations and provided an action plan for addressing the identified weaknesses. Similar to our work, independent reviews at the 24 agencies continued to highlight deficiencies in their implementation of information security policies and procedures. Specifically, for fiscal year 2014, 19 agencies reported that information security control deficiencies were either a material weakness or a significant deficiency in internal controls over their financial reporting. This reflected an increase from fiscal year 2013, when 18 agencies reported that information security control deficiencies were either a material weakness or a significant deficiency in internal controls over their financial reporting. Further, 23 of 24 inspectors general for the agencies cited information security as a “major management challenge” for their agency, reflecting an increase from fiscal year 2013, when 21 inspectors general cited information security as a major challenge. The inspectors general made numerous recommendations to address these issues, as discussed later in this report. 
Our reports, agency reports, and inspectors general assessments of information security controls during fiscal years 2013 and 2014 revealed that most of the 24 agencies had weaknesses in each of the five major categories of information system controls: (1) access controls, which limit or detect access to computer resources (data, programs, equipment, and facilities), thereby protecting them against unauthorized modification, loss, and disclosure; (2) configuration management controls, intended to prevent unauthorized changes to information system resources (for example, software programs and hardware configurations) and assure that software is current and known vulnerabilities are patched; (3) segregation of duties, which prevents a single individual from controlling all critical stages of a process by splitting responsibilities between two or more organizational groups; (4) contingency planning, which helps avoid significant disruptions in computer-dependent operations; and (5) agencywide security management, which provides a framework for ensuring that risks are understood and that effective controls are selected, implemented, and operating as intended. While the number of agencies exhibiting weaknesses decreased slightly in two of five categories, deficiencies were prevalent for the majority of them, as shown in figure 3. In the following subsections, we discuss the specific information security weaknesses agencies reported for fiscal years 2013 and 2014. Agencies use electronic and physical controls to limit, prevent, or detect inappropriate access to computer resources (data, equipment, and facilities), thereby protecting them from unauthorized use, modification, disclosure, and loss. Access controls involve the six critical elements described in table 1. For fiscal years 2013 and 2014, we, agencies, and inspectors general reported weaknesses in access controls for 22 of the 24 agencies. 
In fiscal year 2014, 12 agencies had weaknesses reported in protecting their networks and system boundaries, a reduction from the 17 agencies that had weaknesses in fiscal year 2013. For example, we found that 1 agency component's firewall access control lists had not prevented traffic coming or initiated from the public Internet Protocol addresses of a contractor site and a U.S. telecommunications corporation from entering its network. Additionally, for fiscal year 2014, 20 agencies had weaknesses reported in their ability to appropriately identify and authenticate system users, a slight increase from 19 of 24 in fiscal year 2013. To illustrate, in fiscal year 2014, 1 agency had not consistently applied proper password settings to mainframe service accounts, which were configured to never require password changes. Agencies also had weak password controls, such as system passwords that had not been changed from easily guessable default passwords. In fiscal year 2014, 18 agencies had weaknesses reported in authorization controls, a reduction from the 20 agencies that had weaknesses in fiscal year 2013. One example of this weakness for fiscal year 2014 was that 1 agency had not consistently or promptly removed access privileges from multiple systems for employees and contractors who had transferred or been terminated. Another agency had granted access privileges unnecessarily, which allowed users of an internal network to read and write files containing sensitive system information, including passwords, that were used to support automated data transfer operations between numerous systems. In fiscal year 2014, 4 agencies had weaknesses reported in encryption, down from 7 in fiscal year 2013. In addition, 19 agencies had weaknesses reported in implementing an effective audit and monitoring capability.
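Weaknesses such as service accounts that never require password changes and unchanged vendor default passwords are the kind of condition an automated audit check can flag. The sketch below is a hypothetical illustration, not any agency's actual tooling; the account record fields and the list of default passwords are assumptions made for the example.

```python
# Hypothetical audit check for two password weaknesses discussed above:
# accounts configured to never expire, and accounts still using
# well-known vendor default passwords. Field names are illustrative.

KNOWN_DEFAULTS = {"admin", "password", "changeme"}  # assumed vendor defaults

def audit_accounts(accounts):
    """Return (account, finding) pairs for weak password settings."""
    findings = []
    for acct in accounts:
        if acct.get("password_expires") is False:
            findings.append((acct["name"], "password never expires"))
        if acct.get("password") in KNOWN_DEFAULTS:
            findings.append((acct["name"], "default password in use"))
    return findings

accounts = [
    {"name": "mainframe_svc", "password_expires": False, "password": "x9!k"},
    {"name": "web_admin", "password_expires": True, "password": "admin"},
]
print(audit_accounts(accounts))
# [('mainframe_svc', 'password never expires'), ('web_admin', 'default password in use')]
```

In practice such checks would run against an identity store or mainframe security database rather than an in-memory list, but the logic of comparing account settings against a policy baseline is the same.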
For instance, 1 agency had not effectively implemented audit and monitoring controls on a system whose servers and network devices were not sufficiently logging security-relevant events. Finally, 10 agencies had weaknesses reported in their ability to restrict physical access to computer resources and protect them from unauthorized loss or impairment. For example, a contractor of an agency was granted physical access to a server room without the required approval of the office director. Without adequate access controls in place, agencies cannot ensure that their information resources are being protected from intentional or unintentional harm. Configuration management controls ensure that only authorized and fully tested software is placed in operation, software and hardware are updated, information systems are monitored, patches are applied to these systems to protect against known vulnerabilities, and emergency changes are documented and approved. These controls, which limit and monitor access to powerful programs and sensitive files associated with computer operations, are important in providing reasonable assurance that access controls and the operations of systems and networks are not compromised. To protect against known vulnerabilities, effective procedures must be in place, current versions of vendor-supported software installed, and patches promptly implemented. Up-to-date patch installation helps mitigate known flaws in software code that could be exploited to cause significant damage and enable malicious individuals to read, modify, or delete sensitive information or disrupt operations. In fiscal year 2014, 22 agencies had weaknesses reported in configuration management, a reduction from the 24 agencies that had weaknesses in fiscal year 2013.
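The patch-currency idea above, keeping installed software at or above the version known to fix published vulnerabilities, can be sketched as a simple version comparison. The product names, version numbers, and baseline table below are illustrative assumptions, not real advisories.

```python
# Minimal sketch of a patch-currency check: compare installed versions
# against the minimum version known to fix published vulnerabilities.
# Products and versions are invented for illustration.

def parse(v):
    """Convert a dotted version string to a comparable tuple of ints."""
    return tuple(int(p) for p in v.split("."))

MIN_PATCHED = {"webserver": "2.4.10", "dbms": "11.2.0"}  # assumed baselines

def unpatched(inventory):
    """Return products whose installed version is below the patched baseline."""
    return [name for name, ver in inventory.items()
            if name in MIN_PATCHED and parse(ver) < parse(MIN_PATCHED[name])]

inventory = {"webserver": "2.4.3", "dbms": "11.2.1", "mailer": "1.0"}
print(unpatched(inventory))  # ['webserver']
```

Tuple comparison is used rather than string comparison so that, for example, version 2.4.3 correctly sorts below 2.4.10.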
For fiscal year 2014, 17 agencies had weaknesses reported with installing software patches and implementing current versions of software in a timely manner, an improvement from the 23 reported in fiscal year 2013. One agency had not installed critical updates in a timely manner for several of its servers. Another agency was using an unsupported software application on its workstations, and a database system used to support the access authorization system was no longer supported. For fiscal year 2014, 14 agencies had weaknesses reported in authorizing, testing, approving, tracking, and controlling configuration changes. In fiscal year 2014, our work revealed that 1 agency had not effectively documented and approved configuration changes. Specifically, the agency did not request or approve 32 changes to mainframe production processing that had been recorded in the system logs. Without a consistent approach to testing, updating, and patching software, agencies increase their risk of exposing sensitive data to unauthorized and possibly undetected access. Segregation of duties refers to the policies, procedures, and organizational structure that help to ensure that one individual cannot independently control all key aspects of a computer-related operation and thereby take unauthorized actions or gain unauthorized access to assets or records. Key steps to achieving proper segregation are ensuring that incompatible duties are separated and that employees understand their responsibilities, and controlling personnel activities through formal operating procedures, supervision, and review. In fiscal years 2013 and 2014, 15 agencies had weaknesses reported in implementing segregation of duties controls. For example, in fiscal year 2014, 1 agency had not implemented requirements for separating incompatible duties. Additionally, at another agency, a developer had been granted inappropriate access to the production environment of the agency's system.
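The segregation-of-duties principle described above, that no one individual controls all key stages of an operation, lends itself to a mechanical check: define which role combinations are incompatible and flag any user holding one. The role names and user records below are illustrative assumptions.

```python
# Hypothetical segregation-of-duties check: flag users who hold role
# combinations defined as incompatible, e.g., a developer with direct
# access to deploy into production. Role names are invented examples.

INCOMPATIBLE = [
    {"developer", "production_deploy"},
    {"payment_entry", "payment_approval"},
]

def sod_violations(user_roles):
    """Return (user, pair) for each incompatible role pair a user holds."""
    hits = []
    for user, roles in user_roles.items():
        for pair in INCOMPATIBLE:
            if pair <= roles:  # user holds every role in the pair
                hits.append((user, tuple(sorted(pair))))
    return hits

users = {
    "alice": {"developer", "production_deploy"},
    "bob": {"payment_entry"},
}
print(sod_violations(users))  # [('alice', ('developer', 'production_deploy'))]
```

The set-subset test (`pair <= roles`) makes the policy table the single place where incompatible duties are defined, which mirrors the report's point that segregation should be driven by formal, documented procedures.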
Further, another agency had not adequately implemented segregation of duties controls for IT and financial management personnel with access to financial systems across several platforms and environments. Without adequate segregation of duties, agencies increase the risk that erroneous or fraudulent actions will occur, improper program changes will be implemented, and computer resources will be damaged or destroyed. In the event of an act of nature, fire, accident, sabotage, or other disruption, an essential element in preparing for the loss of operational capabilities is having an up-to-date, detailed, and fully tested continuity of operations plan. This plan should cover all key functions, including assessing an agency’s information technology and identifying resources, minimizing potential damage and interruption, developing and documenting the plan, and testing it and making necessary adjustments. If continuity of operations controls are faulty, even relatively minor interruptions can result in lost or incorrectly processed data, which can lead to financial losses, expensive recovery efforts, and inaccurate or incomplete mission-critical information. Eighteen agencies had weaknesses reported in continuity of operations practices for their agencies in fiscal years 2014 and 2013. Specifically, in 2014, 16 agencies did not have a comprehensive contingency plan. For example, 1 agency’s contingency plans had not been updated to reflect changes in the system boundaries, roles and responsibilities, and lessons learned from testing contingency plans at alternate processing and storage sites. Additionally, 15 agencies had not regularly tested their contingency plans. For example, 1 agency had not annually tested contingency plans for 10 of its 16 systems. 
Until agencies address identified weaknesses in their continuity of operations plans and tests of these plans, they may not be able to recover their systems in a successful and timely manner when service disruptions occur. An underlying cause for information security weaknesses identified at federal agencies is that they have not yet fully or effectively implemented an agency-wide information security program to help them manage their security process. An agency-wide security program, as required by FISMA 2002, provides a framework for assessing and managing risk, including developing and implementing security policies and procedures, conducting security awareness training, monitoring the adequacy of the entity's computer-related controls through security tests and evaluations, and implementing remedial actions as appropriate. Without a well-designed program, security controls may be inadequate; responsibilities may be unclear, misunderstood, and improperly implemented; and controls may be inconsistently applied. Such conditions may lead to insufficient protection of sensitive or critical resources. In fiscal year 2014, 23 agencies had weaknesses reported in security management, compared with 24 in fiscal year 2013. In one example, an agency had not fully developed and implemented components of its agency-wide information security risk management program that met FISMA's requirements. Specifically, the agency had established an enterprise risk management framework; however, security risks had not been fully communicated to data centers, regional offices, and medical facilities. In another example, an agency did not have effective procedures for testing and evaluating controls because the procedures did not prescribe effective tests of authentication controls. Until agencies fully resolve identified deficiencies in their agency-wide information security programs, they will continue to face significant challenges in protecting their information and systems.
Over the last several years, we and agency inspectors general have made hundreds of recommendations to agencies aimed at improving their implementation of information security controls. These recommendations identify actions for agencies to take in protecting their information and systems. For example, we and inspectors general have made recommendations for agencies to correct weaknesses in controls intended to prevent, limit, and detect unauthorized access to computer resources, such as controls for protecting system boundaries, identifying and authenticating users, authorizing users to access systems, encrypting sensitive data, and auditing and monitoring activity on their systems. We have also made recommendations for agencies to implement their information security programs and protect the privacy of PII held on their systems. However, many agencies continue to have weaknesses in implementing these controls, in part because many of these recommendations remain unimplemented. Until federal agencies take actions to implement the recommendations made by us and the inspectors general, federal systems and information, as well as sensitive personal information about the public, will be at an increased risk of compromise from cyber-based attacks and other threats. Due to the increase in cybersecurity threats, the federal government has initiated or continued several efforts to protect federal information and information systems. The White House, OMB, and federal agencies have launched several government-wide efforts that are intended to enhance information security at federal agencies. These key efforts are discussed below. Cybersecurity Cross-Agency Priority goals: Initiated in 2012, the cybersecurity Cross-Agency Priority (CAP) goals are an effort intended to focus federal agencies' cybersecurity activity on the most effective controls.
For fiscal years 2013 and 2014, these goals included:

Trusted Internet Connections: Trusted Internet Connections (TIC) aims to improve the federal government's security posture by consolidating external telecommunication connections and establishing a set of baseline security capabilities through enhanced monitoring and situational awareness of all external network connections. OMB established fiscal year 2014 targets of 95 percent for TIC consolidation and 100 percent for implementing TIC capabilities. OMB reported that agencies had achieved 95 and 92 percent implementation, respectively, for these TIC goals in fiscal year 2014.

Continuous monitoring: Intended to provide near real-time security status and remediation, increasing visibility into system operations and helping security personnel make risk management decisions based on increased situational awareness. OMB established a fiscal year 2014 target of 95 percent implementation for continuous monitoring and reported that the agencies had achieved 92 percent implementation.

Strong authentication: Intended to increase the use of federal smartcard credentials, such as personal identity verification and common access cards, that provide multifactor authentication and digital signature and encryption capabilities. Strong authentication can provide a higher level of assurance when authorizing users' access to federal information systems. OMB established a fiscal year 2014 target of 75 percent implementation for strong authentication. In its report on fiscal year 2014 FISMA implementation, OMB indicated that the 24 federal agencies covered by the CFO Act had achieved a combined 72 percent implementation of these requirements, but this number dropped to only 41 percent for the 23 civilian agencies when excluding DOD.

In fiscal year 2015, the administration added anti-phishing and malware defense as a new CAP goal.
The National Cybersecurity Protection System (NCPS): NCPS is a system of systems (also known as EINSTEIN) that is intended to deliver a range of capabilities, including intrusion detection and prevention, analytics, and information sharing. The goal of EINSTEIN is to provide the federal government with an early warning system, improved situational awareness of intrusion threats, and near real-time identification and prevention of malicious cyber activity. This system was created in 2003 by US-CERT to help reduce and prevent computer network vulnerabilities across the federal government. The capabilities of NCPS are to include network "flow," intrusion detection, and intrusion prevention functions, as described in table 2. The Continuous Diagnostics and Mitigation (CDM) Program: CDM is intended to provide federal departments and agencies with a basic set of tools to support the continuous monitoring of information systems. According to DHS, the program is intended to provide federal departments and agencies with capabilities and tools that identify cybersecurity risks on an ongoing basis, prioritize these risks based on potential impacts, and enable cybersecurity personnel to mitigate the most significant problems first. These tools include sensors that perform automated searches for known cyber vulnerabilities, the results of which feed into a dashboard that alerts network managers. These alerts can be prioritized, enabling agencies to allocate resources based on risk. DHS, in partnership with the General Services Administration, has established a government-wide acquisition vehicle that allows federal agencies (as well as state, local, and tribal governmental agencies) to acquire CDM tools at discounted rates. The National Initiative for Cybersecurity Education (NICE): NICE is an interagency effort coordinated by NIST to improve cybersecurity education, including efforts directed at training, public awareness, and the federal cybersecurity workforce.
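The CDM idea of prioritizing scan findings so the most significant problems are mitigated first can be sketched as a simple ranking. The scoring fields below are assumptions for illustration, not the CDM dashboard's actual schema.

```python
# Sketch of the prioritization idea behind CDM dashboards: rank findings
# from automated vulnerability scans so the most significant problems
# surface first. Severity and host counts are invented examples.

def prioritize(findings):
    """Sort findings by severity (descending), then by hosts affected."""
    return sorted(findings, key=lambda f: (-f["severity"], -f["hosts"]))

findings = [
    {"id": "V-101", "severity": 5, "hosts": 12},
    {"id": "V-102", "severity": 9, "hosts": 3},
    {"id": "V-103", "severity": 9, "hosts": 40},
]
print([f["id"] for f in prioritize(findings)])  # ['V-103', 'V-102', 'V-101']
```

Real programs would weigh additional factors, such as exploit availability and the criticality of the affected system, but the principle of risk-based allocation of remediation effort is the same.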
This initiative is intended to support the federal government's evolving strategy for education, awareness, and workforce planning and provide a comprehensive cybersecurity education program. To meet NICE objectives, efforts were structured into the following four components:

1. National cybersecurity awareness: This component included public service campaigns to promote cybersecurity and responsible use of the Internet and to make cybersecurity appealing to children, as well as a popular educational and career pursuit for older students.

2. Formal cybersecurity education: Education programs encompassing K-12, higher education, and vocational programs related to cybersecurity were included in this component, which focused on the science, technology, engineering, and math disciplines to provide a pipeline of skilled workers for the private sector and government.

3. Federal cybersecurity workforce structure: This component addressed personnel management functions, including the definition of cybersecurity jobs in the federal government and the skills and competencies they required. Also included were new strategies to ensure that federal agencies can attract, recruit, and retain skilled employees to accomplish cybersecurity missions.

4. Cybersecurity workforce training and professional development: Cybersecurity training and professional development for federal government civilian, military, and contractor personnel were included in this component.

The Federal Risk and Authorization Management Program (FedRAMP): FedRAMP is a government-wide program intended to provide a standardized approach to security assessment, authorization, and continuous monitoring for cloud computing products and services. FedRAMP defines a set of controls for low and moderate impact-level systems based on the baseline controls in NIST SP 800-53 Revision 4 and includes control enhancements that address the unique security requirements of cloud computing.
All federal agencies must meet FedRAMP requirements when using cloud services, and cloud service providers must implement the FedRAMP security requirements in their cloud environments. In addition, cloud service providers must hire a FedRAMP-approved third-party assessment organization to perform an independent assessment of the cloud system and provide a security assessment package for review. The package is then reviewed by the FedRAMP Joint Authorization Board, which may grant a provisional authorization. Federal agencies can leverage cloud service provider authorization packages when granting an agency authority to operate; this reuse is intended to save time and money. After the cloud provider has received a FedRAMP authorization from the Joint Authorization Board or the agency, it must implement a continuous monitoring capability to ensure that the cloud system maintains an acceptable risk posture. The Cyber and National Security Team (E-Gov Cyber): OMB created the Cyber and National Security Team, called the E-Gov Cyber Unit, to strengthen federal cybersecurity through targeted oversight and policy issuance. The unit and its partners, the National Security Council, DHS, and NIST, are to oversee agency and government-wide cybersecurity programs and to oversee and coordinate the federal response to major cyber incidents and vulnerabilities. OMB reported that the unit found that more than half of the incidents occurring at federal agencies could have been prevented by strong authentication. In addition, the unit intends to monitor the implementation of critical DHS programs such as NCPS and CDM. The 30-Day Cybersecurity Sprint: In June 2015, in response to the OPM security breaches and to improve federal cybersecurity and protect systems against evolving threats, the Federal Chief Information Officer launched the 30-day Cybersecurity Sprint.
As part of this effort, the Federal Chief Information Officer instructed federal agencies to immediately take a number of steps to further protect federal information and assets and to improve the resilience of federal networks. Specifically, federal agencies were to:

Immediately deploy indicators provided by DHS regarding priority threat actor techniques, tactics, and procedures to scan systems and check logs. Agencies were to inform DHS immediately if indicators returned evidence of malicious cyber activity.

Patch critical vulnerabilities without delay. The vast majority of cyber intrusions exploit well-known vulnerabilities that are easy to identify and correct. Agencies were to take immediate action on the DHS vulnerability scan reports they receive each week and report to OMB and DHS on progress and challenges within 30 days.

Tighten policies and practices for privileged users. To the greatest extent possible, agencies were to minimize the number of privileged users; limit the functions that can be performed when using privileged accounts; limit the duration that privileged users can be logged in; limit the privileged functions that can be performed using remote access; and ensure that privileged user activities are logged and that such logs are reviewed regularly. Agencies were to report to OMB and DHS on progress and challenges within 30 days.

Dramatically accelerate implementation of multi-factor authentication, especially for privileged users. Intruders can easily steal or guess usernames and passwords and use them to gain access to federal networks, systems, and data. Requiring the use of a personal identity verification card or an alternative form of multi-factor authentication can significantly reduce the risk of adversaries penetrating federal networks and systems. Agencies were to report to OMB and DHS on progress and challenges in implementing these enhanced security requirements within 30 days.
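Two of the sprint actions above, minimizing and reviewing privileged users and accelerating multi-factor authentication for them, can be sketched as a single inventory check. The account records and field names below are illustrative assumptions, not any agency's actual reporting format.

```python
# Illustrative check reflecting the sprint actions above: identify
# privileged accounts that lack multi-factor authentication (MFA).
# Account records and field names are invented for the example.

def privileged_without_mfa(accounts):
    """Return names of privileged accounts not protected by MFA."""
    return [a["name"] for a in accounts if a["privileged"] and not a["mfa"]]

accounts = [
    {"name": "sysadmin1", "privileged": True, "mfa": True},
    {"name": "dba2", "privileged": True, "mfa": False},
    {"name": "user3", "privileged": False, "mfa": False},
]
print(privileged_without_mfa(accounts))  # ['dba2']
```

A report like this gives an agency both of the figures the sprint asked for: the size of the privileged-user population and how much of it still relies on single-factor credentials.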
In addition to providing guidance to the agencies, the Federal Chief Information Officer established the Cybersecurity Sprint Team to lead a review of the federal government's cybersecurity policies, procedures, and practices. According to OMB, the team is composed of OMB's E-Gov Cyber and National Security Unit, the National Security Council Cybersecurity Directorate, DHS, and DOD. At the end of the review, the Federal Chief Information Officer is to create and operationalize a set of action plans and strategies to further address critical cybersecurity priorities and recommend a federal civilian cybersecurity strategy. Key principles of the strategy are to include:

Protecting data: Better protect data at rest and in transit.

Improving situational awareness: Improve indication and warning.

Increasing cybersecurity proficiency: Ensure a robust capacity to recruit and retain cybersecurity personnel.

Increasing awareness: Improve overall risk awareness by all users.

Standardizing and automating processes: Decrease the time needed to manage configurations and patch vulnerabilities.

Controlling, containing, and recovering from incidents: Contain malware proliferation, privilege escalation, and lateral movement; quickly identify and resolve events and incidents.

Strengthening systems life-cycle security: Increase the inherent security of platforms by buying more secure systems and retiring legacy systems in a timely manner.

Reducing attack surfaces: Decrease the complexity and number of things defenders need to protect.

Successful implementation of these government-wide efforts will be a key step toward improving cybersecurity at federal agencies. The extent of agencies' implementation of FISMA 2002 requirements for establishing and maintaining an information security program from fiscal year 2013 to fiscal year 2014 varied.
For example, according to the reports by the inspectors general of the 24 CFO Act agencies, the number of agencies implementing risk management activities and documenting policies and procedures increased while the number of agencies planning for security, providing security training, and testing controls decreased. In addition, agency inspectors general, NIST, and OMB, with support from DHS, continued to address their responsibilities under FISMA 2002, but opportunities remain for improving FISMA reporting. FISMA 2002 required that agencies periodically assess the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems. These risk assessments help determine whether controls are in place to remediate or mitigate risk to the agency. NIST has issued several guides for managing risk. According to NIST’s Guide for Applying the Risk Management Framework to Federal Information Systems, risk management is addressed at the organization level, the mission and business process level, and the information system level. Risks are addressed from an organizational perspective with the development of, among other things, risk management policies, procedures, and strategy. The risk decisions made at the organizational level are to guide the entire risk management program. In addition, the activities for the risks that are addressed at the mission and business process levels include, among other things, defining and prioritizing the agency’s mission and business processes and developing an organization-wide information protection strategy. 
There are various risk management activities for the risks that are addressed at the information system level, including categorizing organizational information systems, allocating security controls to organizational information systems, and managing the selection, implementation, assessment, authorization, and ongoing monitoring of security controls. For fiscal years 2014 and 2013, inspectors general reported that 12 agencies had addressed risk from an organizational perspective. In fiscal year 2014, inspectors general reported that 16 of 24 agencies had addressed risk from a mission or business perspective, compared to 14 in fiscal year 2013. According to inspectors general, for fiscal years 2013 and 2014, 16 agencies had addressed risk from an information system perspective. Figure 4 shows examples of agencies' implementation of risk management program elements for fiscal years 2013 and 2014. However, work by the inspectors general revealed weaknesses in risk management. According to OMB, inspectors general at seven agencies reported that their agency did not have a risk management program in place. The inspector general for one agency reported that, although the agency had implemented a risk governance structure, it had not fully identified or mitigated enterprise-wide risks with appropriate risk mitigation strategies. Another inspector general reported that its agency did not have a current risk assessment for three of the seven systems in its sample. Managing risk is at the center of an effective information security program; without effective risk management, agencies may not be fully aware of the risks to essential computing resources and may not be able to make informed decisions about needed security protections.
FISMA 2002 required agencies to develop, document, and implement policies and procedures that are based on risk assessments; cost-effectively reduce information security risks to an acceptable level; ensure that information security is addressed throughout the life cycle of each agency's information system; and ensure compliance with FISMA 2002 requirements, OMB policies and procedures, minimally acceptable system configuration requirements, and any other applicable requirements. In fiscal years 2014 and 2013, most agency inspectors general reported that their agency had documented policies and procedures that were consistent with federal guidelines and requirements. Specifically, the number of agencies that documented policies and procedures increased in 8 of 11 categories and remained the same in 3 categories, since one inspector general did not report on these. Table 3 summarizes agencies' performance for fiscal years 2013 and 2014. In our prior work, we have also identified weaknesses in agencies' policies and procedures for information security. In fiscal year 2014, we reported that six agencies we reviewed had not fully developed comprehensive policies and procedures for incident response. For example, only two of the six selected agencies had fully implemented policies that addressed roles, responsibilities, and levels of authority for incident response. Similarly, we reported that several agencies had not established policies and procedures to oversee or assess the security of contractor systems. Further, we found that one agency component's mainframe security policy did not address who can administer the security software configurations that control access to mainframe programs. We recommended that these agencies develop and update policies and procedures for these areas. The agencies generally concurred with our recommendations.
Until all agencies properly document and implement policies and procedures, they may not be able to effectively reduce risks to their information and information systems, and the information security practices that are driven by these policies and procedures may be applied inconsistently. FISMA 2002 required agencies' information security programs to include plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate. According to NIST, the purpose of a system security plan is to provide an overview of the security requirements of the system and describe the controls in place or planned for meeting those requirements. The first step in the system security planning process is to categorize the system based on the impact to agency operations, assets, and personnel should the confidentiality, integrity, and availability of the agency's information and information systems be compromised. This categorization is then used to determine the appropriate security controls needed for each system. Another key step is selecting a baseline of security controls for each system and documenting those controls in the security plan. In addition, NIST recommends that the plan be reviewed and updated at least annually. According to NIST, the security authorization package documents the results of the security control assessment and provides the authorizing official with essential information needed to make a risk-based decision on whether to authorize operation of an information system or a designated set of common controls. The package contains a security plan, security assessment report, and plan of action and milestones (POA&M).
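The categorization step described above follows the "high-water mark" approach of FIPS 199: a system's overall impact level is the highest of its confidentiality, integrity, and availability impact levels. A minimal sketch of that rule:

```python
# Sketch of FIPS 199 security categorization: the overall impact level
# for a system is the highest (high-water mark) of the impact levels
# assigned for confidentiality, integrity, and availability.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

def categorize(confidentiality, integrity, availability):
    """Return the overall impact level for a system."""
    return max((confidentiality, integrity, availability), key=LEVELS.get)

print(categorize("low", "moderate", "low"))   # moderate
print(categorize("high", "low", "moderate"))  # high
```

The resulting level is what drives the next step the report describes: selecting the low, moderate, or high baseline of security controls for the system.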
DHS’s fiscal year 2014 reporting instructions request inspectors general to report on their agencies’ implementation of certain program attributes, such as whether (1) the agency has categorized information systems; (2) its security authorization package contained a system security plan, security assessment report, POA&M, and accreditation boundaries; and (3) it has selected and implemented a tailored set of baseline security controls. In fiscal year 2014, inspectors general at 18 agencies reported that their agency had categorized information systems in accordance with federal policies, a decrease from fiscal year 2013, in which 19 inspectors general reported that their agency had categorized its systems. In addition, fewer agencies selected an appropriately tailored set of baseline security controls: in fiscal year 2014, 15 inspectors general stated that their agency had appropriately selected a baseline of security controls, compared with 16 in fiscal year 2013. Similarly, in fiscal year 2014, 13 inspectors general reported that their agency had implemented a tailored set of baseline security controls, another decrease from fiscal year 2013, in which 14 agencies were reported to have implemented such controls. For fiscal year 2014, according to the inspectors general, 15 agencies had completed a security authorization package that contained a system security plan; 8 had not completed one; and 1 inspector general responded that the question was “not applicable.” This is a decrease from fiscal year 2013, in which 17 agencies had included such a security authorization package. In addition, inspectors general at 11 agencies reported that their agency had not always completed or properly updated its security plans. For example, a component of 1 agency had not completed one or more key elements of its system security plan, such as defining the system’s accreditation boundary. 
Further, at another agency, five systems had been placed into production without a system security plan. Until agencies appropriately develop and update their system security plans, officials will not be aware of system security requirements or whether controls are in place. FISMA 2002 required agencies to provide security awareness training to personnel, including contractors and other users of information systems that support the operations and assets of the agency. Training is intended to inform agency personnel of the information security risks associated with their activities and their responsibilities in complying with agency policies and procedures designed to reduce these risks. FISMA 2002 also required agencies to train and oversee personnel who have significant information security responsibilities. Providing training to agency personnel is critical to securing information and systems because people are one of the weakest links when securing systems and networks. For fiscal year 2014, fewer agencies reported that at least 90 percent of their users had received security awareness training. The chief information officers for 22 agencies reported that they had provided annual security awareness training to 90 percent or more of their network users, which was a decrease from fiscal year 2013, when all 24 agencies reported that they had provided such training. Agency inspectors general reported similar results. For fiscal year 2014, inspectors general for 20 agencies reported that their agency had established a security awareness and training program, which was a decrease from fiscal year 2013, in which 21 agencies had established one. Similarly, they reported that fewer agencies had identified and tracked the status of security awareness training. 
Specifically, inspectors general for 16 agencies reported that their agency had identified and tracked the status of security awareness training in fiscal year 2014, a decrease from fiscal year 2013, in which 19 agencies had identified and tracked such training. For fiscal year 2014, the percentage of personnel with significant security responsibilities who received training decreased from the previous year. In February 2015, OMB reported that, for fiscal year 2014, the 24 agencies provided training to an average of 80 percent of personnel who have significant security responsibilities, which reflects a decrease from the 92 percent reported for fiscal year 2013. Without effective security awareness training, agency personnel may not have a basic understanding of information security requirements to protect the systems they use. In addition, personnel who did not take specialized training may lack the knowledge, skills, and abilities consistent with their roles to protect the confidentiality, integrity, and availability of the information housed within the information systems to which they are assigned. FISMA 2002 required that federal agencies periodically test and evaluate the effectiveness of their information security policies, procedures, and practices as part of implementing an agency-wide security program. This testing is to be performed with a frequency depending on risk, but no less than annually. Testing should include management, operational, and technical controls for every system identified in the agency’s required inventory of major systems. This type of oversight is a fundamental element that demonstrates management’s commitment to the security program, reminds employees of their roles and responsibilities, and identifies and mitigates areas of noncompliance and ineffectiveness. Although control tests and evaluations may encourage compliance with security policies, the full benefits are not achieved unless the results are used to improve security. 
For fiscal year 2014, inspectors general reported that fewer agencies had tested and evaluated security controls using appropriate assessment procedures to determine the extent to which the controls had been implemented correctly, operated as intended, and produced the desired outcome with respect to meeting the security requirements for the system. In fiscal year 2014, 16 inspectors general reported that their agency had assessed security controls, while 17 agencies had assessed such controls in fiscal year 2013. As part of government-wide efforts to improve the testing of controls, agencies have begun taking steps to implement continuous monitoring of their systems. According to NIST, the goal of continuous monitoring is to transform the otherwise static test and evaluation process into a dynamic risk mitigation program that provides essential, near real-time security status and remediation. NIST defines information system continuous monitoring as maintaining ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions. Since March 2012, continuous monitoring has also been designated as a cross-agency priority area for improving federal cybersecurity. Although OMB reported overall increases in the 24 agencies’ continuous monitoring of controls (from 81 percent in fiscal year 2013 to 92 percent in fiscal year 2014), inspectors general reported that fewer agencies had continuously monitored controls for their systems. For example, for fiscal year 2014, 12 inspectors general stated that their agency had ensured information security controls were being monitored on an ongoing basis, including assessing control effectiveness, documenting changes to the system or its environment of operation, conducting a security impact analysis of the associated changes, and reporting the security state of the system to designated organizational officials. 
This is a decrease from fiscal year 2013, when 14 agencies had monitored security controls on an ongoing basis. If controls are not effectively tested or properly monitored, agencies will have less assurance that they have been implemented correctly, are operating as intended, and are producing the desired outcome with respect to meeting the security requirements of the agency. FISMA 2002 required agencies to plan, implement, evaluate, and document remedial actions to address any deficiencies in their information security policies, procedures, and practices. In addition, NIST guidance states that federal agencies should develop a POA&M for information systems to document the agency’s planned remedial actions to correct weaknesses or deficiencies noted during the assessment of the security controls and to reduce or eliminate known vulnerabilities in the system. Furthermore, the POA&M should identify, among other things, the resources required to accomplish the tasks and scheduled completion dates for the milestones. According to OMB, remediation plans assist agencies in identifying, assessing, prioritizing, and monitoring the progress of corrective efforts for security weaknesses found in programs and systems. For fiscal year 2014, the number of agencies implementing certain elements of their remediation programs increased or remained the same. For fiscal year 2014, inspectors general reported that 16 agencies had tracked, prioritized, and remediated weaknesses, compared to 15 for fiscal year 2013. In addition, 11 agencies had established and adhered to milestone remediation dates in both fiscal years. Further, 16 agencies were reported as having an effective remedial action plan in fiscal year 2014, an increase from fiscal year 2013, in which 14 reported having such a plan. 
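The POA&M elements NIST requires (the weakness, the resources needed, and a scheduled completion date for each milestone) lend themselves to the kind of timeliness check auditors apply when flagging past-due milestones. The following is a hypothetical sketch, not any agency's actual tracking system; the record fields and example entries are invented for illustration:

```python
from datetime import date

# Hypothetical POA&M milestone records, each with the NIST-required elements:
# the weakness, the resources required, and a scheduled completion date.
poam = [
    {"weakness": "Unpatched web server", "resources": "2 FTEs", "due": date(2014, 3, 1)},
    {"weakness": "Stale user accounts", "resources": "1 FTE", "due": date(2015, 6, 30)},
]

def overdue_milestones(entries, as_of):
    """Return the POA&M entries whose scheduled completion date has passed."""
    return [e for e in entries if e["due"] < as_of]

# Example: checked at the end of fiscal year 2014, one milestone is past due.
late = overdue_milestones(poam, date(2014, 9, 30))
print([e["weakness"] for e in late])  # -> ['Unpatched web server']
```

A check of this kind is how a reviewer would surface milestones like the 517 an inspector general found to be a year past due.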
For fiscal year 2014, 16 inspectors general reported that their agency had ensured resources and ownership were provided for correcting weaknesses, which is also an increase from 14 in fiscal year 2013. Figure 5 shows agencies’ remediation program efforts for fiscal years 2013 to 2014. In spite of these increases, inspectors general reported that, for fiscal year 2014, 19 agencies had established a remediation program, which was a slight decrease from fiscal year 2013, in which 20 inspectors general reported such a program. In addition, 18 agencies had shortcomings in remediating information security weaknesses in fiscal year 2014. For example, according to the inspector general, components of one agency had inaccurate milestones, did not identify resources to mitigate weaknesses, and had delays in resolving the weaknesses. The inspector general of that agency also identified 517 milestones that were past due by 12 months. Without a sound remediation process, agencies have limited assurance that information security weaknesses are being corrected and addressed in a timely manner. FISMA 2002 required that agency security programs include procedures for detecting, reporting, and responding to security incidents and that agencies report incidents to US-CERT. According to NIST, incident response capabilities are necessary for rapidly detecting an incident, minimizing loss and destruction, mitigating the weaknesses that were exploited, and restoring computing services. From fiscal year 2013 to fiscal year 2014, agencies’ incident response efforts varied. For fiscal year 2014, inspectors general reported that 21 agencies had established an incident response program, which is a slight decrease from fiscal year 2013, in which 22 agencies had established a program. The number of agencies that had routinely reported security incidents to US-CERT within the established time frame also decreased from fiscal year 2013 to fiscal year 2014. 
Specifically, inspectors general reported that, for fiscal year 2014, 13 agencies had reported incidents to US-CERT within the established time frame, which was a decrease from fiscal year 2013, in which 17 agencies had reported in a timely manner. Similarly, the number of agencies responding to and resolving incidents also decreased. Specifically, inspectors general reported that, in fiscal year 2014, 15 agencies had responded to and resolved incidents in a timely manner, a decrease from fiscal year 2013, in which 19 agencies had done so. Similar to fiscal year 2013, in fiscal year 2014, according to the inspectors general, 18 agencies had sufficient incident monitoring and detection coverage. However, inspectors general reported that, in fiscal year 2014, 19 agencies reported incidents to law enforcement, an improvement from fiscal year 2013, in which 18 agencies had done so. Table 4 summarizes agency incident reporting and response practices for fiscal years 2013 and 2014. Also, 19 agencies had performed a comprehensive analysis, validation, and documentation of incidents in fiscal year 2014, an improvement of 1 agency over the 18 reported in fiscal year 2013, according to the inspectors general. Effectively implementing a comprehensive incident detection, reporting, and response program can help agencies better protect their information and information systems from cyber attacks. FISMA 2002 required federal agencies to implement plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. According to NIST, contingency planning is part of overall information system continuity of operations planning, which fits into a much broader security and emergency management effort that includes, among other things, organizational and business process continuity and disaster recovery planning. 
These plans and procedures are essential steps in ensuring that agencies are adequately prepared to cope with the loss of operational capabilities due to a service disruption such as an act of nature, fire, accident, or sabotage. According to NIST, these plans should cover all key functions, including assessing an agency’s IT and identifying resources, minimizing potential damage and interruption, developing and documenting the plan, and testing it and making the necessary adjustments. Similar to fiscal year 2013, in fiscal year 2014, according to the inspectors general, 17 agencies had established a business continuity and disaster recovery program that was consistent with FISMA 2002 requirements, OMB policy, and applicable NIST guidelines. The number of agencies that had fully implemented certain key elements of their business continuity and disaster recovery programs decreased, according to the inspectors general. For example, 12 agencies had documented business continuity and disaster recovery plans, a decrease from fiscal year 2013, in which 18 agencies had documented such plans. The inspectors general also reported that several agencies lacked other important elements of a continuity of operations program in fiscal year 2014. For example, 10 agencies had not tested their disaster recovery and business continuity plans, and half of the agencies had not tested system-specific contingency plans. In addition, 7 agencies had not developed or tested contingency plans, trained employees for contingencies, or conducted contingency planning exercises. Further, inspectors general reported that 6 agencies had not established an alternate processing site for some systems, and 4 agencies had not backed up information in a timely manner. Weaknesses in continuity of operations could lessen the effectiveness of agencies’ efforts to successfully recover their systems in a timely manner after a service disruption occurs. 
FISMA 2002 required agencies to maintain and update annually an inventory of major information systems (systems) operated by the agency or under its control, which includes an identification of the interfaces between each system and all other systems or networks, including those not operated by or under the control of the agency. For fiscal years 2013 and 2014, OMB required agencies to report the number of agency and contractor systems by impact level. For fiscal year 2014, the 24 agencies reported a total of 9,906 systems, composed of 8,378 agency and 1,528 contractor systems. This represents a slight decrease in the total number of systems from fiscal year 2013, with the number of agency systems decreasing and the number of contractor systems increasing slightly. With respect to impact levels, the total number of low-impact systems decreased while all others, including the number of uncategorized systems, increased. Appendix III lists the number of systems by impact level for each agency; all agencies reported having moderate-impact systems, five agencies reported not having any high-impact systems, and one agency reported not having any low-impact systems. Table 5 shows the number of agency and contractor-operated systems by impact level in fiscal years 2013 and 2014. In fiscal years 2013 and 2014, OMB also requested that inspectors general report on agencies’ management of contractor systems. Inspectors general reported that, in fiscal year 2014, 14 agencies had obtained sufficient assurance that security controls of contractor-operated systems and services had been effectively implemented, compared to 13 in fiscal year 2013. In August 2014, we reported that five of six agencies we reviewed were inconsistent in overseeing assessments of contractors’ implementation of security controls, partly because the agencies had not documented security procedures for effectively overseeing contractor performance. 
We recommended that five of the six agencies develop procedures for the oversight of contractors. The five agencies generally agreed with the recommendations. Statutory requirements for the protection of personal privacy by federal agencies are primarily established by the Privacy Act of 1974 and the privacy provisions of the E-Government Act of 2002. FISMA 2002 also addressed the protection of personal information in the context of securing federal agency information and information systems. Beyond these laws, OMB and NIST have issued guidance to assist agencies with implementing federal privacy laws. Further, as part of the annual FISMA reporting process, agencies are required by OMB to report on their progress in implementing federal requirements for protecting the privacy of PII. The requirements include reporting on the implementation of privacy policies and procedures and whether a privacy impact assessment was conducted for systems containing PII. Agencies reported making progress in implementing federal privacy requirements. For fiscal years 2013 and 2014, according to information from senior agency privacy officials, all 24 agencies reported having written policies and processes for their privacy impact assessment practices. According to OMB, in fiscal year 2014, 95 percent of applicable systems reported by the 24 agencies also had an up-to-date privacy impact assessment. Each year, OMB requires agencies to report how much they spend on information security. From fiscal year 2010 to fiscal year 2014, the 24 agencies reported spending between $10.3 billion and $14.6 billion annually on cybersecurity, including $12.7 billion in fiscal year 2014, a 23 percent increase from fiscal year 2013 (see fig. 6). 
For fiscal years 2013 and 2014, agencies reported information security spending in areas that include (1) preventing malicious cyber activity; (2) detecting, analyzing, and mitigating intrusions; and (3) shaping the cybersecurity environment. The amounts the agencies reported spending in fiscal year 2014 in these three areas are shown in table 6. FISMA 2002 established NIST’s role of developing information security standards and guidelines for federal agencies, such as the Federal Information Processing Standards and the special publications in the 800-series for non-national security federal information systems, and assigned NIST specific responsibilities, including the development of (1) standards to be used by federal agencies to categorize information and information systems based on the objectives of providing appropriate levels of information security according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements (management, operational, and technical security controls) for information and information systems in each such category. To meet these responsibilities, NIST has continued providing information security guidelines and updates to existing publications. For example, in June 2014, NIST published Supplemental Guidance on Ongoing Authorization at the request of OMB. This white paper discusses the current set of NIST guidance and how it supports the concept of ongoing authorization. Additionally, in September 2014, NIST issued Special Publication 800-56B, Rev. 1: Recommendation for Pair-Wise Key-Establishment Schemes Using Integer Factorization Cryptography. This publication is intended to provide vendors with information for implementing encryption requirements according to FIPS 140-2. Table 7 lists the dates for FISMA-related publications that NIST plans to update and issue. 
FISMA 2002 required that agencies have an independent evaluation performed each year to evaluate the effectiveness of the agency’s information security program and practices. FISMA 2002 also required this evaluation to include (1) testing of the effectiveness of information security policies, procedures, and practices of a representative subset of the agency’s information systems and (2) an assessment of compliance with FISMA 2002 requirements, related information security policies, and procedures. For agencies with an inspector general, FISMA 2002 required that these evaluations be performed by the inspector general or an independent external auditor. Lastly, FISMA 2002 required that each year, agencies submit the results of these evaluations to OMB and that OMB summarize the results of the evaluations in its annual report to Congress. According to OMB, the metrics for inspectors general were designed to measure the effectiveness of agencies’ information security programs. OMB relies on the responses by inspectors general to gauge the effectiveness of information security program processes. Agency inspectors general identified weaknesses in agency information security programs and practices in fiscal years 2013 and 2014. They responded to most of the DHS-defined metrics for reporting on agency implementation of FISMA 2002’s requirements, and most also issued a detailed audit report discussing the results of their evaluation of agency policies, procedures, and practices. FISMA 2002 required that OMB, among other things, oversee and annually report to Congress on agencies’ implementation of information security policies, standards, and guidelines. To support its oversight responsibilities, OMB assigned responsibilities to DHS, including overseeing and assisting government efforts to provide adequate, risk- based, cost-effective cybersecurity. 
OMB and DHS have continued overseeing and assisting agencies with implementing and reporting on cybersecurity, including the following: CyberStat sessions: According to OMB, these sessions were held with agencies to ensure they are accountable for their cybersecurity posture and to assist them in developing a focused strategy for improving their information security. According to a DHS official, these sessions were held with eight agencies during fiscal year 2013 and four agencies during fiscal year 2014. Beginning in fiscal year 2015, OMB officials stated that these sessions will be held with agencies with high risk factors, as determined by cybersecurity performance and incident data. Cybersecurity metrics: Each year, OMB and DHS provide metrics to federal agencies and their inspectors general for preparing FISMA reports that DHS summarizes for OMB’s report to Congress. The metrics listed in the reporting guidance help to form the basis for information on agencies’ progress in implementing FISMA requirements and in determining whether agencies have met certain cybersecurity goals set by the current administration. Proactive scans of publicly facing agency networks: In October 2014, OMB instructed DHS and federal agencies to implement a process that allows DHS to conduct regular and proactive vulnerability scans of the publicly facing segments of the agencies’ networks. In addition, DHS is to provide federal agencies with specific results of the scans; offer additional risk and vulnerability assessment services at the request of individual agencies; and report to OMB on the identification and mitigation of risks and vulnerabilities across federal agencies’ information systems. According to a DHS official, the department began these scans in February 2015 and has been issuing more than 100 reports per week to federal departments and agencies. 
In addition, OMB satisfied its FISMA 2002 requirement to report annually to Congress, not later than March 1 of each year, on agencies’ implementation of the act. OMB transmitted its fiscal year 2014 report to Congress and the Comptroller General on February 27, 2015. The report highlighted improvements across the federal government, such as increases for CAP goals in continuous monitoring, strong authentication, and implementing TIC capabilities. Notwithstanding these improvements, agencies and their inspectors general could further benefit from improved guidance for reporting measures of performance, as described in the next section. FISMA 2002 specified that OMB, among its other responsibilities, is to develop policies, principles, standards, and guidelines on information security and report to Congress. Each year, OMB and DHS provide guidance to federal agencies and their inspectors general for preparing their FISMA reports and then summarize the information provided by the agencies and the inspectors general in OMB’s annual report to Congress. For fiscal year 2014 annual FISMA reporting, DHS requested that inspectors general assess their agency’s security program in 11 program components (e.g., continuous monitoring, configuration management, and security training). For 9 of the 11 program components, the inspector general is first asked to conclude whether the agency has established a program component that is consistent with FISMA 2002 requirements, OMB policy, and applicable NIST guidelines. Inspectors general are then asked subsequent questions as to whether the program components include certain attributes listed in the reporting instructions. These attributes consist of 5 to 16 additional questions, such as whether the agency has documented policies and procedures for that program component or has implemented controls related to that component. 
Inspectors general are asked to provide their overall assessment of each program component and to answer the individual attribute questions using “yes” or “no” responses. Our review of fiscal year 2014 responses by inspectors general revealed that the reporting guidance was not complete. The lack of appropriate guidance was illustrated by the inconsistent responses to questions supporting their overall evaluation for each of the 11 agency program components. For example, in fiscal year 2014, 19 inspectors general reported that their agency had implemented a continuous monitoring program. Seventeen of the 19 inspectors general reported that their agency’s continuous monitoring program included at least 4 of the 7 attributes or that the attribute was not applicable. However, two of the inspectors general reported their agency had implemented a continuous monitoring program, although those agencies had implemented only 2 of the 7 attributes required for the program area. Other examples we identified illustrate inconsistent inspector general interpretation in reporting. Fifteen of 24 inspectors general reported that their agency had a configuration management program in place and that the program included at least 5 (50 percent) of the 10 attributes or that they had not reviewed those attributes. However, 3 other inspectors general reported that their agency had not implemented a configuration management program, even though their program also included at least 5 (50 percent) of the 10 attributes. In addition, another inspector general responded that the program was in place, although only 2 of the 10 configuration management attributes were included in the agency’s program. In our follow-up with the inspectors general, three provided responses illustrating inconsistencies with how they interpreted the annual reporting guidance. 
Specifically, one pointed out that he based his overall top-level response of “yes” for the program areas on whether more than 50 percent of the attributes were in place at his agency. Another replied that, in addition to OMB and DHS guidance, his agency used an internal threshold of 70 percent for a “yes” answer and that 69 percent and below would result in a “no.” The third inspector general responded that he had reviewed five key elements for each component and then evaluated each of the 11 program components by determining whether (1) policies and procedures were in place, (2) controls were designed per policies and procedures, (3) controls were implemented, and (4) controls were operating as intended. These variations in how the guidance was interpreted suggest that additional information on how to incorporate the attributes into the overall conclusion could be valuable in ensuring consistent reporting. The reporting guidance asks inspectors general for an overall assessment of each program component but does not define criteria for inspectors general to provide a “yes” or “no” response on whether the program component is implemented. In addition, the guidance does not identify the extent (number or percent of attributes needed for a “yes”) to which the attributes should be factored into the overall assessment for each of the components. Therefore, based on our analysis, it appears that some inspectors general reached the same overall assessment but varied in how those attributes affected their rating. Without complete instructions, differing interpretations of the guidance may result in responses by inspectors general that are not always comparable for presenting a clear government-wide picture of agencies’ information security implementation. 
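The effect of undefined rating criteria can be made concrete: two inspectors general applying different internal thresholds to the same attribute results reach opposite overall conclusions. The sketch below is a hypothetical illustration; the 50 percent and 70 percent thresholds mirror the rules the inspectors general described, but the attribute counts are invented:

```python
# Hypothetical illustration of inconsistent "yes"/"no" program ratings when
# the guidance defines no criteria: each IG converts the share of attributes
# in place into an overall rating using a different internal threshold.
def overall_rating(attributes_in_place, total_attributes, threshold):
    """Return "yes" if the fraction of attributes in place meets the threshold."""
    return "yes" if attributes_in_place / total_attributes >= threshold else "no"

# Same agency result: 6 of 10 configuration management attributes in place.
print(overall_rating(6, 10, 0.50))  # IG using a 50 percent rule -> yes
print(overall_rating(6, 10, 0.70))  # IG using a 70 percent rule -> no
```

Identical underlying evidence thus produces divergent top-level answers, which is why the attribute-to-conclusion mapping needs to be specified in the reporting guidance itself.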
Clarifying reporting guidance to inspectors general for the program areas they evaluate would further enhance the quality and consistency of information reported on the government-wide status of federal agencies’ implementation of information security policies, procedures, and practices. Without consistent criteria for reporting, inspectors general may be providing Congress and other oversight bodies with uneven information on the extent to which federal agencies are effectively implementing security requirements. In the past, we have reported that performance information derived from FISMA reporting provides valuable information on the status and progress of agency efforts to implement effective security management programs, but that shortcomings in the reporting process needed to be addressed. For example, we previously recommended that OMB and DHS provide insight into agencies’ security programs by developing additional metrics for key security areas such as those for periodically assessing risk and developing subordinate security plans. We also recommended that metrics for FISMA reporting be developed to allow inspectors general to report on the effectiveness of agencies’ information security programs. OMB and DHS have not yet fully implemented these recommendations. Federal agencies’ information and systems remain at a high risk of unauthorized access, use, disclosure, modification, and disruption. These risks are illustrated by the wide array of cyber threats, an increasing number of cyber incidents, and breaches of PII occurring at federal agencies. Agencies also continue to experience weaknesses with effectively implementing security controls, such as those for access, configuration management, and segregation of duties. OMB and federal agencies have initiated actions intended to enhance information security at federal agencies. Nevertheless, persistent weaknesses at agencies and breaches of PII demonstrate the need for improved security. 
Until agencies correct longstanding control deficiencies and address the hundreds of recommendations that we and agency inspectors general have made, federal systems will remain at increased and unnecessary risk of attack or compromise. Federal agencies’ implementation of FISMA during fiscal years 2013 and 2014 was mixed. The number of agencies fully implementing components of their security programs increased for some elements, such as developing and documenting policies and procedures, but decreased in others, such as testing controls or providing security training, and varied in implementing incident response and reporting. During fiscal years 2013 and 2014, inspectors general continued to identify weaknesses with the processes agencies used for implementing components of their programs. As a result, agencies are not effectively implementing the risk-based activities necessary for an effective security program required under FISMA 2002 and continued under FISMA 2014. Although OMB and DHS have increased oversight and assistance to federal agencies in implementing and reporting on information security programs, inconsistencies remain in reporting by inspectors general. Some of these inconsistencies could be alleviated with revised guidance from OMB and DHS. Shortcomings in reporting could result in uneven information being provided to Congress and other oversight entities and limit their ability to compare the extent to which federal agencies are implementing information security programs. We recommend that the Director of the Office of Management and Budget, in consultation with the Secretary of Homeland Security, the Chief Information Officers Council, and the Council of the Inspectors General on Integrity and Efficiency, enhance reporting guidance to the inspectors general for all rating components of agency security programs, such as configuration management and risk management, so that the ratings will be consistent and comparable. 
We provided a draft of this report to OMB; DHS; the Departments of Commerce, State, and Treasury; General Services Administration; National Science Foundation; and the Social Security Administration. According to a representative from OMB, the agency generally concurred with our recommendation and provided these comments. During fiscal year 2015, OMB worked with DHS and the Intelligence Community to develop and refine the FY 2016 FISMA metrics. Additionally, OMB continued to work with DHS and the Intelligence Community and has worked with the Chief Information Officers Council and the Information Technology Committee for the Council of the Inspectors General on Integrity and Efficiency to improve the reporting process and enhance FISMA reporting guidance for the inspector general community, respectively. In written comments (reproduced in appendix IV), SSA’s Executive Counselor to the Commissioner stated that the agency takes a proactive approach to identifying and mitigating risk associated with access to its secure network. In e-mail responses, the audit liaisons for DHS and Commerce provided technical comments, which we have incorporated as appropriate. Officials from the Departments of State and Treasury, the General Services Administration, and the National Science Foundation responded that their agencies did not have any comments. We are sending copies of this report to the Director of the Office of Management and Budget, the Secretary of Homeland Security, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-6244 or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. 
Our objectives were to evaluate (1) the adequacy and effectiveness of federal agencies’ information security policies and procedures and (2) the extent to which federal agencies have implemented the requirements of the Federal Information Security Management Act (FISMA) of 2002. To assess the adequacy and effectiveness of agencies’ information security policies and practices, we reviewed and analyzed our, agency, and inspectors general information security-related reports that were issued from October 2013 through May 2015 and covered agencies’ fiscal years 2013 and 2014 security efforts. We reviewed and summarized weaknesses identified in these reports using the five major categories of information security general controls identified in our Federal Information System Controls Audit Manual: (1) access controls, (2) configuration management controls, (3) segregation of duties, (4) contingency planning, and (5) security management controls. In addition, we reviewed and analyzed financial and performance and accountability reports of the 24 major federal agencies covered by the Chief Financial Officers Act for fiscal years 2013 and 2014. To evaluate the extent to which the agencies have implemented FISMA’s requirements, we reviewed and analyzed the provisions of the 2002 act to identify agency, Office of Management and Budget (OMB), Department of Homeland Security (DHS), and National Institute of Standards and Technology (NIST) responsibilities for implementing, overseeing, and providing guidance for agency information security. We did not evaluate agencies’ implementation of the Federal Information Security Modernization Act of 2014 (FISMA 2014), but we compared it to the 2002 act’s requirements to identify revised responsibilities for OMB, DHS, and federal agencies. 
We also reviewed OMB and DHS’ annual FISMA reporting guidance, and OMB’s annual reports to Congress on fiscal years 2013 and 2014 FISMA implementation. In addition, we analyzed, categorized, and summarized the annual FISMA data submissions for fiscal years 2013 and 2014 by each agency’s chief information officer, inspector general, and senior agency official for privacy. To assess the reliability of the agency-submitted data we obtained via CyberScope, we reviewed FISMA reports that agencies provided to corroborate the data. In addition, we selected 6 agencies to gain an understanding of the quality of the processes in place to produce annual FISMA reports. To select these agencies, we sorted the 24 major agencies from highest to lowest using the total number of systems each agency had reported in fiscal year 2013; separated them into even categories of large, medium, and small agencies; and then selected the last 2 agencies from each category. These agencies were the Departments of Commerce, State, and the Treasury; the General Services Administration; the National Science Foundation; and the Social Security Administration. We conducted interviews and collected data from the inspectors general and agency officials from the selected agencies to determine the reliability of data submissions. As appropriate, we interviewed officials from OMB, DHS, and NIST. Based on this assessment, we determined that the data were sufficiently reliable for our work. We conducted this performance audit from December 2014 to September 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
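The agency-selection step described above (rank the 24 major agencies by reported system counts, split them into even thirds, and take the last 2 agencies from each third) can be expressed as a short sketch. The system counts below are invented for illustration; they are not the actual fiscal year 2013 data.

```python
# A minimal sketch of the selection methodology described above, using
# invented system counts in place of the actual fiscal year 2013 data.

def select_agencies(system_counts: dict, per_group: int = 2) -> list:
    """Sort agencies by reported systems (highest first), split them into
    three even groups (large, medium, small), and take the last agencies
    from each group."""
    ranked = sorted(system_counts, key=system_counts.get, reverse=True)
    group_size = len(ranked) // 3
    groups = [ranked[i * group_size:(i + 1) * group_size] for i in range(3)]
    return [agency for group in groups for agency in group[-per_group:]]

# 24 hypothetical agencies with distinct system counts.
counts = {f"Agency{i:02d}": 240 - 10 * i for i in range(24)}
selected = select_agencies(counts)  # last 2 agencies from each size group
```

With 24 agencies, each group holds 8, so the procedure yields 6 agencies in total, matching the 6 agencies named in the report.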
Appendix III: Number of Agency and Contractor-Operated Systems by Impact Level

The 24 major agencies are the Departments of Agriculture (USDA), Commerce, Defense (DOD), Education, Energy, Health and Human Services (HHS), Homeland Security (DHS), Housing and Urban Development (HUD), the Interior, Justice, Labor, State, Transportation, the Treasury, and Veterans Affairs (VA); the Environmental Protection Agency (EPA); General Services Administration (GSA); National Aeronautics and Space Administration (NASA); National Science Foundation (NSF); Nuclear Regulatory Commission (NRC); Office of Personnel Management (OPM); Small Business Administration (SBA); Social Security Administration (SSA); and the U.S. Agency for International Development (USAID).

In addition to the contacts named above, Larry Crosland (assistant director), Christopher Businsky, Rosanna Guerrero, Nancy Glover, Angel Ip, Fatima Jahan, Carlo Mozo, and Shaunyce Wallace made key contributions to this report.
Since 1997, GAO has designated federal information security as a government-wide high risk area, and in 2003 expanded this area to include computerized systems supporting the nation's critical infrastructure. In February 2015, in its high risk update, GAO further expanded this area to include protecting the privacy of personal information that is collected, maintained, and shared by both federal and nonfederal entities. FISMA required federal agencies to develop, document, and implement an agency-wide information security program. The act also charged OMB with overseeing agencies' implementation of security requirements. FISMA also included a provision for GAO to periodically report to Congress on (1) the adequacy and effectiveness of agencies' information security policies and practices and (2) agencies' implementation of FISMA requirements. GAO analyzed information security-related reports and data from 24 federal agencies, their inspectors general, and OMB; reviewed prior GAO work; examined documents from OMB and DHS; and spoke to agency officials. Persistent weaknesses at 24 federal agencies illustrate the challenges they face in effectively applying information security policies and practices. Most agencies continue to have weaknesses in (1) limiting, preventing, and detecting inappropriate access to computer resources; (2) managing the configuration of software and hardware; (3) segregating duties to ensure that a single individual does not have control over all key aspects of a computer-related operation; (4) planning for continuity of operations in the event of a disaster or disruption; and (5) implementing agency-wide security management programs that are critical to identifying control deficiencies, resolving problems, and managing risks on an ongoing basis (see fig.). 
These deficiencies place critical information and information systems used to support the operations, assets, and personnel of federal agencies at risk, and can impair agencies' efforts to fully implement effective information security programs. In prior reports, GAO and inspectors general have made hundreds of recommendations to agencies to address deficiencies in their information security controls and weaknesses in their programs, but many of these recommendations remain unimplemented. Federal agencies' implementation in fiscal years 2013 and 2014 of requirements set by the Federal Information Security Management Act of 2002 (FISMA) was mixed. For example, most agencies had developed and documented policies and procedures for managing risk, providing security training, and taking remedial actions, among other things. However, each agency's inspector general reported weaknesses in the processes used to implement FISMA requirements. In addition, to comply with FISMA's annual reporting requirements, the Office of Management and Budget (OMB) and the Department of Homeland Security (DHS) provide guidance to the inspectors general on conducting and reporting agency evaluations. Nevertheless, GAO found that this guidance was not always complete, leading to inconsistent application by the inspectors general. For example, because it did not include criteria for making overall assessments, inspectors general inconsistently reported agency security performance. GAO is recommending that OMB, in consultation with DHS and others, enhance security program reporting guidance to inspectors general so that the ratings of agency security performance will be consistent and comparable. OMB generally concurred with our recommendation.
The largest of the interagency contracting vehicles is the MAS program (also known as the Federal Supply Schedule or the schedules program). GSA directs and manages the MAS program. MACs and GWACs are also interagency contracts. Government buyers usually pay a fee for using other agencies’ GWACs, MACs, and schedule contracts. These fees, usually a percentage of the value of the procurement, are paid to the sponsoring agency and are expected to cover the costs of administering the contract. Along with using interagency contracts to leverage their buying power, a number of large departments—DOD and DHS in particular—are turning to enterprisewide contracts as well to acquire goods and services. Enterprisewide contracts are similar to interagency contracts in that they can leverage the purchasing power of the federal agency but generally do not allow purchases from the contract outside of the original acquiring activity. Enterprisewide contracting programs can be used to reduce contracting administrative overhead, provide information on agency spending, support strategic sourcing initiatives, and avoid the fees charged for using interagency contracts. All of these contracts are indefinite delivery/indefinite quantity (ID/IQ) contracts. ID/IQ contracts are established to buy goods and services when the exact times and exact quantities of future deliveries are not known at the time of award. Once the times and quantities are known, agencies place task and delivery orders against the contracts for goods and services. 
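The fee mechanics described above amount to simple percentage arithmetic on each order's value. The 0.75 percent rate in the sketch below is a hypothetical example, not an actual published fee schedule.

```python
# Illustration of the usage-fee mechanics described above: the buying agency
# pays the sponsoring agency a percentage of each order's value.
# The 0.75 percent default rate is hypothetical.

def usage_fee(order_value: float, fee_rate: float = 0.0075) -> float:
    """Fee paid to the sponsoring agency for an order placed on its contract."""
    return order_value * fee_rate

# A $1 million order under a 0.75 percent fee costs the buyer $7,500 in fees.
fee = usage_fee(1_000_000)
```

Fees of this kind are what agencies say they avoid by standing up their own enterprisewide contracts, at the cost of administering those contracts themselves.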
In fiscal year 2008, federal agencies spent at least $60 billion through GWACs, MACs, the MAS program, and enterprisewide contracts to buy goods and services to support their operations: about $46.8 billion was spent on the MAS program; about $5.3 billion was spent on GWACs; at least $2.5 billion was spent on MACs although the actual amount could be much higher; and at least $4.8 billion was spent on the three enterprisewide contracts we reviewed, although, like MACs, the actual amount spent on all enterprisewide contracts could be higher. Sales under the MAS program have been relatively flat in recent years, and obligations under GWACs have declined slightly in recent years. However, the total amount of money spent in fiscal year 2008 using the three enterprisewide contracting programs included in our review is approaching the amount spent for GWACs during the same period. In addition, as OMB recently reported, numerous agencies are planning to increase their use of enterprisewide contracts as a means of addressing the administration’s goal of reducing the amount agencies spend on contracting by 7 percent through fiscal year 2011. Nevertheless, GSA’s MAS program is still the primary governmentwide buying program aimed at helping the federal government leverage its significant buying power when buying commercial goods and services. As the largest interagency contracting program, the MAS program provides advantages to both federal agencies and vendors. Agencies, using the simplified methods of procurement of the schedules, can avoid the time, expenditures, and administrative costs of other methods. And vendors receive wider exposure for their commercial products and expend less effort in selling these products. 
Interagency and enterprisewide contracts should provide an advantage to government agencies when buying billions of dollars worth of goods and services, yet OMB and agencies lack reliable and comprehensive data to effectively leverage, manage, and oversee these contracts. More specifically:

- The total number of MACs and enterprisewide contracts currently approved and in use by agencies is unknown because the federal government’s official procurement database is not sufficient or reliable for identifying these contracts.

- Departments and agencies cite a variety of reasons to establish, justify, and use their own MACs and enterprisewide contracts rather than use other established interagency contracts—reasons that include avoiding fees paid for the use of other agencies’ contracts, gaining more control over procurements made by organizational components, and allowing for the use of cost reimbursement contracts.

- Concerns remain about contract duplication. Vendors and agency officials expressed concerns about duplication of effort among these contracts, and in our review we found that many of the same vendors provided similar products and services on many different contract vehicles. This could result in duplication of products and services being offered, increased costs to both the vendor and the government, and missed opportunities to leverage the government’s buying power.

- Limited governmentwide policy is in place for establishing and overseeing MACs and enterprisewide contracts. Recent legislation and OFPP initiatives are expected to strengthen oversight and management of MACs, but no similar initiatives are underway to strengthen oversight of enterprisewide contracts.

In April 2010, we made five recommendations to OMB to improve data, strengthen policy, and better coordinate agencies’ awards of MACs and enterprisewide contracts, and OMB concurred with all of our recommendations. 
Prior attempts by the acquisition community to identify interagency and enterprisewide contracts have not resulted in a reliable database useful for identifying or providing governmentwide oversight on those contracts. In 2006, OFPP started the Interagency Contracting Data Collection Initiative to identify and list the available GWACs, MACs, and enterprisewide contracts. However, the initiative was a one-time effort and has not been updated since. In conducting our review, we were not able to identify the universe of MACs and enterprisewide contracts because the data available in the official government contracting data system, the Federal Procurement Data System-Next Generation (FPDS-NG), were insufficient and unreliable. For instance, FPDS-NG includes a data field that is intended to identify GWACs but we found a number of instances where known GWACs were coded incorrectly. We also searched the system by contract number for MACs that we were aware of and found similar issues, with some contracts coded properly as MACs and some not. Despite its critical role, we have consistently reported on problems with FPDS-NG data quality over a number of years. Most of the senior procurement executives, acquisition officials, and vendors we spoke with as part of our review believed a publicly available source of information on these contracts is necessary. For example, senior procurement executives from DHS and DOD stressed the usefulness of a governmentwide clearinghouse of information on existing contracts. Agency officials we spoke with said that if agencies could easily find an existing contract, which they cannot do, they would avoid unnecessary administrative time to enter into a new contract, which they said could be significant. 
The report of the Acquisition Advisory Panel—often referred to as the SARA panel—previously noted some of these concerns, stating that too many choices without information related to the performance and management of these contracts make the cost-benefit analysis and market research needed to select an appropriate acquisition vehicle impossible. To improve the transparency of and data available on these contracts, we made three recommendations to OFPP:

1. Survey departments and agencies to update its 2006 data collection initiative to identify the universe of MACs and enterprisewide contracts in use and assess their utility for maximizing procurement resources across agencies.

2. Ensure that departments and agencies use the survey data to accurately record these contracts in FPDS-NG.

3. Assess the feasibility of establishing and maintaining a centralized database to provide sufficient information on GWACs, MACs, and enterprisewide contracts for contracting officers to use to conduct market research and make informed decisions on the availability of using existing contracts to meet agencies’ requirements.

Agencies cited several reasons for establishing their own MACs and enterprisewide contracts, including cost avoidance through lower prices, fewer fees compared to other vehicles, mission-specific requirements, and better control over the management of contracts. For example, the Army cited several reasons for establishing its MACs for information technology hardware and services in 2005 and 2006. The Army wanted to standardize its information technology contracts so each contract would include the required Army and DOD security parameters. According to the Army, GSA contracts do not automatically include these security requirements, and using a GSA contract would require adding these terms to every order. The Army also cited timeliness concerns with GSA contracts and GSA fees as reasons for establishing its own contracting vehicles. 
In 2005, DHS established the EAGLE and FirstSource contracting programs. Both involve enterprisewide contracts used for information technology products and services. Officials stated the main reason these programs were established was to avoid the fees associated with using other contract vehicles and save money through volume pricing. In addition, the programs centralized procurements for a wide array of mission needs among DHS’ many agencies. Furthermore, DHS officials stated they wanted to be able to coordinate the people managing the contracts, which did not happen when using GSA contracts. We found the same vendors on many different contract vehicles providing information technology goods or services, which may result in duplication of the goods and services being offered. Table 1 below shows that the top 10 GWAC vendors, based on sales to the government, offer their goods and services on a variety of government contracts that all provide information technology goods and services. For example, of the 13 different contract vehicles listed in Table 1, 5 of the 10 vendors were on 10 or more of them. Vendors and agency officials we met with expressed concerns about duplication of effort among the MACs, GWACs, and enterprisewide contracts across government. A number of vendors we spoke with told us they offer similar products and services on multiple contract vehicles and that the effort required to be on multiple contracts results in extra costs to the vendor, which they pass on to the government through the prices they offer. The vendors stated that the additional cost of being on multiple contract vehicles ranged from $10,000 to $1,000,000 due to increased bid and proposal and administrative costs. Interestingly, we found one vendor offering the exact same goods and services on both its GSA schedule and NASA’s GWAC and offering lower prices on the GWAC. 
Another vendor stated that getting on multiple contract vehicles can be cost-prohibitive for small businesses and forces them to not bid on a proposal or to collaborate with a larger business in order to be on a contract vehicle. Government procurement officials expressed additional concerns. For example, an official from OFPP has stated that such duplication of effort only complicates the problem of an already strained acquisition workforce. The GSA Federal Acquisition Service Deputy Commissioner stated that while the agencies cite GSA fees as a reason for creating their own vehicles, agencies fail to consider the duplication of effort and cost of doing these procurements. Federal agencies operate with limited governmentwide policy that addresses the establishment and use of MACs and enterprisewide contracts. Federal regulations generally provide that an agency should consider existing contracts to determine if they might meet its needs. The six federal agencies and the three military departments we reviewed have policies that require approval and review for acquisition planning involving large dollar amount contracts which would generally include the establishment of MACs and enterprisewide contracts. The review process varies from agency to agency. For example, an official from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics told us that any new DOD contract estimated at over $100 million would be required to go through a review process to ensure that no other contract exists that could fulfill the new requirement. As another example, DHS requires that the senior procurement executive approve the establishment of each enterprisewide contract. In contrast, GWAC creation and management have governmentwide oversight, as OFPP exercises statutory approval authority regarding establishment of a GWAC. 
The senior procurement executives we spoke with had mixed views on the proper role of OFPP in providing clarification and oversight to agencies establishing their own contract vehicles. For example, Army senior acquisition officials representing the senior procurement official told us that the policy on interagency contracting is not cohesive. In their view, OFPP should provide policy and guidance that agencies would be required to follow. In contrast, the Senior Procurement Executive for the Department of the Navy pointed to agency-specific circumstances or requirements that create uncertainty about the utility of broad OFPP guidance. Furthermore, agencies have issued guidance encouraging the use of enterprisewide contracts rather than using interagency contracts. For example, DOD guidance advises that contracting officers consider the use of internal DOD contract vehicles to satisfy requirements for services prior to placing an order against another agency’s contract vehicle. Moreover, OMB recently reported that 20 of the 24 largest procuring activities are planning on reducing procurement spending by using enterprise contracting to leverage their buying power, as part of the administration’s goal of reducing contract spending by 7 percent over the next 2 years. To provide a more coordinated approach in awarding MACs and enterprisewide contracts, we recommended that OFPP take steps to establish a policy and procedural framework in conjunction with agencies for establishing, approving, and reporting on new MACs and enterprisewide contracts on an ongoing basis. The framework should stress the need for a consistent approach to leveraging governmentwide buying power while allowing agencies to continue to use their statutory authorities for buying goods and services. Recent legislation and OFPP initiatives are expected to strengthen oversight and management of MACs, but these initiatives do not address enterprisewide contracts. 
The 2009 National Defense Authorization Act required, 1 year after its enactment, that the FAR be amended to require that any MAC entered into by an executive agency after the amendment’s effective date be supported by a business case analysis. The business case is to include an analysis of all direct and indirect costs to the federal government of awarding and administering a contract and the impact it would have on the ability of the federal government to leverage its buying power. However, the Act is silent on what steps an agency should take to examine the effect a new contract will have on the ability of the government to leverage its buying power. Additionally, the Act does not address similar requirements for enterprisewide contracts. Under the Act, the pending FAR rule relating to this legislation was required to be issued by October 15, 2009; however, the rule was still in progress as of June 11, 2010. A business case analysis approach for MACs has the potential to provide a consistent governmentwide approach to awarding MACs as was pointed out by the SARA panel. The panel noted that the OFPP review and approval process for GWACs could serve as a good business model for approving MACs. Using the GWAC process as a model, the full business case analysis as described by the SARA panel would need to include measures to track direct and indirect costs associated with operating a MAC. It would also include a discussion about the purpose and scope, and the amount and source of demand. Further, the business case would need to identify the benefit to the government along with metrics to measure this benefit. 
We recommended that as OFPP develops the pending FAR rule to implement the business case analysis requirement above, it ensure that departments and agencies complete a comprehensive business case analysis as described by the SARA panel, and include a requirement to address potential duplication with existing contracts, before new MACs and enterprisewide contracts are established. Our work identified a number of challenges GSA faces in effectively managing the MAS program, the federal government’s largest interagency contracting program. More specifically, GSA:

- Lacks transactional data about its customers’ use of MAS contracts, which would provide GSA insight to facilitate more effective management of the program;

- Makes limited use of selected pricing tools, making it difficult for GSA to determine whether the program achieves its goal of obtaining the best prices for customers and taxpayers; and

- Uses a decentralized management structure for the MAS program in conjunction with deficient program assessment tools, which create obstacles for effective program management.

In April 2010, we made a number of recommendations to GSA to improve MAS program management and pricing, with which GSA concurred. GSA lacks data about the use of the MAS program by customer agencies that it could use to determine how well the MAS program meets its customers’ needs and to help its customers obtain the best prices in using MAS contracts. GSA officials told us that because agency customers generally bypass GSA and place their orders directly with MAS vendors, they lack data on the orders placed under MAS contracts; as a result, GSA also lacks data on the actual prices paid relative to the MAS contract prices. While GSA does have a spend analysis reporting tool through its GSA Advantage system that provides agencies with sales and statistical data on their orders, it accounts for a very small percentage of overall MAS program sales, thus restricting the amount of data available. 
There are two drawbacks to the lack of available transactional data on the goods and services ordered under the MAS program and the prices paid:

- The lack of data hinders GSA’s ability to evaluate program performance and manage the program strategically. Several GSA officials acknowledged that it is difficult for GSA to know whether the MAS program meets its customers’ needs without data on who uses MAS contracts and what they are buying. The GSA Inspector General has recommended that GSA take steps to collect these data to use in evaluating customer buying patterns and competition at the order level in order to adopt a more strategic management approach. We have made similar observations in prior reports going back several decades.

- The lack of data could limit the ability of GSA and its customers to achieve the best prices through the MAS program. Some GSA officials informed us that they could possibly use transactional data to negotiate better prices on MAS contracts. Several agency contracting officers we spoke with cited the benefits of having additional transactional data on MAS orders for improving their negotiating position when buying goods and services and for increasing visibility over the purchases their respective agencies make. In addition, a number of the senior acquisition officials at agencies in our review said that they considered the prices on MAS contracts to be too high, and without additional data from GSA, it was difficult to see the value in the MAS program and the prices that GSA negotiates.

GSA officials told us that they have initiated a process improvement initiative to collect more transactional data in the future, as they make improvements to information systems that support the MAS program. However, this initiative is currently in its early stages. 
We recommended that GSA take steps to collect transactional data on MAS orders and prices paid and provide this information to contract negotiators and customer agencies, potentially through the expanded use of existing electronic tools or through a pilot data collection initiative for selected schedules. GSA uses several tools and controls in the contract award and administration process to obtain and maintain the best prices for its contracts. These tools include pre-award audits of MAS contracts by the GSA Inspector General, clearance panel reviews of contract negotiation objectives, and Procurement Management Reviews. However, GSA applies these tools to only a small number of contracts, which hinders its ability to determine whether it achieves the program’s goal of obtaining the best prices. For example, the GSA Inspector General performs pre-award audits of MAS contracts, which enable contract negotiators to verify that vendor-supplied pricing information is accurate, complete, and current before contract award. These audits can also result in lower prices for MAS customers by identifying opportunities for GSA to negotiate more favorable price discounts prior to award. From fiscal year 2004 through 2008, the GSA Inspector General identified almost $4 billion in potential cost avoidance through pre-award audits. However, we found that GSA could be missing additional opportunities for cost savings on MAS contracts by not targeting for review more contracts that are eligible for audit. While GSA guidance instructs contract negotiators to request audit assistance, as appropriate, for new contract offers and extensions when a contract’s estimated sales exceed $25 million for the 5-year contract period, more than 250 contracts that exceeded this threshold were not selected for audit for the 2-year period of 2009 through 2011 because of resource constraints. 
In addition, the 145 contracts that were selected for audit represent only 2 percent of the total award dollars for all MAS contracts. GSA uses other tools to improve the quality of contract negotiations, but we found that their effectiveness was limited by incomplete implementation and a narrow scope. GSA established a prenegotiation clearance panel process to ensure the quality of its most significant contract negotiations by reviewing a contract’s negotiation objectives, with an emphasis on pricing, prior to award for contracts that meet certain defined dollar thresholds. However, we found several instances where clearance panel reviews were not held for contracts that met these thresholds, and GSA officials said that they do not check whether contracts that met the appropriate threshold received a panel review, limiting the effectiveness of this tool. GSA has begun updating its prenegotiation clearance panel guidance to address this issue. GSA also conducts Procurement Management Reviews (PMR) to assess contracts’ compliance with statutory requirements and internal policy and guidance. However, GSA selects only a small number of contracts for review and, at the time of our fieldwork, did not use a risk-based selection methodology, preventing it from deriving trends from the review findings. A subsequent update to the PMR methodology, which focuses on selecting a statistical sample of contracts for review, could address this issue. We recommended that GSA, in coordination with its Inspector General, target the use of pre-award audits to cover more contracts that meet the audit threshold. In addition, we recommended that GSA fully implement the process it has initiated to ensure that vendors who require a prenegotiation clearance panel receive a panel review. The decentralized management structure for the MAS program and shortcomings in assessment tools also create MAS program management challenges. 
GSA established the MAS Program Office in July 2008 to provide a structure for consistent implementation of the MAS program. The program office’s charter provides it with broad responsibility for MAS program policies and strategy. Responsibility for managing the operation of individual schedules, however, resides with nine different acquisition centers under three business portfolios. None of these business portfolios or the MAS acquisition centers that award and manage MAS contracts are under the direct management of the MAS Program Office. In addition, the program office’s charter does not specifically provide it with direct oversight of the business portfolios’ and acquisition centers’ implementation of the MAS program. GSA officials and program stakeholders we spoke with had varying opinions about this management structure; some noted that the program is still not managed in a coordinated way and that a lack of communication and consistency among MAS acquisition centers impairs the consistent implementation of policies across the program and the sharing of information between business portfolios. The GSA Inspector General has expressed similar concerns, noting in a recent report that a lack of clearly defined responsibilities within the new Federal Acquisition Service (FAS) organization has harmed national oversight of the MAS program and may have affected the sharing of best practices between acquisition centers. We also found that performance measures were inconsistent across the GSA organizations that manage MAS contracts, including inconsistent emphasis on the competitiveness of pricing, making it difficult to have a programwide perspective on MAS program performance. Finally, GSA’s MAS customer satisfaction survey has had a response rate of 1 percent or less in recent years, which limits its utility as a means for evaluating program performance. 
We recommended that GSA clarify and strengthen the MAS Program Office’s charter and authority so that it has clear roles and responsibilities to consistently implement guidance, policies, and best practices across GSA’s acquisition centers; establish more consistent performance measures across the MAS program, including measures for pricing; and take steps to increase the MAS customer survey response rate. Billions of taxpayer dollars flow through interagency and enterprisewide contracts; however, the federal government does not have a clear and comprehensive view of who is using these contracts and whether they are being used in an efficient and effective manner—one that minimizes duplication and leverages the government’s buying power by taking a more strategic approach to buying goods and services. Long-standing problems with the quality of Federal Procurement Data System-Next Generation (FPDS-NG) data on these contracts and the lack of a consistent governmentwide policy on the creation, use, and costs of awarding and administering some of these contracts hamper the government’s ability to realize the strategic value of these contracts. Furthermore, departments and agencies may be unknowingly contracting for the same goods and services across a myriad of contracts—MACs, GWACs, the MAS program, and enterprisewide contracts. In addition, GSA’s shortcomings in data, program assessment tools, and use of pricing tools create oversight challenges that prevent GSA from managing the MAS program more strategically and from knowing whether the MAS program provides the best prices. In agreeing with our recommendations, OMB and GSA recognized the importance of addressing these problems, but until they are resolved, we believe the government will continue to miss opportunities to minimize duplication and take advantage of the government’s buying power through more efficient and more strategic contracting. Madam Chairman, this concludes my prepared statement. 
I would be happy to respond to any questions you or the other members of the subcommittee may have at this time. For further information regarding this testimony, please contact John Needham at (202) 512-4841 or needhamjk1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this product. Individuals making key contributions to this statement were James Fuquay (Assistant Director); Marie Ahearn; Lauren Heft; and Russ Reiter. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Agencies can use several different types of contracts to leverage the government's buying power for goods and services. These include interagency contracts--where one agency uses another's contract for its own needs--such as the General Services Administration (GSA) and the Department of Veterans Affairs multiple award schedule (MAS) contracts, multiagency contracts (MAC) for a wide range of goods and services, and governmentwide acquisition contracts (GWAC) for information technology. Agencies spent at least $60 billion in fiscal year 2008 through these contracts and similar single-agency enterprisewide contracts. GAO was asked to testify on the management and oversight of interagency contracts and how the government can ensure that interagency contracting is efficient and transparent. GAO's testimony is based on its recent report, Contracting Strategies: Data and Oversight Problems Hamper Opportunities to Leverage Value of Interagency and Enterprisewide Contracts (GAO-10-367, April 2010). In that report, GAO made recommendations to the Office of Management and Budget (OMB) to strengthen policy, improve data, and better coordinate agencies' awards of MACs and enterprisewide contracts, and to GSA to improve MAS program pricing and management. Both agencies concurred with GAO's recommendations. Interagency and enterprisewide contracts should provide an advantage to government agencies when buying billions of dollars' worth of goods and services, yet OMB and agencies lack reliable and comprehensive data to effectively leverage, manage, and oversee these contracts. More specifically, the total number of MACs and enterprisewide contracts currently approved and in use by agencies is unknown because the federal government's official procurement database is not sufficient or reliable for identifying these contracts. 
Departments and agencies cite a variety of reasons to establish, justify, and use their own MACs and enterprisewide contracts rather than use other established interagency contracts--reasons that include avoiding fees paid for the use of other agencies' contracts, gaining more control over procurements made by organizational components, and allowing for the use of cost-reimbursement contracts. However, concerns remain about contract duplication--under these conditions, many of the same vendors provide similar products and services on multiple contracts, which increases costs to both vendors and the government and can result in missed opportunities to leverage the government's buying power. Furthermore, limited governmentwide policy is in place for establishing and overseeing MACs and enterprisewide contracts. Recent legislation and initiatives by OMB's Office of Federal Procurement Policy are expected to strengthen oversight and management of MACs, but no initiatives are under way to strengthen approval and oversight of enterprisewide contracts. GSA faces a number of challenges in effectively managing the MAS program, the federal government's largest interagency contracting program. GSA lacks data on orders placed under MAS contracts that it could use to help determine how well the MAS program meets its customers' needs and to help its customers obtain the best prices in using MAS contracts. In addition, GSA makes limited use of selected pricing tools, such as pre-award audits of MAS contracts, which makes it difficult for GSA to determine whether the program achieves its goal of obtaining the best prices for customers and taxpayers. 
In 2008, GSA established a program office with broad responsibility for MAS program policy and strategy, but the program continues to operate under a decentralized management structure that some program stakeholders believe has impaired the consistent implementation of policies across the program and the sharing of information among the business portfolios. In addition, performance measures were inconsistent across the GSA organizations that manage MAS contracts, including inconsistent emphasis on pricing, making it difficult to have a programwide perspective on MAS program performance. Finally, GSA's MAS customer satisfaction survey has had a response rate of 1 percent or less in recent years, which limits its utility as a means for evaluating program performance.
Leadership in agencies across the federal government is essential to providing the accountable, committed, consistent, and sustained attention needed to address human capital and related organizational transformation issues. Leaders must not only embrace reform but also integrate the human capital function into their agencies’ core planning and business activities. Senior executive leadership is especially critical today as the federal government faces significant reform challenges. OPM’s 2006 Federal Human Capital Survey (FHCS) results showed that the government needs to establish a more effective leadership corps. For example, slightly less than half of the employees responding to the survey reported a high level of respect for their senior leaders or satisfaction with the information they receive from management on what is going on in the organization. Similarly, only 38 percent of respondents agreed or strongly agreed with the statement that leaders in their organization generate high levels of motivation and commitment in the workforce. This represents little change from the 2004 survey, when 37 percent of respondents had positive responses to this question. However, a majority of respondents, 58 percent, agreed or strongly agreed that managers communicate the goals and priorities of the organization. This level of response is essentially the same as in the 2004 survey, when 59 percent of respondents provided a positive response to this item. OPM plays a key role in fostering and guiding improvements in all areas of strategic human capital management in the executive branch. As part of its key leadership role, OPM can assist—and, as appropriate, require—the building of the infrastructures within agencies needed to successfully implement and sustain human capital reforms and related initiatives. OPM can do this in part by encouraging continuous improvement and providing appropriate assistance to support agencies’ efforts. 
For example, OPM has exerted human capital leadership through the Human Capital Scorecard of the President’s Management Agenda to assist agencies in improving the strategic management of their human capital. Also, OPM developed the governmentwide FHCS to help agencies and OPM better understand specific and governmentwide workforce management conditions and practices in the areas of leadership, performance culture, and talent. Most recently, OPM began a television campaign to promote federal employment and has placed greater focus on succession planning to respond to the forthcoming federal retirement wave. However, in leading governmentwide human capital reform, OPM has itself faced challenges in its capacity to assist, guide, and certify agencies’ readiness to implement reforms. We recently reported that OPM has made commendable efforts in transforming itself from less of a rulemaker, enforcer, and independent agent to more of a consultant, toolmaker, and strategic partner in leading and supporting executive agencies’ human capital management systems. We also reported on OPM’s leadership of transformation efforts. Using the new senior executive performance-based pay system and other recent human capital reform initiatives as a model for understanding OPM’s capacity to lead and implement future human capital reforms, we identified seven key lessons learned: (1) ensure internal OPM capacity to lead and implement reform, (2) ensure that executive branch agencies’ infrastructures support reform, (3) collaborate with the Chief Human Capital Officers (CHCO) Council, (4) develop clear and timely guidance, (5) share best practices, (6) solicit and incorporate feedback, and (7) track progress to ensure accountability. 
In addition to the lessons learned that can be applied to future human capital reforms, we recommended, among other things, that OPM (1) improve its capacity for future reforms by reexamining its own agencywide skills and (2) address issues specific to senior executive pay systems, such as sharing best practices and tracking progress toward goals. OPM has said that it has made progress toward achieving its operational and strategic goals. Equally important is OPM’s leadership in federal workforce diversity and its oversight of merit system principles. In our review of how OPM and the Equal Employment Opportunity Commission (EEOC) carry out their mutually shared responsibilities for helping to assure a fair, inclusive, and nondiscriminatory federal workplace, we found limited coordination between the two agencies on policy and oversight matters. The lack of a strategic partnership between the two agencies and an insufficient understanding of their mutual roles, authority, and responsibilities can result in a lost opportunity to realize consistency, efficiency, and public value in federal equal employment opportunity and workplace diversity human capital management practices. We recommended that OPM and EEOC regularly coordinate in carrying out their responsibilities under the equal employment opportunity policy framework and seek opportunities to streamline similar reporting requirements. Both agencies acknowledged that their collaborative efforts could be strengthened but took exception to the recommendation to streamline requirements. We continue to believe in the value of more collaboration. As of August of last year, the two agencies had begun discussions on ways to increase coordination. Strategic human capital planning is the centerpiece of federal agencies’ efforts to transform their organizations to meet the governance challenges of the 21st century. 
Generally, strategic workforce planning addresses two critical needs: (1) aligning an organization’s human capital program with its current and emerging mission and programmatic goals and (2) developing long-term strategies for acquiring, developing, motivating, and retaining staff to achieve programmatic goals. The long-term fiscal outlook and challenges to governance in the 21st century are prompting fundamental reexaminations of what government does, how it does it, and who does it. Strategic human capital planning that is integrated with broader organizational strategic planning is critical to ensuring agencies have the talent they need for future challenges. An agency’s strategic human capital plan should address the demographic trends that the agency faces with its workforce, especially pending retirements. In 2006, OPM reported that approximately 60 percent of the government’s 1.6 million white-collar employees and 90 percent of about 6,000 federal executives will be eligible for retirement over the next 10 years. We have found that leading organizations go beyond a succession planning approach that focuses on simply replacing individuals and engage in broad, integrated succession planning and management efforts that focus on strengthening both current and future organizational capacity to obtain or develop the knowledge, skills, and abilities they need to meet their missions. For example, about one third of the Nuclear Regulatory Commission’s (NRC) workforce with mission-critical skills will be eligible to retire by 2010. At the same time, NRC’s workforce needs to expand because NRC expects to receive applications for new nuclear power reactors beginning in October 2007. 
Although there is room for further improvement, we found that NRC’s human capital planning framework is generally aligned with its strategic goals and coherently identifies the activities needed to achieve a diverse, skilled workforce and an infrastructure that fully supports the agency’s mission and goals. The agency’s framework included using its human capital authorities, developing a critical skills and gaps inventory tool, and using targets and measures to monitor the composition of its hires and separations. NRC has been effective in recruiting, developing, and retaining a critically skilled workforce, though it is unclear if this trend will continue in the next few years. We also have reported in recent years on a number of human capital issues that have hampered the Department of State’s ability to carry out U.S. foreign policy priorities and objectives, particularly at posts central to the war on terror. For example, the department initiated a number of efforts to improve its foreign language capabilities. However, it has not systematically evaluated the effectiveness of these efforts, and it continues to experience difficulties filling its language-designated positions with language proficient staff. We reported that these gaps in language proficiency can adversely affect the department’s ability to communicate with foreign audiences and execute critical duties. Another example of the government’s strategic human capital planning challenges involves its acquisition workforce. The government increasingly relies on contractors for roles and missions previously performed by government employees. Acquisition of products and services from contractors consumes about a quarter of discretionary spending governmentwide and is a key function in many federal agencies. We reported in 2003 that because of a more sophisticated business environment, most acquisition professionals would need to acquire a new set of skills focusing on business management. 
In a forum hosted by the Comptroller General in July 2006, acquisition experts reported that agency leaders have not recognized or elevated the importance of the acquisition profession within their organizations and that a strategic approach has not been taken across government or within agencies to focus on workforce challenges, such as creating the positive image essential to successfully recruiting and retaining a new generation of talented acquisition professionals. Faced with a workforce that is becoming more retirement-eligible and finding gaps in talent because of changes in the knowledge, skills, and competencies in occupations needed to meet their missions, agencies need to strengthen their efforts and use of available flexibilities to acquire, develop, motivate, and retain talent. A chronic complaint about the federal hiring process is its lengthy procedures, which put the federal government at a competitive disadvantage. In recent years, Congress, OPM, and agencies have taken significant steps to streamline the hiring process. For example, Congress has provided agencies with flexibilities such as the use of categorical rating and exemptions from the pay and classification restrictions of the General Schedule. OPM’s efforts included improvements to the USAJOBS Web site as well as other measures, such as job fairs and television commercials, to make the public more aware of the work federal employees do. OPM has also established a model 45-day hiring program—the time-to-hire period from the date a vacancy announcement closes to the date a job offer is extended. In addition, OPM has developed a Hiring Tool Kit on its Web site to help agencies improve their hiring processes. Moreover, OPM assists agencies on the use of student employment program flexibilities, which can expedite the hiring process and lead to noncompetitive conversion to permanent employment. 
Our work, however, has found that agencies’ use of the tools and flexibilities that Congress has provided has been uneven. OPM has made some progress in assessing how agencies are using their hiring flexibilities and authorities. For example, in January of this year, we reported that OPM began working with a contractor in 2005 to review hiring flexibilities and authorities to determine which ones are used and not used, who is using them, and when and how they are being used. As a result of its work with the contractor, OPM plans to survey eight CHCO Council agencies to evaluate the use and effectiveness of hiring authorities and flexibilities and to use the results to improve policies in these areas. This is a positive step on OPM’s part, as we continue to believe that more needs to be done to provide information to help agencies meet these human capital needs. Developing and maintaining workforces that reflect all segments of society and our nation’s diversity is a key part of agencies’ recruitment challenge. For example, the National Aeronautics and Space Administration (NASA) said it must compete with the private sector for the pool of Hispanics qualified for aerospace engineering positions, a pool often attracted by more lucrative employment opportunities in the private sector in more preferable locations. To address the situation, part of NASA’s strategy in recruiting Hispanics focuses on increasing educational attainment, beginning in kindergarten and continuing into college and graduate school, with the goal of attracting students into the NASA workforce and aerospace community. NASA centers sponsor, and its employees participate in, mentoring, tutoring, and other programs that encourage Hispanic and other students to pursue careers in science, engineering, technology, and math. NASA also developed a scholarship program designed to stimulate a continued interest in science, technology, engineering, and mathematics. Another example is the U.S. 
Air Force “Grow Your Own” aircraft maintenance program at three of its Texas bases. In partnership with vocational-technical schools, the program includes both on-the-job training and classroom education to provide a pool of trained candidates, including Hispanics, to replace retiring federal civilian aircraft maintenance workers. In addition to hiring, agencies need to have effective training and development programs to address gaps in the skills and competencies they have identified in their workforces. We have issued guidance that introduces a framework, consisting of a set of principles and key questions, that federal agencies can use to ensure that their training and development investments are targeted strategically and are not wasted on efforts that are irrelevant, duplicative, or ineffective. Training and developing new and current staff to fill new and different roles will play a crucial part in the federal government’s endeavors to meet its transformation challenges. Of some concern, however, is the 2006 FHCS finding that about half, or 54 percent, of respondents were very satisfied or satisfied with the training they receive on their current jobs, little change from the 2004 survey, in which 55 percent had positive responses to this question. High-performing organizations have found that to successfully transform themselves they must often fundamentally change their cultures so that they are more results-oriented, customer-focused, and collaborative in nature. An effective performance management system is critical to achieving this vital cultural transformation. Effective performance management systems are not merely used for once- or twice-yearly individual expectation setting and rating processes, but are tools to help the organization manage on a day-to-day basis. 
These systems are used to achieve results, accelerate change, and facilitate two-way communication throughout the year so that discussions about individual and organizational performance are integrated and ongoing. Moreover, leading public sector organizations both in the United States and abroad create a clear linkage—line of sight—between individual performance and organizational success and, thus, transform their cultures to be more results-oriented, customer-focused, and collaborative in nature. The government’s senior executives need to lead the way in transforming their agencies’ cultures. Credible performance management systems that align individual, team, and unit performance with organizational results can help manage and direct this process. The performance-based pay system that Congress established in November 2003 for members of the Senior Executive Service (SES) seeks to provide a clear and direct linkage between performance and pay for the government’s senior executives and is an important step toward governmentwide transformation. Under this performance-based pay system, senior executives no longer receive annual across-the-board pay increases or locality-pay adjustments. Executive branch agencies are now to base pay adjustments for senior executives on individual performance and contributions to agency performance through an evaluation of their skills, qualifications, or competencies, as well as their current responsibilities. Just as it has for senior executives, the federal government needs to fundamentally rethink its current approach to paying nonexecutive employees by better linking their pay to individual and organizational performance. Today’s jobs in knowledge-based organizations require a much broader array of tasks that may cross the narrow and rigid boundaries of the job classifications of the General Schedule system. 
Since being exempted from the General Schedule system, the Departments of Defense (DOD) and Homeland Security (DHS) have been moving toward occupational clusters and pay bands that better define occupations and facilitate movement toward performance management systems that create a line of sight between performance and organizational results, make meaningful differences in performance, and appropriately reward those who perform at the highest levels. The results of the 2006 FHCS underscore the need for serious attention to the way federal employees are assessed and compensated. About a third, or 34 percent, of the respondents strongly agreed or agreed with the statement that promotions in their work units are based on merit. When respondents were asked if pay raises in their work units depend on how well employees perform their jobs, only 22 percent responded positively. These responses are consistent with past survey results. Further, somewhat less than a third of the survey respondents had a positive response to the question about whether their leadership and management recognized differences in performance in a meaningful way. High-performing organizations have found that actively involving employees and key stakeholders, such as unions and other employee associations, helps gain ownership of new performance management systems and improves employees’ confidence and belief in the fairness of the systems. In addition, adequate safeguards need to be built into the performance management system to ensure fairness and to guard against abuse. Using safeguards, such as having an independent entity conduct reasonableness reviews of performance management decisions, can help allay concerns and build a fair, credible, and transparent system. In summary, Mr. Chairman, we need to continue to move forward with appropriate human capital reforms. But how reform is done, when it is done, and the basis on which it is done can make all the difference in whether such efforts are successful. 
Before implementing significant human capital reforms, especially reforms that create stronger links between employee pay and performance, executive branch agencies should follow a phased approach that meets a “show me” test. That is, each agency should be authorized to implement reform only after it has shown that it has met certain conditions, including having the institutional infrastructure to effectively and fairly implement any new authorities. Mr. Chairman and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you or others may have at this time. For further information regarding this statement, please contact J. Christopher Mihm, Managing Director, Strategic Issues, at (202) 512-6806, or mihmj@gao.gov. Individuals making key contributions to this testimony include Anthony P. Lofaro, Assistant Director; Ami J. Ballenger; Thomas M. Beall; Crystal M. Bernard; William Doherty; Karin K. Fangman; and Anthony R. Patterson.
The federal government is facing new and more complex challenges in the 21st century because of long-term fiscal constraints, changing demographics, evolving governance models, and other factors. Strategic human capital management, which remains on GAO's high-risk list, must be the centerpiece of any serious change management and transformation effort to meet these challenges. However, federal agencies do not consistently have the modern, effective, economical, and efficient human capital programs, policies, and procedures needed to succeed in their transformation efforts. In addition, the Office of Personnel Management (OPM) must have the capacity to successfully guide human capital transformations. This testimony, based on a large body of GAO work over many years, focuses on strategic human capital management challenges that many federal agencies continue to face. Federal agencies continue to face strategic human capital challenges in several areas. Leadership--Top leadership in agencies across the federal government must provide committed and inspired attention needed to address human capital and related organizational transformation issues. However, slightly less than half of respondents to the 2006 Federal Human Capital Survey reported a high level of respect for senior leaders while only 38 percent agreed or strongly agreed that leaders in their organizations generate high levels of motivation and commitment in the workforce. Strategic Human Capital Planning--Strategic human capital planning that is integrated with broader organizational strategic planning is critical to ensuring agencies have the talent they need for future challenges, especially as the federal government faces a retirement wave. Too often, agencies do not have the components of strategic human capital planning needed to address their current and emerging human capital challenges. 
Acquiring, Developing, and Retaining Talent--Faced with a workforce that is becoming more retirement eligible and finding gaps in talent, agencies need to strengthen their efforts and use of available flexibilities to acquire, develop, motivate, and retain talent. Agencies are not uniformly using available flexibilities to recruit and hire top talent and to address the current and emerging demographic challenges facing the government. Results-Oriented Organizational Culture--Leading organizations create a clear linkage--"line of sight"--between individual performance and organizational success and, thus, transform their cultures to be more results-oriented, customer-focused, and collaborative. However, in many cases, the federal government does not have these linkages and has not transformed how it classifies, compensates, develops, and motivates its employees to achieve maximum results within available resources and existing authorities. Agencies are facing strategic human capital challenges in a period of likely sustained budget constraints. Budget constraints will require agencies to plan their transformations more strategically, prioritize their needs, evaluate results, allocate their resources more carefully, and react to workforce challenges more expeditiously in order to achieve their missions economically, efficiently, and effectively. OPM will continue to play a key role in fostering and guiding strategic human capital management improvements in the executive branch and in helping agencies meet transformation challenges. Although making commendable efforts in transforming itself into more of a consultant, toolmaker, and strategic partner in leading and supporting agencies' human capital management systems, OPM has itself faced challenges in its capacity to assist, guide, and certify agencies' readiness to implement reforms.
Anesthesia services are generally administered by anesthesia practitioners, such as anesthesiologists and CRNAs. In 2004, there were approximately 42,000 anesthesiologists and 30,000 CRNAs in the United States. Anesthesiologists are physicians who have completed a bachelor’s degree, medical school, and an anesthesiology residency, typically 4 years in length. CRNAs are licensed as registered professional nurses and have completed a bachelor’s degree and a 2- or 3-year nurse anesthesia graduate program. In our prior work, we showed that physician specialists, who include anesthesiologists, tend to locate in metropolitan areas. Anesthesia services can be provided in several ways: by anesthesiologists alone, by anesthesiologists working with CRNAs or other practitioners, or by CRNAs alone. In 2004, proportionally more anesthesia services provided to Medicare beneficiaries were provided by anesthesiologists working as the sole anesthesia practitioner and by anesthesiologists working with another practitioner, such as a CRNA, compared to the proportion of anesthesia services provided by CRNAs as the sole anesthesia practitioner. CRNAs can directly bill Medicare for the provision of anesthesia services. In order to receive Medicare payment for anesthesia services, CRNAs generally are required to practice under the supervision of a physician or an anesthesiologist, except in states that have obtained an exemption from this requirement from CMS. As of May 2007, CMS reports that 14 states had requested and obtained this exemption, which allows CRNAs to practice independently without physician supervision in a variety of inpatient and outpatient settings. Anesthesiologists derive approximately 28 percent of their income from Medicare; Medicare patients make up approximately 35 percent of CRNAs’ patient mix. 
In the Omnibus Budget Reconciliation Act of 1989, Congress required the establishment of a national Medicare physician fee schedule, which sets payment rates for services provided by physicians and other practitioners. Under the Medicare physician fee schedule, Medicare payments for anesthesia services are generally the lesser of the actual charge for the service or the anesthesia fee schedule amount. Payments for anesthesia services are subject to the same annual updates as all other services paid under the physician fee schedule. However, Medicare payments for anesthesia services are calculated differently than payments for other services covered by the physician fee schedule. Specifically, Medicare fee schedule payments for anesthesia services are calculated using both “base” and “time” units. The relative complexity of an anesthesia service is measured by base units; the more activities that are involved, the more base units Medicare assigns. Time is measured continuously, beginning when the anesthesia practitioner starts preparing the patient for services and ending when the patient may be safely placed in postoperative care; it is expressed in 15-minute units, with fractions of time units rounded to one decimal place. The sum of the base and time units is converted into a dollar payment amount by multiplying it by an anesthesia service-specific conversion factor, which also accounts for regional differences in the cost of providing services. As such, each Medicare payment locality has a unique anesthesia conversion factor assigned by CMS. The calculation of the Medicare payment for an anesthesia service associated with a lens surgery—the most common anesthesia service provided to Medicare beneficiaries in 2004—performed by an anesthesiologist or a CRNA working without another anesthesia practitioner is shown in figure 1. 
With certain exceptions, Medicare payments for anesthesia services provided by anesthesiologists and CRNAs are equal. For illustrative purposes, we assumed that the service was provided in the Connecticut payment locality and took 21 minutes to perform. In 2004, the total Medicare payment for this service would have been $99.31, which was equal to the product of the anesthesia service conversion factor specific to the locality ($18.39) and the sum of the base and time units associated with the anesthesia service (5.4 total units). In contrast, Medicare payments for other physician services are calculated using relative value units (RVUs) that correspond to the different resources required to provide physician services. The RVUs are each adjusted to account for geographic differences in the cost of providing services, summed, and then multiplied by a general fee schedule conversion factor, which is applicable across all Medicare payment localities. Physicians who bill Medicare for services can accept Medicare’s payment as payment in full (with the exception of the ability to bill a Medicare beneficiary for 20 percent coinsurance plus any unmet deductible); this is known as accepting assignment. Alternatively, they may bill a Medicare beneficiary for the difference between Medicare’s payment and its limiting charge; this is known as balance billing. High rates of assignment may serve as an indicator of physicians’ willingness to serve Medicare beneficiaries. In April 2004, 99.4 percent of the anesthesia services provided by anesthesiologists to Medicare beneficiaries were provided by anesthesiologists who accepted Medicare payment as payment in full. The anesthesiologists’ assignment rate for anesthesia services was comparable to rates for other hospital-based specialists, such as pathologists (99.4 percent) and radiologists (99.6 percent), and was higher than the rate for all other physicians (98.8 percent). 
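The base-and-time calculation illustrated above reduces to a few lines. In the sketch below, the split of the 5.4 total units into 4 base units and 1.4 time units (21 minutes ÷ 15) is an assumption consistent with the figures in the example, not a value taken directly from figure 1:

```python
def anesthesia_payment(base_units, minutes, conversion_factor):
    """Medicare anesthesia payment: (base units + time units) x locality conversion factor."""
    # Time units are 15-minute increments, with fractions of time
    # units rounded to one decimal place.
    time_units = round(minutes / 15, 1)
    return round(conversion_factor * (base_units + time_units), 2)

# Connecticut-locality lens surgery example from the report:
# 21 minutes and an $18.39 conversion factor; the 4 base units are
# an assumed split of the 5.4 total units.
print(anesthesia_payment(4, 21, 18.39))  # 99.31
```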
In addition to anesthesia services, anesthesiologists and CRNAs can also provide other nonanesthesia types of physician services covered by Medicare. Payments for these other physician services—which can include medical services such as office visits, and procedures such as pain management services—represented approximately 31 percent of anesthesiologists’ and 2 percent of CRNAs’ revenue from Medicare in 2004. Because payment for these services is determined by a different formula than anesthesia services, a significant portion of these Medicare payments is closer to private payment levels for the same services, in contrast to the difference in payments for anesthesia services. According to a MedPAC-sponsored analysis, the average difference between Medicare and private payments for medical services such as office visits and for procedures provided in 2001 was 5 percent and 25 percent, respectively. Most private payers, like Medicare, determine payments for anesthesia services using base units, time units, and anesthesia-specific conversion factors. Unlike the Medicare program, however, private payers can set their fees in response to market forces such as managed care prevalence and the extent of competition among providers. For example, private anesthesia conversion factors are generally negotiated between payers and anesthesia practitioners. In addition, some private payers use different methods to determine time units, such as rounding up fractional time units to the next whole number or using 10-minute increments for each time unit, which can result in higher anesthesia payments. When setting payment rates, some private payers also allow higher payments for certain patient-related factors such as extremes in age. 
In our prior work we found that private payments for physician services, excluding anesthesia and some other services, differed by about 100 percent between the lowest- and the highest-priced metropolitan areas and were responsive to market forces, such as regional differences in the extent of competition among hospitals and health maintenance organizations’ (HMOs) ability to leverage prices. For example, we found that areas with less competition and lower levels of HMO price leverage had higher payments than areas with more competition and greater levels of HMO price leverage. We have also reported that because private payers can adjust their payment levels to account for market forces, their payment levels vary more than Medicare payments across geographic areas. We found that average Medicare payments for a set of seven anesthesia services provided by anesthesiologists alone were lower than average private payments in 41 Medicare payment localities in 2004, and ranged, on average, from 51 percent lower to 77 percent lower than private payments (see fig. 2). For all 41 payment localities, Medicare payments were lower than private payments by an average of 67 percent. In 2004, the average Medicare payment for a set of seven anesthesia services was $216, and the average private payment for the same set of anesthesia services was $658. Medicare payments varied less than private payments across the 41 payment localities. In 2004, average Medicare payments for the set of seven anesthesia services ranged from $177 to $303 across the 41 payment localities, a range of 71 percent. In contrast, average private payments for the same set of seven anesthesia services in that same year ranged from $472 to over $1,300 across these localities, a range of 177 percent. 
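The range figures cited here follow from a simple spread calculation relative to the lowest locality average; a brief sketch, using the reported 2004 Medicare range for the seven-service set:

```python
def range_percent(lowest, highest):
    # Spread of average payments across localities, expressed as a
    # percentage of the lowest average payment
    return 100 * (highest - lowest) / lowest

# Medicare averages for the seven anesthesia services ranged from
# $177 to $303 across the 41 payment localities in 2004
print(round(range_percent(177, 303)))  # 71
```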
In 2004, there was no correlation between the overall supply of anesthesia practitioners—that is, the total number of both anesthesiologists and CRNAs per 100,000 people—and either the difference between Medicare and private payments for anesthesia services or the concentration of Medicare beneficiaries in the Medicare payment localities included in our analyses. However, when we examined the supply of anesthesiologists and CRNAs separately, we found correlations between practitioner supply and payment differences and practitioner supply and beneficiary concentration. Specifically, we found that in 2004, the supply of CRNAs tended to decrease as the difference between Medicare and private payments for anesthesia services increased in 41 Medicare payment localities. We also found that in 2004, the supply of anesthesiologists tended to decrease as the concentration of Medicare beneficiaries increased across 87 Medicare payment localities, while the supply of CRNAs tended to increase as the concentration of Medicare beneficiaries increased across these Medicare payment localities. We found no correlation between the overall supply of anesthesia practitioners per 100,000 people and the difference in Medicare and private payments for anesthesia services across 41 of Medicare’s payment localities in 2004. The supply of anesthesia practitioners varied across the 41 localities independent of the payment differences in these localities and the payment differences varied independently of the supply of anesthesia practitioners in the localities. When we considered anesthesiologists and CRNAs separately, we found a relationship between the supply of CRNAs and the payment differences for anesthesia services across the 41 Medicare payment localities in 2004. Specifically, there tended to be fewer CRNAs in the localities with the larger differences between Medicare and private payments for anesthesia services. 
For example, on average, there were about 11.5 CRNAs per 100,000 people in the localities where private payments exceeded Medicare payments by about 59 percent, while there were fewer CRNAs—on average, about 7.5 per 100,000 people—in the localities where private payments exceeded Medicare payments by about 73 percent. In contrast, we did not find an association between the supply of anesthesiologists and the differences between Medicare and private payments for anesthesia services across the same 41 localities. We found no correlation between the overall supply of anesthesia practitioners and the concentration of Medicare beneficiaries across 87 Medicare payment localities in 2004. The overall supply of anesthesia practitioners—the number of both anesthesiologists and CRNAs combined per 100,000 people—varied across the 87 localities independent of the number of Medicare beneficiaries in these localities. We found that the supply of anesthesiologists and the supply of CRNAs were each correlated with the concentration of Medicare beneficiaries across 87 payment localities in 2004. However, these correlations ran in opposite directions for anesthesiologists and CRNAs. We generally found fewer anesthesiologists in localities with a greater concentration of Medicare beneficiaries. For example, in 2004, in localities where on average 17 percent of the population was made up of Medicare beneficiaries, there were 13 anesthesiologists per 100,000 people. For localities where, on average, 11 percent of the population was made up of Medicare beneficiaries, the supply of anesthesiologists was relatively higher at 16 per 100,000 people. In contrast, we generally found more CRNAs in localities with higher concentrations of Medicare beneficiaries. 
For example, in 2004, on average, there were 14 CRNAs per 100,000 people in localities where the proportion of Medicare beneficiaries was 17 percent, on average, but half that supply—7 CRNAs per 100,000 people—in localities where 11 percent of the population was Medicare beneficiaries. The larger supply of CRNAs in localities with greater concentrations of Medicare beneficiaries appeared to offset the smaller anesthesiologist supply in these localities so that, in total, there was no relationship between the overall supply of anesthesia practitioners and the concentration of Medicare beneficiaries across the 87 localities in 2004. For 2005, compensation for anesthesia practitioners was reported to compare favorably to that of other physicians and nonphysician practitioners, according to information from medical group practices from across the country that responded to a survey of MGMA member organizations. The 2005 median annual compensation for general anesthesiologists—approximately $354,240—was over 10 percent higher than the median annual compensation for specialists and over twice the compensation for generalists. When compared to other hospital-based specialists, the MGMA-reported median annual compensation for general anesthesiologists was higher than that for three categories of pathologists and less than that for three categories of radiologists. For example, the MGMA-reported median annual compensation for general anesthesiologists was approximately 10 percent higher than the MGMA-reported median annual compensation for anatomic and clinical pathologists. MGMA data also showed that the median annual compensation for pain management anesthesiologists and pediatric anesthesiologists exceeded the median annual compensation for general anesthesiologists and all categories of pathologists and radiologists. 
Similarly, for 2005, the MGMA-reported median annual compensation for CRNAs—approximately $131,400—was higher than the MGMA-reported median annual compensation for other nonphysician practitioners such as nurse practitioners, nurse midwives, and physician assistants. For example, the MGMA-reported median annual compensation for CRNAs was over 40 percent higher than the MGMA-reported median annual compensation for either nurse midwives or nurse practitioners and over 35 percent higher than the MGMA-reported median annual compensation for physician assistants. The number of anesthesiology residency positions offered through the NRMP and the number of nurse anesthesia graduates have increased in recent years. From 2000 to 2006 the number of residency positions available in anesthesiology through the NRMP increased from 1,005 to 1,311, and the number of these positions that were filled increased from 802 to 1,287. By 2006, the anesthesiology residency match rate—the percentage of positions that have been filled—was 98 percent. This rate was higher than the rate for pathologists, radiologists, and all physicians in 2006. In addition, there has been a significant increase in the number of newly graduated nurse anesthetists. According to the Council on Certification of Nurse Anesthetists (CCNA), in 1999, nurse anesthesia programs produced 948 new graduates; in 2005, that number had increased to 1,790, an overall increase of 89 percent. We provided a draft of this report to CMS and to two external commenters that represent anesthesia service practitioners; the AANA and the American Society of Anesthesiologists (ASA). CMS’s written comments are reprinted in appendix II. 
CMS stated that our study provides a good summary of information collected from a variety of sources on anesthesia payments and the supply of anesthesia practitioners but was concerned that our analysis of payment differences for anesthesia services did not include four of the top five Medicare anesthesia services in terms of Medicare payments. CMS noted that private payer rates are not a criterion under the law to determine whether Medicare physician payments are reasonable and stated that the Medicare and private payment differences for anesthesia services do not necessarily indicate a deficiency in Medicare payment rates. CMS also suggested that the report should mention that the services of CRNAs in most rural hospitals and critical access hospitals are paid on a reasonable cost basis—not under the physician fee schedule—and that payments based on reasonable costs could affect Medicare and private payment differences for anesthesia services in these areas. One of the external commenters generally agreed with our findings. The other external commenter agreed with our finding regarding payment differences for anesthesia services, but like CMS questioned our choice of the anesthesia services included in our analysis of payment differences. This external commenter was also concerned regarding our finding related to supply of anesthesia practitioners and believed that we overestimated the supply of anesthesiologists based on analysis of its own association membership counts. Both external commenters stated that we should have addressed aspects of payments to anesthesia service practitioners that were not included in our analysis. Specifically, one external commenter stated we should have examined the use of stipends by hospitals to augment anesthesiologists’ compensation. 
The other external commenter stated we should have included analysis of Medicare and private anesthesia service payments to CRNAs, including analysis of anesthesia services during which CRNAs work with anesthesiologists or provide the services as the sole anesthesia practitioner. We carefully considered which anesthesia services to include in our analysis of Medicare and private payment differences for anesthesia services, but were not able to include all of the high-volume Medicare anesthesia services. In order to calculate the difference between Medicare and private payments for anesthesia services and include the maximum number of localities in our analysis, it was essential to include anesthesia services that were high volume for both Medicare and the private sector. Some anesthesia services that were high volume for Medicare beneficiaries, for example, anesthesia for lens surgery, were not as high volume for private patients and were not included for that reason. We agree with CMS that differences between Medicare and private payments for anesthesia services are not a statutory criterion for determining Medicare payments for these services and added this clarification to our report. We also clarified in the report that Medicare payments for CRNA anesthesia services provided in rural and critical access hospitals could be paid on a reasonable cost basis. However, we did not determine the extent to which Medicare and private payments to CRNAs practicing in rural and critical access hospitals differed as this was beyond the scope of our study. 
In response to the external commenter’s concern regarding the accuracy of our estimate of the supply of anesthesiologists, we believe the AMA data that we used to calculate the supply of anesthesiologists represent the most complete and accurate data source for analyzing physician supply, and that the external commenter’s estimates of supply based on association membership counts may underestimate supply because it is likely that some anesthesiologists do not belong to the association. Additionally, we checked our calculations regarding the supply of anesthesiologists and verified that we had removed inactive and nonpracticing anesthesiologists from our supply estimates. We did not include a discussion of stipends paid by hospitals to anesthesia service practitioners. Stipends are reported to be paid to a variety of specialists, including anesthesiologists, for several reasons, including compensating specialists for treating a high proportion of Medicare beneficiaries, providing 24-hour coverage of trauma units, and helping to cover costs associated with treating uninsured patients. As our study focused on Medicare and private payments for anesthesia services and overall compensation for anesthesia practitioners, it was beyond the scope of our study to examine this issue in further detail. We agree with the external commenter that it would have been preferable to include payments for CRNA anesthesia services in our analysis, but were not able to do this due to data limitations. The external commenters provided us with technical comments and clarifications, which we incorporated as appropriate. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We are sending copies of this report to the Administrator of CMS and interested congressional committees. We will also make copies available to others upon request. 
The report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made major contributions to this report are listed in appendix III. This appendix describes in detail the data and methods we used to calculate differences in Medicare and private anesthesia service payments, anesthesia practitioner supply, and Medicare beneficiary concentration. It also describes the correlation analyses we conducted to determine the relationship between anesthesia practitioner supply measures, differences in anesthesia service payments, and Medicare beneficiary concentration. Finally, this appendix addresses data reliability issues and limitations related to our studies. To examine the extent to which Medicare payments for anesthesia services were lower than private payments across Medicare payment localities in 2004, we used anesthesia service claims data from two billing companies that bill and track payments from private payers and Medicare and calculated payments by payer for services provided by anesthesiologists alone at the Medicare payment locality level. This provided us with average Medicare and private payments for a set of anesthesia services. We then calculated payment differences—that is, the percentage by which Medicare payments were lower than private payments, calculated as the difference between average private and Medicare payments as a percent of average private payments—for each of the localities included in our analysis. To calculate the difference between Medicare and private payments for anesthesia services, we used 2004 anesthesia service claims data from two companies that bill private payers and Medicare on behalf of anesthesia practitioners. 
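The payment-difference measure described above reduces to a one-line calculation. A minimal sketch, applied to the 2004 averages reported earlier ($216 Medicare, $658 private, for the seven-service set):

```python
def payment_difference(medicare_avg, private_avg):
    # Percentage by which Medicare payments fall below private
    # payments: (private - Medicare) as a percent of private
    return 100 * (private_avg - medicare_avg) / private_avg

# 2004 averages for the seven-service set across 41 localities
print(round(payment_difference(216, 658)))  # 67
```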
We obtained names of several billing companies from interviews with industry experts who were knowledgeable about industry billing practices. We chose to use anesthesia service claims data from billing companies because such data contain claims from many different insurers in an area. The two billing companies from which we obtained claims data together provided billing services on behalf of over 10 percent of all anesthesiologists in the country in 2004. Although the anesthesia service claims data from the two companies may not be generalizable to all anesthesia services provided by anesthesiologists, billing company officials stated that their claims data were generally representative of other companies that provided billing for anesthesia services and that anesthesia practitioner groups that did not use billing services were not that different from groups that did use billing services. The billing companies provided us with claims data for anesthesia services provided in 2004, including payment information for the 27 highest-expenditure anesthesia services paid for by Medicare in 2003, which accounted for approximately 70 percent of Medicare anesthesia service expenditures in 2003. The specific information the billing companies provided included data on the type of payer; the anesthesia service code; payment modifiers that specified the type of anesthesia practitioner involved; total minutes of time required to perform the service; payments, including insurer and beneficiary payments; and the Medicare payment locality in which the service was provided. Due to the proprietary nature of the data and concerns about identification of providers or beneficiaries, the billing companies could not provide payment information at a smaller geographic level. Therefore, Medicare payment localities were the smallest areas for which we could examine payments for anesthesia services. 
Only claims for which fee-for-service Medicare was the payer were included in our calculation of Medicare payments. For our calculation of private payments for these services, we included fee-for-service, preferred provider organization, and managed care claims from all commercial payers. Average payments included payments made by insurers as well as patient obligations such as deductibles and coinsurance payments. Because our study compared Medicare and private payments only, we excluded the billing companies’ claims from other payers of anesthesia services, such as Medicaid and workers’ compensation funds. We also excluded any claims for which we could not definitively identify the payer. Although both billing companies provided claims data, one company provided information at the individual claims level while the other company provided claims information summarized to the case level. For the individual claims-level data, we excluded a claim from the analysis if its payment was more than 3 standard deviations above or below the log of the average anesthesia service payment, specific to each anesthesia service, Medicare payment locality, and payer. We applied similar criteria to anesthesia service conversion factors (which we calculated as the total payment for the service divided by the sum of the base and time units associated with the service) in the individual claims-level data. Because data from the other company were summarized, we were not able to apply similar exclusion criteria. Instead, prior to providing the claims data to us, the billing company excluded claims if an individual Medicare or private anesthesia service payment was less than 10 percent of the Medicare allowable payment for the locality in which the service was provided or if the receivable was greater than $50. 
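One plausible reading of the 3-standard-deviation exclusion rule described above, dropping claims whose log payment lies far from the mean log payment, can be sketched as follows. The claim values are hypothetical, and the actual analysis applied the rule separately for each anesthesia service, payment locality, and payer:

```python
from math import log
from statistics import mean, stdev

def trim_outliers(payments):
    # Drop payments whose log value lies more than 3 standard
    # deviations from the mean of the log payments
    logs = [log(p) for p in payments]
    mu, sigma = mean(logs), stdev(logs)
    return [p for p in payments if abs(log(p) - mu) <= 3 * sigma]

# 30 typical claims plus one implausible $100,000 entry (hypothetical)
claims = [100.0] * 30 + [100_000.0]
print(len(trim_outliers(claims)))  # 30
```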
We excluded claims paid by Medicare from the data provided by either billing company if the Medicare anesthesia conversion factor did not match any of the Centers for Medicare & Medicaid Services’ (CMS) established conversion factors, based on the localities present in the data. We examined descriptive statistics for both data sets after all exclusions were applied and determined that it would be appropriate to merge the two data sets to calculate payment differences. After applying these and other exclusion criteria, we ranked the anesthesia service codes in order of prevalence across the Medicare payment localities represented in the billing companies’ claims data. Based on the rankings and prevalence across localities, we identified a set of seven anesthesia services that were most prevalent and well represented across the Medicare payment localities included in the claims data. We balanced the need for maximizing the number of localities with having a set of anesthesia services that were prevalent in all of the localities chosen. In our final data set we retained billing company claims data for all seven of these anesthesia services in 41 different Medicare payment localities. These seven anesthesia services were services provided by anesthesiologists only. We did not have a sufficient volume of claims for anesthesia services provided by certified registered nurse anesthetists (CRNAs) alone to include data from CRNA-performed services in our analysis. We also did not include data for anesthesia services provided by anesthesiologists with the involvement of other anesthesia practitioners because the billing data for these services from the two billing companies were not consistent and we therefore determined them to be not reliable. Medicare and private payments were both weighted to account for the relative national expenditures for each of the seven anesthesia services by Medicare in 2003 (see table 1). 
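The expenditure weighting just described amounts to a weighted average of per-service payments. A minimal sketch, with hypothetical locality averages and expenditure shares (the actual weights came from table 1):

```python
def weighted_average_payment(avg_payments, expenditure_shares):
    # Weight each service's average payment by that service's share
    # of national Medicare expenditures for the selected services
    total_share = sum(expenditure_shares)
    return sum(p * s for p, s in zip(avg_payments, expenditure_shares)) / total_share

# Hypothetical locality averages for three services, with the second
# service carrying the largest expenditure weight
print(weighted_average_payment([90.0, 120.0, 150.0], [0.33, 0.42, 0.25]))
```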
For example, because anesthesia services for intraperitoneal procedures in the upper abdomen including laparoscopy accounted for approximately one-third of Medicare expenditures for the seven selected codes combined, approximately one-third of the overall average payment we calculated for each locality was based on payments for this service. There were far fewer Medicare expenditures associated with anesthesia for hernia repairs in the lower abdomen, not otherwise specified, and therefore payments for these services had a much smaller weight in overall average payment calculations. Over 136,000 Medicare and private anesthesia service cases were included in our calculation of payment differences. Using the weighted average Medicare and private payments, we calculated payment differences for each of the 41 Medicare payment localities included in our analysis. We also calculated an overall average payment difference inclusive of data from all 41 localities. To examine a payment variable that was not influenced by variation in time, we examined the difference in conversion factors for Medicare and private anesthesia services, using the seven services provided by anesthesiologists in the 41 Medicare payment localities. The average difference in conversion factors was 69 percent, an amount very similar to the difference in Medicare and private payments. Therefore, we focused our analyses on the difference in Medicare and private payments. To estimate anesthesia practitioner supply at the locality level, we used data from the American Medical Association (AMA), the American Association of Nurse Anesthetists (AANA), the U.S. Census Bureau, and CMS. Only active anesthesiologists and CRNAs practicing in the 50 states and the District of Columbia were included in our analysis. We assigned anesthesia practitioners and the number of total U.S. 
general population residents to 87 Medicare payment localities. To determine supply per 100,000 people, we divided the number of anesthesia practitioners in each locality by the total resident population in the same locality and multiplied by 100,000. (See table 2.) To estimate the concentration of Medicare beneficiaries at the locality level, we used CMS and U.S. Census Bureau data. Using a geographic crosswalk file, we assigned the number of beneficiaries enrolled in Medicare and the number of total U.S. general population residents to Medicare payment localities. We then computed the percentage of Medicare beneficiaries in the general population to estimate the concentration of Medicare beneficiaries in each Medicare payment locality. (See table 3.) To measure the relationship between the supply of anesthesia practitioners, the difference in average Medicare and private payments, and the concentration of Medicare beneficiaries at the locality level, we performed correlation analyses. A correlation coefficient measures the strength and direction of linear association between two variables without controlling for the effects of other characteristics as in a multivariate analysis. We calculated correlations between three measures of anesthesia practitioner supply—anesthesiologists, CRNAs, and total (anesthesiologists and CRNAs combined)—and differences in payments in 41 Medicare payment localities. We also calculated correlations between the three supply measures and the concentration of Medicare beneficiaries in 87 Medicare payment localities. (See tables 4 and 5 below.) We used a variety of data sources in our analysis, including anesthesia service claims data from two billing companies, the AMA, the AANA, the U.S. Census Bureau, CMS, the National Resident Matching Program (NRMP), and the Medical Group Management Association (MGMA). We tested the internal consistency and reliability of all our data sources and determined they were adequate for our purposes. 
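The supply-per-100,000 measure and the bivariate correlation analyses described above can be sketched as follows. The locality counts and payment differences are hypothetical, and `np.corrcoef` computes the same unadjusted Pearson coefficient described above, without controlling for other characteristics:

```python
import numpy as np

def supply_per_100k(practitioners, population):
    """Anesthesia practitioners per 100,000 residents in a locality."""
    return practitioners / population * 100_000

# Hypothetical locality-level data: CRNA counts, resident population,
# and the Medicare-vs-private payment difference (percent).
crnas = np.array([150, 120, 95, 70, 60])
population = np.array([2_000_000, 1_800_000, 1_600_000,
                       1_500_000, 1_700_000])
payment_diff = np.array([55.0, 60.0, 65.0, 70.0, 75.0])

supply = supply_per_100k(crnas, population)
# Pearson correlation: negative if CRNA supply falls as the
# Medicare-private payment gap widens.
r = np.corrcoef(supply, payment_diff)[0, 1]
```

In this constructed example CRNA supply falls as the payment gap widens, so the coefficient is strongly negative; a multivariate analysis would be needed to control for other locality characteristics that may influence where practitioners locate.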
The files containing the billing company data, which were used by the two companies to record bills and payments, were subjected to various internal controls, including spot checks, batch totals, and balancing controls as reported by the two companies. Although we did not review these internal controls, we did assess the reliability of the billing company data. We conducted extensive interviews with representatives from both companies to gain an understanding of the completeness and accuracy of the data the companies provided. We also reviewed all information provided to us concerning the data, including data dictionaries and file layouts. Additionally, we examined the data for errors, missing values, and values outside of expected range and computed payment differences from each company’s data separately and found them to be comparable. Finally, we determined that our calculation of anesthesia service payment differences was comparable with the results of a MedPAC-sponsored study. We also assessed the reliability of median compensation information reported by MGMA. Although multiple compensation surveys are available, we chose to use MGMA as our data source because it has been used as a source in a number of peer-reviewed articles, and it contains comprehensive information on various aspects of physician compensation. Through interviews with MGMA officials, we learned of the steps taken by MGMA to ensure the reliability of the data the association published on median compensation, including comparisons with other industry studies on physician and nonphysician compensation and year-to-year analyses of respondents. We identified several potential limitations of our analyses. First, while we used payment data from 41 different Medicare payment localities, we do not know if the payment data are representative of all 89 of Medicare’s payment localities. 
Second, we did not have sufficient payment information to calculate payment differences for anesthesia services provided by anesthesiologists working with other anesthesia practitioners or anesthesia services provided solely by CRNAs. As a result, we do not know if payment differences for services provided in these ways would have been different than payment differences for anesthesia services provided by anesthesiologists alone. Third, we limited our analyses to determining whether the supply of anesthesia practitioners was linearly associated with payment differences or Medicare beneficiary concentration. However, practitioners’ decisions on where to locate could be influenced by many other factors not included in our analyses. We also identified potential limitations with MGMA’s compensation data. The data were based on a survey of MGMA member organizations which are reported to overrepresent large medical groups. In addition, the MGMA survey response rate of 16 percent raises the possibility that their compensation data may not be representative of the compensation of all physician and nonphysician practitioners. We performed our work from September 2004 through May 2007 in accordance with generally accepted government auditing standards. In addition to the contact named above, Christine Brudevold, Assistant Director; Stella Chiang; Krister Friday; Jawaria Gilani; and Ba Lin made key contributions to this report. Medicare Physician Services: Use of Services Increasing Nationwide and Relatively Few Beneficiaries Report Major Access Problems. GAO-06-704. Washington, D.C.: July 21, 2006. Federal Employees Health Benefits Program: Competition and Other Factors Linked to Wide Variation in Health Care Prices. GAO-05-856. Washington, D.C.: August 15, 2005. Medicare Physician Fees: Geographic Adjustment Indices Are Valid in Design, but Data and Methods Need Refinement. GAO-05-119. Washington, D.C.: March 11, 2005. 
Physician Workforce: Physician Supply Increased in Metropolitan and Nonmetropolitan Areas but Geographic Disparities Persisted. GAO-04-124. Washington, D.C.: October 31, 2003.
In 2005 Medicare paid over $1.4 billion for anesthesia services. These services are generally provided by anesthesia practitioners, such as anesthesiologists and certified registered nurse anesthetists (CRNAs). A government-sponsored study found that Medicare payments for anesthesia services are lower than private payments. Congress is concerned that this difference may create regional discrepancies in the supply of anesthesia practitioners, and asked GAO to explore this issue. GAO examined (1) the extent to which Medicare payments for anesthesia services were lower than private payments across Medicare payment localities in 2004, (2) whether the supply of anesthesia practitioners across Medicare payment localities in 2004 was related to the differences between Medicare and private payments for anesthesia services or the concentration of Medicare beneficiaries, and (3) compensation levels for anesthesia practitioners in 2005 and trends in graduate training. GAO used claims data from two anesthesia service billing companies that bill private insurance payers and Medicare to calculate payments by payer for seven anesthesia services in 41 Medicare payment localities. GAO also used data from the Centers for Medicare & Medicaid Services (CMS) and other sources to determine practitioner supply and Medicare beneficiary concentration in 87 Medicare payment localities. GAO found that in 2004 average Medicare payments for a set of seven anesthesia services provided by anesthesiologists alone were 67 percent lower than average private insurance payments in 41 Medicare payment localities--geographic areas established by CMS to account for geographic variations in the relative costs of providing physician services. 
In 2004, there was no correlation between the overall supply of anesthesia practitioners--that is, the total number of both anesthesiologists and CRNAs per 100,000 people--and either the difference between Medicare and private insurance payments for anesthesia services or the concentration of Medicare beneficiaries in the Medicare payment localities included in GAO's analyses. However, when GAO examined the supply of anesthesiologists and CRNAs separately, GAO found correlations between practitioner supply and payment differences and practitioner supply and beneficiary concentration. Specifically, GAO found that in 2004, the supply of CRNAs tended to decrease as the difference between Medicare and private insurance payments for anesthesia services increased in 41 Medicare payment localities. GAO also found that in 2004 the supply of anesthesiologists tended to decrease as the concentration of Medicare beneficiaries increased across 87 Medicare payment localities, while the supply of CRNAs tended to increase as the concentration of Medicare beneficiaries increased across these Medicare payment localities. For 2005, compensation for anesthesia practitioners was reported to compare favorably with other practitioners, according to information from medical group practices from across the country that responded to a survey of Medical Group Management Association (MGMA) member organizations. The 2005 median annual compensation for general anesthesiologists--approximately $354,240--was over 10 percent higher than the median annual compensation for specialists and over twice the compensation for generalists. For 2005, MGMA-reported median annual compensation for CRNAs--approximately $131,400--was over 40 percent higher than the MGMA-reported median annual compensation for either nurse midwives or nurse practitioners and over 35 percent higher than the MGMA-reported median annual compensation for physician assistants. 
The number of anesthesiology residency positions offered through the National Resident Matching Program and the number of nurse anesthesia graduates have increased in recent years. CMS stated that the study provided a good summary of information collected from a variety of sources on anesthesia payments and the supply of anesthesia practitioners.
GSA serves as federal agencies' landlord and designs, builds, manages, and maintains federal facilities. According to fiscal year 2013 data, over 8,900 buildings in the United States are held or leased by GSA, and these buildings provide workspace for over 1 million federal employees and an average of 1.4 million daily visitors. FPS, a subcomponent of the National Protection and Programs Directorate within DHS, is the primary agency responsible for providing law enforcement and related security services at GSA buildings. USMS, a component of DOJ, has received delegations of authority for building security from GSA and has primary responsibility for providing security for federal judicial facilities and personnel. Security screening consists of the electronic, visual, or manual inspection or search of persons, vehicles, packages, and containers to detect the possession or attempted introduction of prohibited items, including illegal and other dangerous items, into a federal facility or secure area within a federal facility. An individual in possession of, or attempting to introduce, a prohibited item, including an illegal or dangerous item, into a federal building is considered an individual who may pose a security threat. For the purposes of this report, we focused our efforts on the security screening of persons at access control points. This process varies at each federal building based on a variety of factors, but a visitor to an FSL IV federal building, for example, may undergo a full security screening, which may include a protective security officer or court security officer checking his or her government-issued identification, having his or her belongings go through an x-ray machine, and physically walking through a walk-through magnetometer. Federal employees may undergo different levels of security screening at a federal building depending on a variety of security-related factors unique to that building. 
Screening of federal employees may range from a protective security officer or court security officer verifying that the employee has a valid government-issued identification card or an agency-issued credential, to full screening that would require the employee to go through a similar process as a visitor to a FSL IV federal building, as described above. FPS's protective security officers—contract security guards—are the most visible component of FPS's operations, as well as the first contact with federal agencies for individuals entering a federal building. FPS relies heavily on its protective security officers and considers them to be the entity's "eyes and ears" while performing their duties. FPS protective security officers are responsible for controlling access to federal buildings, conducting security screening at access control points, enforcing property rules and regulations, detecting and reporting criminal acts, and responding to emergency situations involving building safety and security. FPS protective security officers (1) control access to federal buildings by checking the identification of government employees who work there as well as members of the public who visit, and also (2) operate security-screening equipment, such as x-ray machines and walk-through magnetometers, to ensure prohibited items—including illegal items, such as firearms, explosives, knives, and drugs—do not enter federal buildings. In general, FPS protective security officers do not have arrest authority, but can detain individuals who are being disruptive or pose a danger to public safety. According to FPS, it has around 13,000 protective security officers at approximately 2,700 of the 8,900 FPS-protected federal buildings across its 11 regions. Of those, FPS conducts security screening of visitors and employees at approximately 2,400 buildings. FPS's budget for fiscal year 2014 was over $1.3 billion. 
USMS has primary responsibility for protecting the federal judicial process by ensuring safe and secure conduct of proceedings and protecting federal judges, jurors, and members of the visiting public in GSA buildings housing the judiciary. USMS's responsibilities include managing court security officers and security systems and equipment, including x-ray machines, surveillance cameras, duress alarms, and judicial chambers' entry control devices. USMS court security officers, also contract security guards, are responsible for screening for and intercepting weapons and other prohibited items from individuals attempting to bring them into federal courthouses. USMS court security officers also assist in providing security at facilities that house federal court operations. According to USMS, as of May 2014, USMS court security officers conducted entrance security screening at 410 federal buildings, 121 of which (approximately 30 percent) are multi-tenant federal buildings across the 94 federal court districts. USMS oversees the daily operation and management of security services performed by more than 5,000 court security officers. USMS's fiscal year 2014 enacted budget totaled more than $2.7 billion across multiple appropriations, with nearly $460 million designated for judicial and courthouse security. The Judicial Conference of the United States is the principal policy-making body for administering the federal court system, and its Committee on Judicial Security recommends security policies for federal judges and courts. The Administrative Office of the United States Courts (AOUSC) coordinates with the federal courts, USMS, FPS, and GSA to implement the judiciary's security program. 
Since FPS is responsible for enforcing federal laws and regulations, and providing building entry and perimeter security at GSA buildings, among other responsibilities, FPS and USMS seek to closely coordinate security activities for federal buildings that contain courtrooms and judicial officers. The responsibilities for FPS and USMS are defined as part of a 1997 memorandum of agreement. More specifically, in multi-tenant federal buildings that are primarily courthouses (i.e., judicial or judicial-related space comprises more than 75 percent of the building), USMS provides court security officers for security screening at access control points at the building entrances, access control, and security for all judicial areas, while FPS may assist in providing perimeter-roving patrol and after-hours coverage. In multi-tenant federal buildings that house federal courts, where judicial or judicial-related space comprises less than 75 percent of the building, FPS would generally provide protective security officers for security screening at access control points at the building entrances, as well as perimeter-roving patrol. USMS court security officers would conduct security screening at access control points for the judicial space within the building. Currently, there are seven courthouses participating in a pilot program where USMS has also assumed control of perimeter security. The roles and responsibilities of USMS and FPS under this pilot program are outlined in a 2008 memorandum of understanding. The ISC develops governmentwide physical security standards and best practices for federal security professionals responsible for protecting nonmilitary federal buildings in the United States. The ISC was established in 1995 by Executive Order 12977 following the 1995 bombing of the Alfred P. Murrah Federal Building in Oklahoma City, Oklahoma. 
The ISC is an interagency organization chaired by DHS and composed of representatives from more than 50 federal agencies and departments. FPS is a member agency of the ISC, along with other federal entities such as GSA, USMS, SSA, and the federal judiciary. Executive Order 12977 directs each executive agency and department to cooperate and comply with the ISC policies and recommendations issued pursuant to the order. The ISC's mission is to enhance the quality and effectiveness of the security and protection of nonmilitary federal buildings in the United States and to provide a permanent body to address continuing governmentwide security issues for these facilities. For example, in February 2013, the ISC developed a baseline list of items that are prohibited in federal buildings in order to provide some consistency. Federal management regulations identify items generally prohibited from being introduced into a federal building—such as explosives, firearms, or other dangerous weapons—except for law enforcement purposes (and other limited circumstances). The ISC standard also establishes a process for preventing prohibited items from entering into federal buildings and identifies responsibilities for denying entry to those individuals who attempt to enter with such items. Federal buildings vary in their assigned FSL and implemented security countermeasures. FPS is to coordinate with the building tenants, law enforcement and intelligence partners, and other stakeholders to gather information and identify the risks unique to each building being assessed. The initial evaluation of risks is used by FPS in calculating the FSL proposal. The facility security committee or court security committee then uses this proposal to establish the final FSL determination. 
FPS and USMS are to work in partnership with tenant facility security committees and court security committees to build a consensus regarding the type of countermeasures appropriate for each individual facility. Facility security committees and court security committees, which are composed of representatives of tenant entities at federal buildings and other stakeholders, have broad latitude in determining the security measures appropriate for their facility. The decision regarding the optimal combination of physical countermeasures (such as security barriers, x-ray machines, closed circuit television, and the number and type of security-screening access control points staffed by FPS protective security officers and USMS court security officers) is based on a variety of factors. These factors include a facility security assessment report conducted by FPS, the FSL, and the security needs of individual tenants. It is important to note that facility security committees and court security committees, rather than FPS and USMS, render the final decision regarding the number and type of security-screening access control points and technical countermeasures that are to be installed in each individual building. Facility security committees and court security committees have broad latitude in determining which items, if any, can be prohibited in their respective facilities, in addition to those specifically prohibited by law, as discussed later in the report. FPS and USMS experience a range of challenges in their efforts to provide effective security screening, and such challenges can create a complex environment for screening operations. These challenges include: (1) building characteristics and location that may limit security options; (2) balancing security and public access; (3) operating with limited resources; (4) working with multiple federal tenants; and (5) effectively informing the public of prohibited items. 
Many GSA buildings were designed and constructed prior to several high-profile incidents in which federal buildings were targets of acts of violence, and consequently before security screening became more of a priority. GSA reported in 2011 that the lack of reinvestment funding is a challenge it faces, as the average age of its buildings was 47 years, which has accelerated the deterioration of an already aged portfolio. As a result, conducting security screening may be challenging for FPS and USMS because they have to work within the parameters of the building's original layout, physical location, and composition of tenants. For example, at the majority of the buildings that we visited, the public is required to undergo full screening upon entering the building, while employees typically undergo limited screening once their government identification cards are checked. However, according to USMS officials at a building that we visited, the layout of the building makes it difficult for court security officers to conduct any type of security screening on employees entering the building from the underground parking garage. The elevators to enter the building from the underground employee parking garage are physically located behind the building's screening access control points. As such, the employees who enter the building from the parking garage receive little to no screening beyond checking their identification cards upon entry into the garage. Further, according to USMS officials, if USMS determines the building needs to increase its security, it would be difficult to screen employees entering the building from the parking garage because the court security officers would not be able to use most of the screening equipment at the access control point due to the location of the elevators relative to the access control point. 
Further, if a GSA building is considered to be historically significant—that is, it is listed on the National Register of Historic Places or is eligible for listing—renovations by federal agencies must follow the requirements of the National Historic Preservation Act of 1966, as amended. Under the act, federal agencies are to use historic properties to the maximum extent feasible and retain and preserve the historic character of the property when making infrastructure changes or rehabilitating a property. As we have reported in the past, buildings listed on the National Register of Historic Places or aging buildings may not be able to support, or may make it more difficult to implement, security changes when complying with the National Historic Preservation Act's requirements. Also, when trying to make security screening enhancements that will alter the design or layout of a public space in a GSA building, such as to a security screening access control point for the public, FPS and USMS officials reported that it is challenging to coordinate such efforts with GSA due to factors including GSA's limited budget and initiatives such as GSA's First Impressions Program. The program emphasizes making better "first impressions" for the visiting public and also for the building's tenants in the public spaces of existing federal buildings. Therefore, GSA's First Impressions Program's goals can sometimes conflict with what the tenant entities or FPS or USMS believe to be needed screening-security enhancements. According to USMS officials, working with GSA can be challenging when trying to install new security enhancements or making alterations to the space because the changes may not meet the aesthetic framework GSA desired for the public space. 
For example, at a building that we visited, severe glare caused by the sun's reflection through the lobby windows occurs during a significant portion of the day where the security screening access control point for the public is located. According to USMS officials at the building, the glare obscures the court security officer's ability to see incoming visitors. The glare also affects the court security officer's view of the x-ray machine's computer monitor, potentially impeding the court security officer's ability to appropriately screen items sent through the x-ray machine (see fig. 1 below). However, according to USMS officials, GSA will not allow USMS to apply tinting to the windows to reduce the impact of the glare because it would alter the aesthetics of the public space. At the time of our review, USMS and GSA had not resolved this issue. At a different entrance at the same building, solar glare made it difficult for court security officers to see individuals entering the building. USMS, however, was able to work with GSA to come up with a mutually agreeable solution. GSA suggested and created a "living wall" by planting foliage to help cover an exterior plain white wall, which had reflected sunlight into the building (see fig. 2 below). Striking an appropriate balance between providing security at federal buildings and facilitating the public's access to government offices for services and other business transactions continues to be a major challenge, as we have reported in the past. FPS and USMS officials at 6 of the 11 buildings we visited noted this challenge. We previously reported that GSA's goal is to create an environment that reflects an open, welcoming atmosphere, as well as to protect against those with the intent to do harm. GSA also considers federal workers' convenience and privacy an important part of these considerations. 
For example, federal employees may undergo different levels of security screening depending on a variety of building-specific factors, which may range from a federal identification check to a full screening, as the general public would experience. Federal agencies face particular challenges in GSA buildings with high public demand requiring regular public access. Such buildings include courthouses and federal office buildings that house agencies such as the SSA, the United States Citizenship and Immigration Services (USCIS), and the Internal Revenue Service. According to the ISC, the potential threat to federal tenant entities within a multi-tenant building is based on several factors, which include, but are not limited to, whether: the tenant entity's mission and interaction with certain segments of the public is adversarial in nature (e.g., criminal and bankruptcy courts, high-risk law enforcement); the tenant entity's mission draws the attention of organized protest groups (e.g., Environmental Protection Agency, courthouses, Department of Energy); and the building is located in a high-crime area, as determined by local law enforcement. For example, according to SSA headquarters officials, the majority of challenges the agency experiences result from the tension between providing effective security and accomplishing its mission. The SSA's mission is to deliver social security services that meet the needs of the public. To accomplish its mission, the SSA has 1,256 field offices where the agency provides in-person services to the public. Security has become such a pressing concern that there are armed FPS protective security officers at all SSA offices that involve customer interactions. Furthermore, many field offices are specifically located in areas easily accessible to the public, which requires some offices to be located in high-crime areas, increasing their security risks. 
In some instances, the SSA has delegated security authority from FPS, and would then be responsible for providing the armed security officers. Some buildings we visited receive many visitors each day, primarily for services provided by USCIS or the Internal Revenue Service. USCIS created separate entrances with screening access control points at two buildings we visited, to provide additional security while also managing the number of visitors to the USCIS offices. At both buildings, USCIS paid for the dedicated screening access control points for its customers, including the screening equipment and the additional protective security officers. According to FPS officials at both buildings, USCIS funded the screening-security enhancements to better serve its customers and to streamline the process that would have been required had USCIS gone through the facility security committee's approval and budget process. According to FPS headquarters officials, some protective security officers are not fully trained to address all security-screening scenarios presented to them at screening-access control points. In 2010, 2013, and 2014, we concluded that FPS continued to experience difficulty ensuring that its protective security officers have their required screening training and certifications, in part due to FPS's limited resources. As a result, protective security officers deployed to federal buildings may have been operating x-ray and walk-through magnetometer equipment that they have not been trained to use, thus raising questions about their ability to fulfill a primary responsibility at screening access control points. We have made recommendations for FPS to improve upon its security-screening procedures, and FPS has taken some steps to do so. Specifically, FPS has begun to implement its 16-hour National Weapons Detection Training program. 
This program, referred to as “screener training,” was included in all new solicitations for contract protective security officer vendor companies issued in fiscal year 2014, according to FPS headquarters officials. This program doubles the screener training that protective security officers had received under prior contracts and includes performance-oriented training and testing. In February 2014, the first contract that included the 16-hour “screener training” was awarded, according to FPS headquarters officials. FPS-certified inspectors are to provide the “screener training” to those protective security officers covered under the new contract. According to FPS officials at the headquarters and regional level, the initial feedback from the protective security officers who have undergone the new training has been very positive. The implementation of this program, however, will take time. Typically, protective security officer vendor contracts are for 5-year periods, and according to an FPS headquarters official, contracts can be modified at any time to add within-scope changes such as the new “screener training” provision. In addition, to offset resource demands on FPS for providing the additional “screener training” and to increase accountability for all new solicitations, FPS selected four contracts in three regions for a “Train the Trainer” pilot. FPS is to train and certify instructors from the contract protective security officer companies at the National Weapons Detection Training program, and the certified contract instructors are then to deliver the 16-hour training to their companies’ respective contract protective security officers. FPS and the respective protective security officer companies modified four contracts in March 2014. The “Train the Trainer” pilot officially began in April 2014. Limited resources may contribute to the added challenge of not having enough protective security officers. 
According to FPS headquarters officials, limited staffing is due to limited funding at the facility level (building-specific) or tenant agency level (tenant-specific). For example, despite ISC standards that specify that each protective security officer should only be responsible for one screening task at a security access control point, during our building visits we found several instances in which a protective security officer was conducting multiple screening tasks due to limited staff. For example, at one building we visited, there are three security screening access control points, each with two protective security officers who are responsible for (1) checking employee identification, (2) manning the x-ray machine, and (3) manning the walk-through magnetometer. According to an FPS regional official, despite the fact that this building is the largest building on the West Coast by square footage, there is a limited FPS presence on-site, relative to the size of the building. In some instances, a roving protective security officer may backfill at a security screening access control point if it gets busy, but screening may not be among his or her responsibilities or an area in which he or she is specifically trained. At another building we visited, USMS officials made the decision to close one of the building’s three security screening access control points in November 2013 due to budget shortfalls. According to USMS headquarters officials, obtaining adequate resources and funding is always an issue, but USMS is continuing to take steps to develop its security program. In 2013, USMS doubled the annual training requirement for court security officers, focused primarily on security screening. USMS is also examining the current level of court security officer training as compared with screening test passage rates, which we discuss below, to see if there are any trends and any potential actions that can be taken to improve its security training. 
Also, USMS headquarters officials told us that it would be helpful to have additional funding for more court security officers. As such, USMS is currently conducting an analysis to determine the optimal number of court security officers that should be stationed at a security screening access control point, in order to determine the extent to which more resources may be needed in the field. Multi-tenant GSA buildings pose additional challenges in the security screening process because there are many federal stakeholders involved in the facility security committees and court security committees (if the judiciary is involved). As noted above, these stakeholders are responsible for building security screening decisions, among other security responsibilities. However, we found, as we did in August 2010, that tenant entity representatives on the facility security committee may not have security knowledge or experience but nonetheless are expected to make security decisions for their respective agencies. During our site visits, multiple FPS regional and USMS district level officials identified this lack of security knowledge as a challenge in trying to work with federal tenants to implement recommended security-screening enhancements. When FPS recommends countermeasures for a building in its facility security assessment, the facility security committee’s chairperson is made aware of the recommendations. For example, a recommended countermeasure may be to add an additional protective security officer so that each protective security officer is only responsible for one screening task at each screening access control point, as outlined in ISC standards. FPS and USMS officials told us that federal tenant entities, which may have different needs, may not always agree on what level of security and which security countermeasures are needed at their building, or agree with the costs that may be associated with those enhancements. 
Security countermeasures must compete with other program objectives for limited funding. Also, we previously found that the facility security committee’s tenant-entity representatives often do not have the authority to commit their respective organizations to fund security countermeasures. As a result, competing requirements, standards, and priorities for a building cannot always be reconciled, and the chairperson, on behalf of the facility or court security committee, may agree to accept the risk of not implementing a specific countermeasure. According to ISC policy, when a recommended countermeasure is not implemented, the following must be clearly documented, as appropriate: why the necessary level of protection cannot be achieved; the rationale for accepting the risk; what alternate strategies are being considered or implemented; and what opportunities exist in the future to implement the necessary level of protection. For example, some possible rationales for risk acceptance are: physical site or structural limitations, historical or architectural integrity, impact on an adjacent structure, and funding priorities. Executive branch agencies, with the exception of certain intelligence-related exemptions, are required to comply with the ISC’s policies and recommendations. The ISC is required to develop a strategy for ensuring compliance with its standards; however, we previously found that the ISC did not formally monitor agencies’ compliance with ISC standards, in part because it lacks the staff and resources to conduct monitoring. Currently, in place of a formal monitoring program, ISC officials hold quarterly meetings and participate in ISC’s working groups along with their member agencies. ISC officials said that the information sharing that occurs through these channels helps them achieve a basic understanding of whether and how member agencies use the standards. 
This approach, however, does not provide a systematic assessment of ISC member agencies’ use of the standards, and provides no information on nonmember agencies’ physical security practices. The ISC stated in its 2012 to 2017 action plan that it plans to establish protocols and processes for monitoring and testing compliance with its standards by fiscal year 2014. We previously recommended that DHS direct the ISC to conduct outreach to executive branch agencies to clarify how its standards are to be used, and to develop and disseminate guidance on management practices for resource allocation as a supplement to the ISC’s existing physical-security standards. (See GAO, Federal Facility Security: Additional Actions Needed to Help Agencies Comply with Risk Assessment Methodology Standards, GAO-14-86 (Washington, D.C.: Mar. 5, 2014).) According to ISC officials, as of September 2014, the ISC has created a compliance working group and is in the beginning stages of developing a standard for ensuring compliance with its established policies. FPS and USMS face challenges in effectively informing the visiting public about what items are prohibited from being brought into GSA buildings, as lists of prohibited items vary among buildings and among tenants in multi-tenant buildings. Based on various factors, such as the composition of federal tenants and, in the case of courthouses, decisions by judicial districts, each GSA building may have a unique list of prohibited items that, according to FPS and USMS officials, can cause some confusion among the visiting public. Facility security committees and court security committees have broad latitude in determining which items, in addition to those specifically prohibited by law (i.e., illegal items), can be prohibited from their facilities. These additional items may not necessarily be “illegal.” In addition, some items may be admissible for some individuals, while not for others. 
For example, a courthouse may restrict the general public from possessing a cell phone or laptop in a court space, but may permit such a device to be carried by a court employee or an attorney representing a client. Further, the visiting public may not know that an item is prohibited from the building until they are already there. In these instances, the protective security officer or court security officer might tell individuals to take the item back to their vehicle, or surrender the item. In some instances, court security officers may also be responsible for helping to store a prohibited item (such as a cell phone or laptop in a court space) until the individual returns to get it. According to USMS officials we met with, this adds to the responsibilities of the court security officers. Though we did not specifically evaluate signage as part of this review, during our building visits we observed a wide range in the types of signage posted informing the visiting public about what items were prohibited from the building. All signage in a GSA facility is under the direct control of the GSA building manager. GSA requires agencies to post signage at each of its buildings, such as signs that list prohibited items. The facility security committee and court security committee work with the GSA building manager to ensure that signage is in place to inform their visitors and employees of the items that are prohibited within that building. However, we found that some signs were small, posted in an obscure location, or very difficult to see or read (see fig. 3 below). We also saw some signs on the public entrance doors at one building we visited with regulatory language on prohibited items from July 1999, even though current regulations were last revised in November 2005 (see fig. 4 below). 
Conversely, we found that some buildings had large, informative signs for visitors, and the signs were posted in key locations to help facilitate the security-screening process (see examples of signage in figs. 5 and 6 below). Both FPS and USMS have taken steps to assess their security screening efforts, such as conducting covert and intrusion tests and collecting data on prohibited items. Our work showed that, according to FPS data from fiscal years 2010 through 2013, FPS has experienced low covert-testing passage rates and has also limited the number of screening scenarios that can be used for testing. However, in fiscal years 2012 and 2013, for example, USMS data showed that court security officers passed 92 percent of intrusion tests on security screening. Although USMS tests more frequently than FPS, it has been unable to meet its intrusion-test frequency requirement per building each year. Also, FPS and USMS data on prohibited items show a wide variation in the number of items identified across buildings for both entities. Overall, FPS and USMS may use the results of covert and intrusion tests, to some degree, to address problems at the individual building, FPS region, or USMS district level, but they do not readily use the results to strategically assess performance nationwide. The benefits of using performance data in this strategic manner are reflected in ISC guidance, as well as in key practices in security and internal control standards GAO has developed. Without a more strategic approach to assessing performance, both FPS and USMS are not well positioned to improve security screening, identify trends and lessons learned, and address the aforementioned challenges related to screening in a complex security environment. FPS and USMS have established testing programs to help officials assess security screening efforts at buildings they protect. 
For example, in 2010, FPS developed a policy requiring regional offices to conduct covert testing of security countermeasures with the goals of (1) assessing the effectiveness of countermeasures; (2) identifying policy and training deficiencies; (3) ensuring immediate corrective action; and (4) documenting, analyzing, and archiving results. As part of FPS’s covert testing program, a “report of investigation” is to be developed after the conclusion of each test. In these reports, the responsible FPS official details the actions taken to prepare for each covert test, to execute it, and to assess it. USMS has also developed tools for measuring the effectiveness of its security screening practices at the building level. For example, USMS implemented a policy directive over 10 years ago for conducting a specified number of intrusion tests on security-screening procedures at court facilities each year. These tests primarily consist of attempts to (1) circumvent the public-screening access control points of either the building or the judicial areas and (2) access the court building with a prohibited item such as a weapon. Following each intrusion test, USMS is to complete a facility-security test form that includes detailed information about the test conducted. In addition to testing security-screening procedures, FPS and USMS also require protective security officers and court security officers to document prohibited items identified during the screening process. FPS policy requires protective security officers to document each prohibited item discovered by using a designated reporting form that includes information such as the item type and description. The data for each prohibited-item report are to be entered into FPS’s web-based Enterprise Information System. 
In USMS’s statement of work for court security officers, these officers are responsible for providing statistical information on the number of prohibited items, including weapons, detected during the screening process, and USMS districts are responsible for reporting these items to the USMS Office of Court Security on a monthly basis. The data are compiled at the end of the fiscal year by the USMS Office of Court Security and forwarded to the AOUSC. According to USMS officials, this information is used to support, among other things, the AOUSC’s annual budget request related to courthouse security. FPS has consistently experienced low passage rates for covert tests since implementing its covert-testing program in fiscal year 2010. The covert-testing data we reviewed were from fiscal years 2010 through 2013 and related to buildings with a specific FSL. In addition, we found that in October 2012, FPS reduced the number of screening scenarios that can be used for covert testing. However, in December 2014, FPS reinstated some testing scenarios. For this publicly available report, we are not including the specifics about the covert tests themselves or the related passage rates due to the sensitivity of the information. Since fiscal year 2010, USMS has recorded high intrusion-test passage rates, and USMS reported that it has experienced improvements in the effectiveness of its security-screening efforts for the years we reviewed. For example, USMS reported that the intrusion-test passage rate for security-screening tests improved from 83 percent in fiscal year 2010, to 91 percent in fiscal year 2011, and to 92 percent in fiscal years 2012 and 2013. Furthermore, USMS reported that it has improved its intrusion-test passage rate while consistently increasing the number of tests it conducted. 
For instance, in fiscal year 2010, USMS conducted 335 intrusion tests on security-screening procedures, and by fiscal year 2013, the agency had nearly doubled that number by completing 628 intrusion tests on security-screening procedures. See figure 7 for an overview of the number of USMS intrusion tests conducted from fiscal years 2010 through 2013 on security-screening procedures. USMS conducted significantly more intrusion tests than FPS conducted covert tests; however, we did not determine the reasons for this difference or what would constitute an adequate number of tests. Nevertheless, while USMS has increased the number of intrusion tests it conducts, we found that some USMS districts were conducting tests less frequently than required. Current USMS policy requires its 94 districts to conduct an intrusion test at each court facility a specified number of times a year. However, for the four USMS districts we visited, we found that none of the districts complied with this requirement. For example, one district we visited completed only 1 of the many intrusion tests it was required to conduct from fiscal years 2010 through 2013. Additionally, USMS conducts security screening at 11 buildings in another district we visited, and that district did not complete any intrusion tests during fiscal years 2012 and 2013. Furthermore, from fiscal years 2010 through 2013, USMS conducted only 14 percent of the intrusion tests that it was required to conduct at these 11 buildings. Overall, the 94 USMS districts conducted 45 percent of the total intrusion tests that USMS policy required these districts to conduct at their 410 buildings. Further, at the four USMS districts we visited, compliance rates from fiscal years 2010 through 2013 ranged widely, from 2 percent to 63 percent. According to USMS headquarters officials, USMS lacks the appropriate resources to complete the required number of intrusion tests in each district. 
For example, USMS headquarters officials told us that each district manages its own resources and faces unique challenges that affect testing rates, such as the size of the district, geographical distances, workload, and manpower. As such, USMS is in the process of reviewing its current policy and expects to reduce the number of intrusion tests required. As discussed earlier, aside from their efforts to conduct covert and intrusion screening tests, FPS and USMS both collect data on prohibited items that are detected through the screening process. For example, FPS reported that in 2013, protective security officers detected approximately 700,000 prohibited items. FPS policy directs FPS’s Risk Management Branch to ensure that prohibited-items reports are collected correctly and that information is properly entered into the Enterprise Information System on a weekly basis. However, our visits to selected FPS buildings and analysis of their reporting process indicated that these FPS data can vary widely from building to building. For example, one building we visited reported over 230,000 prohibited items from fiscal years 2004 through 2013, an average of approximately 23,000 items per year. By contrast, a different building we visited reported just over 2,000 prohibited items during this same time period, an average of about 200 items per year, even though it is a much larger building with many more visitors (approximately 4,100 daily visitors) than the first building mentioned above (approximately 670 daily visitors). Furthermore, for the larger building mentioned above, we identified 5 years (fiscal years 2009 through 2013) during which no prohibited items were reported by FPS. However, during our visit to the building, FPS officials stated that prohibited items had been identified during that time period and provided physical evidence of prohibited items recently collected at the building (see fig. 8 below). 
According to FPS headquarters officials, in 2009, the prohibited items policy at the building—set by the facility security committee, not FPS—was for protective security officers to turn away anyone attempting to enter the building with a prohibited item. As a result, the protective security officers did not report identified prohibited items, believing that the policy was to report only items that had been confiscated. FPS headquarters officials stated that they had not been aware that there was a misinterpretation of the policy and that this resulted in a 5-year lapse in FPS oversight. We also reviewed data on prohibited items for FPS buildings that we did not visit and found that there were 295 buildings with no reported prohibited items during the 10-year period from fiscal years 2004 through 2013. These data alone would not allow us to definitively determine that prohibited items were detected and not reported at these buildings. However, the wide variation in the number of items detected warrants further analysis by FPS, which is discussed later in this report. Similar to FPS, in assessing USMS’s data on prohibited items, we found wide variations in the number of prohibited items identified during the security screening process. In fiscal year 2013, court security officers detected over 1.3 million prohibited items in federal courthouses, according to USMS data. However, one USMS district we visited—the District of Columbia—did not report detecting any prohibited items for 3 consecutive years (fiscal years 2005 through 2007) during the 10-year period we reviewed. In total, 24 of the 94 USMS districts (26 percent) did not report any prohibited items for at least 1 year, and 11 districts (12 percent) did not report prohibited items for multiple years during the 10-year period. 
According to USMS headquarters officials, in cases when no prohibited items are reported by a district, USMS headquarters officials accept that it is possible no prohibited items were identified or confiscated in that district, and no follow-up is conducted. As with FPS, however, the wide variation across buildings would warrant further analysis, which is discussed below. The benefits of using performance data strategically are reflected in ISC guidance, as well as in key practices in security and internal control standards GAO has developed. The ISC identified the use of performance measurement and testing as a key management tool and reported that performance measurement data are essential to appropriate decision making on the allocation of resources. In addition, our prior work on key practices in facility protection noted that monitoring and testing, as well as other methods of measuring performance, can help gauge the adequacy of facility protection, improve security, and ensure accountability for achieving goals. We have also found that internal control activities help ensure that management’s directives are carried out and goals are met. Internal control activities are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. These controls call for comparisons and assessments relating different sets of data to one another so that analyses of the relationships can be made and appropriate actions taken. FPS officials said that they use covert testing to help determine weaknesses in personnel capabilities and performance at the building level. These weaknesses may then be addressed through training or corrective actions. FPS officials also use covert testing to determine gaps in screening, security countermeasures, and access control processes at the building level. 
However, FPS headquarters officials also said that they had difficulty determining how to use the test results for improving their security-screening efforts overall. For example, they said there are multiple reasons why a protective security officer or screening access control point can fail a covert test, such as: poor protective security officer performance (e.g., a protective security officer may have ignored training or access control point instructions); insufficient training; and security-screening systems or conditions that may not be conducive to success (e.g., inadequate lighting or an unsuitable position of screening equipment). FPS’s difficulty in using the covert test results may stem from its lack of a strategy or systematic approach to linking performance data with corrective actions on a nationwide basis by determining trends and helping inform which types of scenarios to use for the covert tests. Even though FPS collects covert-testing data, it does not systematically analyze the data at the headquarters or regional level. A systematic analysis of data could help FPS adhere to the internal control standard related to data analyses and comparisons, and be better positioned to target the primary causes of covert test failures. While USMS has experienced higher intrusion-test passage rates, it similarly lacks a strategic approach to using and analyzing screening data that could aid in further improving its passage rates. USMS headquarters officials said that they do not systematically analyze intrusion-testing data. Instead, they collect testing data to measure the quality of services that contractors provide at the district level, and they feel that their current reporting efforts accomplish that goal. USMS headquarters also does not conduct any follow-up with its districts to ensure compliance with the intrusion-testing program, and testing data are not used to comprehensively assess the program. 
Nonetheless, these data could be useful to USMS in determining whether its intrusion-test passage rates are acceptable and whether goals should be set for higher passage rates. Greater use of the data could also help USMS determine the number and frequency of tests that would be adequate and attainable within its available resources. A more strategic approach to assessing screening efforts could also include analyses of data on prohibited items. FPS and USMS do not conduct systematic analyses of the data on prohibited items they collect. At the time of our review, FPS officials told us that they did not conduct any follow-up with the regions that did not report on the prohibited items they identified. USMS headquarters officials said that prohibited items are defined by individual districts, which makes data difficult to compare across districts. However, a more strategic approach to analyzing data on prohibited items would allow FPS and USMS to determine (1) the reasons for wide variations in these data, (2) whether data are incomplete, and (3) whether there are lessons learned that could be applied nationwide. It may also be useful in determining how best to communicate prohibited-items policy to the public through signage. Federal buildings held and leased by GSA have been targets of acts of violence in recent years, and providing security screening at these buildings can be challenging for a variety of reasons, including balancing security and public access and operating with limited resources. Due to the sensitivity of certain FPS and USMS information regarding covert and intrusion testing, that information was omitted for the purposes of this publicly available report. However, the results of our analysis of all the information we reviewed provided the groundwork for our recommendations to both DHS and DOJ—actions we believe will improve FPS’s and USMS’s security-screening efforts. 
In recent years, FPS and USMS have taken steps to improve their security-screening efforts, such as implementing various policies, conducting covert and intrusion tests of security-screening procedures, and collecting data on prohibited items identified at screening access control points. However, FPS has experienced low covert-testing passage rates and has limited the number of security-screening testing scenarios it uses during covert tests. USMS has recorded higher intrusion-test passage rates, and although it tests security screening more frequently than FPS, it has been unable to meet its intrusion-test frequency requirement. Also, FPS and USMS data on prohibited items show wide variation in the number of items identified across buildings. Compounding these issues, both entities lack an approach or strategy to systematically assess screening performance. The benefits of using performance data in this manner are reflected in ISC guidance, as well as in key practices in security and internal control standards that GAO has developed. Without a more strategic approach to assessing performance, FPS and USMS are not well positioned to improve security screening, to identify trends and lessons learned, and to address the range of challenges related to screening in a complex security environment. We are making two recommendations—one to the Secretary of the Department of Homeland Security and one to the Attorney General: We recommend that the Secretary of the Department of Homeland Security direct FPS to develop and implement a strategy for using covert-testing data and data on prohibited items to improve FPS’s security-screening efforts. 
The strategy should, at a minimum, aim to ensure that: covert-testing data are used to systematically monitor, review, and improve performance nationwide; covert-testing data are used to determine which testing scenarios will be implemented or reinstated; and data on prohibited items are analyzed to determine the reasons for wide variations in the number of reported prohibited items detected across buildings and to assist with managing the screening process and informing policy. We recommend that the Attorney General direct USMS to develop and implement a strategy for using intrusion-testing data and data on prohibited items to improve USMS’s security-screening efforts at federal courthouses held by GSA. The strategy should, at a minimum, aim to ensure that: intrusion-testing data are used to systematically monitor and review performance nationwide; intrusion-testing data are used to determine, with stakeholders, what frequency of testing is appropriate; and data on prohibited items are analyzed to determine the reasons for wide variations in the number of reported prohibited items detected across buildings and to assist with managing the screening process and informing policy. We provided a draft of this report to the AOUSC, DHS, DOJ, GSA, and SSA for review and comment. DHS and DOJ concurred with the recommendations directed at FPS and USMS, respectively. DHS stated that, moving forward, FPS will continue to develop an overall strategy to better define how to leverage covert testing and prohibited items data to systematically monitor, analyze, and improve screening processes nationwide and inform policy. DHS’s official written response is reprinted in appendix II. DOJ conveyed its concurrence with the recommendation in an e-mail. AOUSC, DHS, DOJ, and GSA provided technical comments, which we incorporated as appropriate. SSA agreed with the report as written and did not have any technical comments. 
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Director of AOUSC; the Secretary of Homeland Security; the Attorney General of the United States; the Administrator of GSA; and the Commissioner of SSA. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or GoldsteinM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report focuses on security screening at General Services Administration (GSA) buildings. Specifically, our review addressed the following questions: (1) What challenges do federal entities face in their efforts to prevent prohibited items and individuals who may pose a security threat from entering GSA buildings? and (2) What actions have these federal entities taken to assess the effectiveness of their screening efforts, and what have been the results? This report is a public version of a previously issued report identified by DHS and DOJ as containing information designated as For Official Use Only, which must be protected from public disclosure. Therefore, this report omits sensitive information regarding FPS’s and USMS’s covert- and intrusion-testing data, specific examples of the types of covert and intrusion tests these two entities used, and the names and locations of the buildings we visited, among other things. The information provided in this report is more limited in scope as it excludes such sensitive information, but it addresses the same questions as the For Official Use Only report and the overall methodology used for both reports is the same. 
For our review, we selected two civilian federal tenant entities: the judiciary and the Social Security Administration (SSA). We selected the judiciary and SSA because the missions of these tenant entities result in high levels of public interaction and public visits to their offices within GSA buildings. We also selected the judiciary and SSA because they occupy a large proportion of GSA’s federally owned building inventory, with the judiciary having the largest presence overall. For the purposes of this report, we focused our efforts on the security screening of persons. To inform both objectives, we selected a nongeneralizable sample of 11 federally owned buildings held by GSA in three major metropolitan areas for our site visits. The focus of our review was on federally owned buildings held by GSA with a facility security level (FSL) IV. We selected these 11 buildings because FSL IV buildings are considered to have a “high” level of risk, and also based on a variety of other criteria, including the presence of our two selected tenant entities, recommendations received from agency officials, and, for the Federal Protective Service (FPS), possible inconsistencies in its data on prohibited items. To determine challenges federal entities face in their efforts to prevent prohibited items and individuals who may pose a security threat from entering GSA buildings, we interviewed GSA headquarters officials, FPS and USMS officials responsible for security issues at the headquarters level, FPS regional and USMS district level officials, and also officials at the building level for our 11 selected GSA buildings. FPS is the primary agency responsible for providing law enforcement and related security services at GSA buildings. USMS has primary responsibility for various aspects of protecting federal courthouses and other federal buildings with a court presence. 
Although information from our building visits is not generalizable to all GSA buildings, this information provides illustrative examples and context to our understanding of the challenges faced by FPS and USMS when conducting building security screening. This approach yielded diverse perspectives as our selected group of buildings varied in building type, use, size, and composition of federal tenant entities. Prior to our building visits, we reviewed FPS and USMS documentation on efforts to manage security screening. We requested and reviewed security assessments and reports for buildings we visited as well as other buildings located in the FPS regions and USMS districts we visited, to the extent they were available. In preparation for our site visits, we also provided the appropriate FPS regional and USMS district officials with a series of questions regarding security-screening challenges, and asked for their responses. To further understand the challenges that FPS and USMS may face, we also spoke with members of the National Association of Security Companies, the nation’s largest contract security officer association, whose membership includes companies that provide government contract security officers. National Association of Security Companies officials provided their perspectives regarding security-screening issues at federal buildings, such as challenges faced by security officers. To determine actions federal entities have taken to assess the effectiveness of their screening efforts and the results of these efforts, we compared FPS’s and USMS’s efforts to comply with the Interagency Security Committee’s (ISC) standards, including The Risk Management Process for Federal Facilities and the Items Prohibited from Federal Facilities. 
We also reviewed FPS and USMS agency directives, policies, and guidance related to assessment tools such as collecting data on prohibited items and conducting covert and intrusion tests at security-screening entrances, and we obtained and analyzed FPS and USMS data submissions for these assessment areas. For example, we reviewed FPS and USMS’s data on prohibited items from fiscal years 2004 through 2013. For FPS, we also obtained covert-testing data at the national, regional, and building level from fiscal years 2010 through 2013. For USMS, we obtained agency-wide results of its intrusion tests and detailed data for the districts we visited from fiscal years 2010 through 2013. To gather detailed examples of security-screening data issues and to learn about the processes by which data are collected and submitted, we compared our findings from our building visits with the data provided by selected agencies. We then assessed FPS’s and USMS’s processes for managing these data against agency requirements and GAO’s Standards for Internal Control in the Federal Government. According to GAO’s standards for internal control, internal controls are a major part of managing an organization and comprise the plans, methods, and procedures used to meet missions, goals, and objectives. Internal controls, which are synonymous with management controls, help government program managers achieve desired results through effective stewardship of public resources, and control activities contribute to data’s accuracy and completeness. We also interviewed agency officials about the data and conducted a data reliability assessment for the data we reviewed. We posed questions to officials at FPS and USMS about the collection and reporting of prohibited items and covert and intrusion-testing data. We determined that the agencies’ data on prohibited items are not always complete or properly reported. 
As a result, agencies cannot ensure that prohibited-items data are sufficiently reliable to support sound management and decision making about security-screening issues. However, based on information gathered for covert and intrusion tests conducted by FPS and USMS, we determined that the data were sufficiently reliable for describing the tests conducted from fiscal years 2010 through 2013 and the results of those tests. We conducted this performance audit from January 2014 to March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, other key contributors to this report were David Sausville, Assistant Director; Catherine Kim, Analyst-in-Charge; Russell Burnett; Raymond Griffith; Geoffrey Hamilton; Delwen Jones; Hannah Laufe; Tom Lombardi; and Josh Ormond.
FPS and USMS conduct building security screening at thousands of GSA buildings across the country. Given continued concerns related to the security of federal buildings, GAO was asked to examine (1) the challenges federal entities face in their efforts to prevent prohibited items and individuals who may pose a security threat from entering GSA buildings and (2) the actions federal entities have taken to assess the effectiveness of their screening efforts, and the results of those actions. GAO conducted site visits to 11 selected buildings in three metropolitan areas based on a variety of criteria, including security level, agency officials' recommendations, and, for FPS, possible inconsistencies in its data on prohibited items, among other factors. GAO analyzed FPS's and USMS's data, reviewed relevant documentation, and interviewed FPS and USMS officials in headquarters and the field. The Department of Homeland Security's (DHS) Federal Protective Service (FPS) and the Department of Justice's (DOJ) United States Marshals Service (USMS) experience a range of challenges in their efforts to provide effective security screening, including: Building characteristics and location may limit security options: many General Services Administration (GSA) buildings were designed and constructed before security screening became a priority. Balancing security and public access: striking an appropriate balance between facilitating the public's access to government services and providing adequate security can be difficult, for example, when there is a high volume of visitors. Operating with limited resources: some FPS protective security officers are not fully trained to conduct security screening, and FPS and USMS may have limited funding for additional training or additional security officers. 
Working with multiple federal tenants: many tenant stakeholders at multi-tenant GSA buildings have differing needs and priorities that may not always align when trying to build consensus for security-screening decisions. To assess security-screening efforts, both FPS and USMS have taken steps such as conducting covert and intrusion tests and collecting data on prohibited items. From fiscal years 2011 to 2013, FPS data show that protective security officers passed covert tests on security-screening procedures at a low rate. In October 2012, FPS reduced the number of screening scenarios used for covert testing, but has since reinstated some of them. USMS data show that court security officers passed intrusion tests on security screening at a higher rate. For example, USMS reported that court security officers passed 83 percent of intrusion tests on security screening in fiscal year 2010, 91 percent in fiscal year 2011, and 92 percent in fiscal years 2012 and 2013. Although USMS tests more frequently than FPS, it has not met its intrusion-test frequency requirement per building each year. In addition, FPS's and USMS's data on prohibited items show wide variations in the number of items identified across buildings. For example, FPS reported it had detected approximately 700,000 prohibited items in 2013; however, FPS data showed that there were 295 buildings with no reported data on prohibited items from fiscal years 2004 through 2013. While FPS and USMS may use the results of covert and intrusion tests to address problems at the individual building or FPS region or USMS district level, to some degree, they do not use the results to strategically assess performance nationwide. The benefits of using data in this manner are reflected in the Interagency Security Committee's (ISC) guidance, as well as key practices in security and internal control standards GAO has developed. 
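The nationwide roll-up of test results that such a strategic approach could draw on is straightforward to sketch. The example below is illustrative only; the records, field layout, and comparison threshold are hypothetical and do not reflect FPS's or USMS's actual systems, data, or results.

```python
from collections import defaultdict

# Hypothetical covert/intrusion-test records: (region, building, passed).
# Illustrative data only -- not actual FPS or USMS results.
tests = [
    ("Region 1", "Building A", True),  ("Region 1", "Building A", False),
    ("Region 1", "Building B", True),  ("Region 2", "Building C", False),
    ("Region 2", "Building C", False), ("Region 2", "Building D", True),
]

def pass_rates_by_region(records):
    """Roll building-level test results up to the regional level."""
    totals = defaultdict(lambda: [0, 0])  # region -> [passed, conducted]
    for region, _building, passed in records:
        totals[region][1] += 1
        if passed:
            totals[region][0] += 1
    return {region: passed / conducted
            for region, (passed, conducted) in totals.items()}

rates = pass_rates_by_region(tests)
national_rate = sum(1 for _, _, passed in tests if passed) / len(tests)

# Regions falling below the national rate would be candidates for follow-up,
# such as retraining, policy review, or reinstated testing scenarios.
flagged = [region for region, rate in rates.items() if rate < national_rate]
```

The same roll-up could be applied to per-building counts of prohibited items to investigate why some buildings report hundreds of thousands of items while others report none.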
Without a more strategic approach to assessing performance, both FPS and USMS are not well positioned to improve security screening nationwide, identify trends and lessons learned, and address the aforementioned challenges related to screening in a complex security environment. GAO recommends that FPS and USMS each develop and implement a strategy for using covert- and intrusion-testing data, respectively, and prohibited-items data to improve security-screening efforts. Specifically, for FPS, the strategy would, among other things, help determine which covert testing scenarios to use. For USMS, the strategy would, among other things, help determine the appropriate frequency of intrusion testing. DHS and DOJ concurred with GAO's recommendations.
The U.S. government maintains more than 270 diplomatic posts, including embassies, consulates, and other diplomatic offices, in about 180 countries worldwide. More than 80,000 U.S. government employees work overseas, including both U.S. direct hires and locally-employed staff under chief of mission authority, representing more than 30 agencies and government entities. Agencies represented overseas include the Departments of Agriculture, Commerce, Defense, Homeland Security, Justice, State, the Treasury, and USAID. In the aftermath of the August 1998 bombings of two U.S. embassies in Africa, State formed the Overseas Presence Advisory Panel to conduct an assessment of overseas presence. The panel determined that overseas staffing levels had not been adjusted to reflect changing missions, requirements, and security concerns. Some missions were overstaffed, while others were understaffed. In 2002, we outlined a framework for assessing overseas staff levels. In 2003, we found that U.S. agencies’ staffing projections for new embassy compounds were developed without a systematic approach or comprehensive rightsizing analysis. In 2004, Congress mandated the establishment of the Office of Rightsizing within State. The Office of Rightsizing was combined with two other offices in 2007 to create M/PRI. The House Foreign Affairs Committee directed the office to lead State’s efforts to develop internal and interagency mechanisms to coordinate, rationalize, and manage the deployment of U.S. government staff overseas. This legislation was intended to result in the reallocation of resources to achieve a leaner, streamlined, more agile and secure U.S. government presence abroad. The conference report accompanying the legislation establishing the Office of Rightsizing stated that a proper rightsizing plan should include a systematic analysis to bring about a reconfiguration of overseas staffing to the number necessary to achieve U.S. 
foreign policy needs, and noted that rationalizing staffing and operations abroad had the potential for significant budgetary savings. The office was directed by the Senate Foreign Relations Committee to review all U.S. government staffing overseas, including all American and foreign national personnel, in all employment categories. The House Foreign Affairs Committee also directed OBO to work closely with M/PRI to ensure that projected staffing levels for new embassy compounds were prepared in a disciplined and realistic manner, and that these estimates become a basis for determining the size, configuration, and budget of new embassy construction projects. M/PRI conducts rightsizing reviews before each construction project and on each mission every 5 years, among other responsibilities. M/PRI focuses on streamlining staffing levels by, for example, consolidating or outsourcing administrative functions. M/PRI also looks for opportunities to substitute less expensive, locally-employed staff for more expensive U.S. direct-hire employees. According to the guidance M/PRI provides to overseas missions, a rightsizing analysis may lead to the reallocation of resources from one mission goal to another and to enhancing operational efficiency through regionalization and centralization. M/PRI uses GAO’s definition of rightsizing: aligning the number and location of staff assigned overseas with foreign policy priorities, security concerns, and other constraints. Rightsizing may result in the addition or reduction of staff, or a change in the mix of staff at a given embassy or consulate. M/PRI’s guidance stresses that all sections and agencies of an overseas mission should be included in a rightsizing analysis. In the first step of the rightsizing process, overseas missions, generally led by the mission’s management officer, prepare a report for M/PRI outlining their strategic goals, current staffing data for all agencies, and projected staffing levels 5 years into the future. 
State and non-State agencies present at an overseas mission provide their staffing data to be included in the mission’s submission to M/PRI. M/PRI officials stated that, under their current process, an M/PRI analyst usually visits the mission to assist in preparing the rightsizing report. After a mission completes its rightsizing report, the relevant regional bureau approves the submission before sending it to M/PRI. Next, M/PRI conducts its analysis of staffing at the mission, coordinating with the headquarters of non-State agencies to confirm the numbers provided at the mission for those agencies. When M/PRI completes a draft rightsizing review, other State bureaus and agencies have the opportunity to review and discuss it. According to officials from State bureaus, they frequently engage in a dialogue with M/PRI to negotiate the staffing projections to be published in the rightsizing review and in a majority of cases, differences in projected staffing numbers are resolved through these discussions. Once all bureaus and agencies have reviewed the rightsizing review document, M/PRI finalizes and publishes it on an internal State website. Since its creation in 2004, State’s rightsizing office has conducted 224 reviews. According to M/PRI officials, all overseas missions have undergone the process once, and a second round of reviews is now under way. M/PRI provided us with 181 rightsizing reviews within the time frame of our analysis. Since that time, M/PRI has completed additional reviews. The staffing levels of a mission are determined by the chief of mission through the National Security Decision Directive 38 (NSDD-38) process, which provides authority for the chief of mission to determine the size, composition, or mandate of personnel operating at the mission. To add or abolish U.S. direct-hire positions at a mission, agencies electronically submit an NSDD-38 request for the chief of mission to either approve or deny. 
Requests may only include one agency in one country, but may include requests for multiple positions. Formal submission is generally preceded by informal discussions about the requested positions, according to officials. State has improved the consistency of its analyses across overseas missions, but differences between actual and projected staffing levels still exist due to unanticipated events and other factors. We reported in 2006 that the Office of Management Policy, Rightsizing, and Innovation (M/PRI) had not been conducting its rightsizing reviews in a consistent manner. State has since improved the consistency of its reviews by developing a variety of methodological tools and a standard template that it applies to each mission. These tools include ratios and formulas that compare missions similar in size and foreign policy priority to help M/PRI project what the office determines is the appropriate level of staffing at each mission. We found that although actual staffing levels as of December 2011 were within 10 percent of projected staffing levels in over half of the reviews we analyzed, over 40 percent of the missions have staffing level differences over 10 percent. Unanticipated events and other factors, such as changes in policies and priorities, contributed to the differences between actual and projected staffing levels. With its current approach to rightsizing, State has improved the consistency of its analysis across overseas missions. In 2006, we reported that the information presented in rightsizing reviews varied from mission to mission and the rightsizing elements that missions evaluated and reported were not consistent. Some missions provided narratives discussing various rightsizing elements, such as outsourcing and post security, while others did not. The reviews ranged in length from less than 5 pages to over 20 pages. According to current M/PRI officials, the methodology used in the rightsizing process has evolved since the office was created. 
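The within-10-percent comparison used in our analysis reduces to simple arithmetic. The sketch below uses the Bolivia and Algeria desk-position figures cited in this report and assumes the percentage difference is measured relative to the actual staffing level; the report does not state the base used, so that choice is an assumption (it reproduces the "nearly 20 percent" figure cited for Algeria).

```python
def pct_difference(actual, projected):
    """Percentage difference between actual and projected staffing,
    measured relative to the actual level (an assumption; the report
    does not state the base used)."""
    return (actual - projected) / actual * 100

def categorize(diff_pct):
    """Bucket a mission the way the analysis describes."""
    if abs(diff_pct) <= 10:
        return "within 10 percent of projection"
    if diff_pct > 0:
        return "more staff than projected"
    return "fewer staff than projected"

# U.S. direct-hire desk positions, actual (December 2011) vs. projected:
algeria = pct_difference(56, 45)   # roughly 20 percent above projection
bolivia = pct_difference(81, 164)  # actual less than half of projected
```

With these inputs, Algeria falls in the "more staff than projected" bucket and Bolivia in the "fewer staff than projected" bucket, consistent with the characterizations in this report.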
M/PRI officials stated that their reviews are now more standardized than in the past. The reviews now contain the same types of information in a similar format and have a more uniform level of detail. The required elements of a rightsizing review include detailed analysis of current and projected staff for each section of an overseas mission, as shown in table 1. M/PRI has also refined its methodology for analyzing administrative, management, and program staff. M/PRI has developed uniform guidance for staff at overseas missions to use in preparing rightsizing submissions. The majority of State officials at posts we visited that had participated in a rightsizing review said that the M/PRI guidance was helpful for the post in completing its submission. M/PRI has developed standard methodological tools to examine overseas staffing on a mission-by-mission basis. These tools are ratios and formulas that compare missions considered similar in size, foreign policy priority, and management and administrative requirements, and help M/PRI to determine what it believes to be appropriate staffing levels in each section of an overseas post. The total management ratio, for example, is the number of customer units divided by the number of U.S. direct-hire management positions. Further, the level of program staff is analyzed using two tools—the Four Factor Index and diplomatic density. The Four Factor Index is an attempt to measure a country’s theoretical foreign policy importance to the United States using a combination of factors such as population, gross domestic product, trade volume with the United States, and U.S. foreign assistance. Diplomatic density is an effort to quantify the size of the U.S. diplomatic presence in a country with respect to U.S. interests in that particular country. It is calculated by dividing the number of diplomatic direct-hire positions present in a given country by the Four Factor Index. 
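The tools just described reduce to simple ratios. The sketch below is illustrative only: the report does not specify how M/PRI weights or normalizes the four factors, so an equal-weight sum of normalized factor scores is assumed, and all input figures are invented.

```python
def four_factor_index(population, gdp, trade_volume, assistance,
                      weights=(0.25, 0.25, 0.25, 0.25)):
    """Combine the four normalized factor scores into a single index.
    Equal weighting is an assumption; M/PRI's actual formula is not
    described in the report."""
    factors = (population, gdp, trade_volume, assistance)
    return sum(w * f for w, f in zip(weights, factors))

def diplomatic_density(direct_hire_positions, index):
    """Diplomatic direct-hire positions divided by the Four Factor Index."""
    return direct_hire_positions / index

def total_management_ratio(customer_units, us_direct_hire_mgmt_positions):
    """Customer units per U.S. direct-hire management position."""
    return customer_units / us_direct_hire_mgmt_positions

# Invented figures for a hypothetical mission (factor scores normalized 0-1):
index = four_factor_index(0.8, 0.9, 0.7, 0.1)
density = diplomatic_density(120, index)
mgmt_ratio = total_management_ratio(300, 3)
```

Holding staff constant, a larger Four Factor Index (greater theoretical importance) yields a lower density, which is how a heavily staffed but strategically important post can still compare as relatively lean.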
According to M/PRI officials, diplomatic density tends to be relatively low in developed countries with which the United States has close relations, such as Canada, Japan, and Germany, or where our interests are limited or primarily humanitarian. Diplomatic density may be higher where the United States has or has recently had difficult relations or where vital security interests are at stake, such as in Russia and many of the countries in the Middle East. Many post officials we spoke with considered M/PRI’s standardized analysis appropriate but emphasized the need for flexibility to account for varying circumstances at each post. Some officials noted that M/PRI’s comparative analysis among posts was particularly helpful in providing context for staffing decisions. For example, one management officer stated that the rightsizing review found that locally-employed staff at post had heavier workloads than their counterparts at similar posts. The post used this analysis as justification for requesting more locally-employed staff positions. According to non-State officials, M/PRI generally coordinates with other agencies in preparing rightsizing reviews of U.S. government staffing overseas. In 2006, we reported that coordination with other agencies in the rightsizing process was initially limited. Non-State agencies had voiced a number of concerns regarding their interaction with the Office of Rightsizing, including their desire for greater participation in the rightsizing process. We recommended that the Office of Rightsizing increase its outreach activities with non-State agencies so that all relevant agencies with an overseas presence could discuss rightsizing initiatives on a regular and continuous basis. During our current review, non-State officials stated that M/PRI’s current coordination efforts had improved. 
For more than half of the 144 staffing projections based on rightsizing reviews that we analyzed, actual staffing levels as of December 2011 were within 10 percent of review staffing projections, either higher or lower. However, over 40 percent of the projections based on the reviews had differences of greater than 10 percent. About 30 percent of these had more staff than projected and 13 percent had fewer (see fig. 1). In a few cases, the actual staffing levels as of December 2011 were much higher or lower than the projected levels. For example, the actual number of U.S. direct-hire desk positions (81) in Bolivia as of 2011 was less than half of the projected number of U.S. direct-hire desk positions (164). On the other hand, the actual number of U.S. direct-hire desk positions in Algeria (56) was nearly 20 percent higher than the projected level (45). See appendix I for more information about our methodology. Numerous factors contribute to differences between projected and current staffing levels, such as unanticipated U.S. and foreign government policy changes. Officials from ten missions we identified as having the largest differences between December 2011 staffing levels and the rightsizing projected staffing levels, either higher or lower, identified such factors. Table 2 shows the percentage differences between December 2011 actual staffing levels and projected total staffing levels based on the rightsizing reviews for these missions. Unanticipated changes in U.S. government policies and priorities contribute to differences between actual and projected staffing levels at overseas posts. Programs such as the President’s Emergency Plan for AIDS Relief (PEPFAR) and USAID and State hiring initiatives, including Diplomacy 3.0, have added additional staff to overseas posts, while other changes in U.S. foreign policy have led to lower-than-projected staffing levels. According to the management officer in Mozambique, the increase in the number of U.S. 
direct-hire and locally-employed staff positions as a result of PEPFAR’s initiation was greater than anticipated. The introduction of the Visa Waiver Program for Korea reduced the need for consular officers to conduct visa interviews and led to lower-than-projected staffing levels, according to the management officer in Korea. Ghana became a USAID priority country and the beneficiary of the Global Health Initiative, Feed the Future, and Partnership for Growth, which led to increased staffing levels, according to the management officer in Ghana. USAID’s and State’s hiring initiatives added a human resource officer, a political officer, and a general service officer, positions not anticipated at the time of the rightsizing review, according to the management officer in Mozambique. According to the management officer in Pakistan, increased funding to address development and security projects has led to higher staffing levels than the rightsizing review projected. The closure of an Arabic language school for State employees in Tunisia resulted in staffing levels below rightsizing projections, according to the management officer. Unanticipated changes in foreign government priorities and political environment can contribute to differences between actual and projected staffing levels. A foreign government’s decision to eliminate program funding or request the closure of a U.S. program usually leads to lower staffing levels, as in the following examples. According to the Deputy Chief of Mission in Kuwait, the decrease in Kuwaiti government funding for the Office of Military Cooperation-Kuwait caused the post to reduce staffing levels beginning in 2009. In 2008, the Bolivian government ordered the U.S. Drug Enforcement Agency to leave Bolivia, leading to an unexpected reduction in staff, according to the management officer in Bolivia. According to the management officer in Libya, staff levels decreased after the evacuation and destruction of the U.S. 
embassy in February 2011. Additionally, some posts reported that they were unable to carry out the relatively large reductions in staffing levels projected in the rightsizing reviews, usually for locally-employed staff positions. M/PRI projected sizeable reductions in locally-employed staffing levels for posts through outsourcing or contracting. However, some posts reported that a lack of viable service options in the local economy made it unfeasible to outsource or contract services. For example, in Mozambique, outsourcing services such as the motor pool, customs shipping, travel services, and warehousing is not feasible due to the country’s poor infrastructure, according to a management officer in the country. In Bangladesh, according to a management officer in the country, the post does not contract custodial services, warehouse services, or car repair as recommended by the rightsizing review because no local contracting options exist. In Burkina Faso, the embassy did not contract guard services because no major contractors exist in the capital, Ouagadougou, and local companies cannot provide the level of quality and service required by the post, according to the embassy’s management officer. Rightsizing recommendations often focus on administrative or management positions, where efficiencies are considered likely to be achieved. M/PRI typically does not make recommendations to non-State agencies and generally relies on non-State agencies, as well as certain State bureaus, to determine their own staffing needs. Rightsizing reviews contain recommendations to improve post operations and eliminate duplicative services and positions; these recommendations often focus on State’s administrative and management staff. To develop its recommendations, M/PRI reviews the levels of all staff at missions and seeks input from both State and non-State agencies. 
Many of M/PRI’s recommendations that we analyzed focused on State administrative and management staff rather than programmatic staff or staff from other agencies. Officials stated that administrative and management functions are where greater efficiencies are considered likely to be achieved. M/PRI recommendations may include outsourcing or regionalization of administrative functions such as voucher processing or warehousing. These changes affect administrative staff responsible for those functions, at times addressing dozens of positions filled by locally-employed staff. In Albania, for example, the rightsizing review recommended a reduction of over half of the locally-employed staff non-desk positions, from 216 to 93, mainly through outsourcing of guard services. In Bangladesh, the rightsizing review recommended eliminating 27 locally-employed non-desk staff positions out of a total of 192 to improve the efficiency of administrative functions, such as building, gardening, and custodial services. The review found that the number of square meters maintained per service provider for both residential and non-residential buildings in Bangladesh was lower than the worldwide median. For example, the review found that the area a service provider maintained in Bangladesh was less than half that in other posts for non-residential buildings and thus deemed the service to be inefficient. It recommended eliminating a sufficient number of positions to bring the ratio of square meters per service provider on par with other posts. According to State officials, the focus on management services is appropriate because that is where duplication of effort is most likely to occur. State officials said that it is easier to apply M/PRI’s quantitative tools to administrative and management staff activities than to programmatic activities. According to State officials, administrative or management work is better suited to measurements that can be compared across posts. 
For example, voucher examiners can record the volume of vouchers handled in a given time and the length of time they take to process. M/PRI has developed tools to assess the level of administrative support needed at posts of different sizes and has used those tools to compare posts of similar size. By comparing the efficiency of administrative services across similar posts, M/PRI has developed targets that posts should meet and uses these targets to identify posts that may be under- or overstaffed in administrative functions. For example, the rightsizing review for Paraguay recommended that the embassy cut one U.S. direct-hire position in administrative services support, a general services officer. This recommendation was based on comparing the workload of Paraguay’s service providers with workloads of service providers at similar posts (Uruguay, Croatia, and Cyprus). Rightsizing reviews also evaluate whether posts can utilize locally-employed staff in a position rather than a more costly U.S. direct hire. For example, the 2010 rightsizing review for Kenya recommended that the post use appointment-eligible family members to serve in office management positions instead of U.S. direct hires. According to M/PRI, the cost of employing these appointment-eligible family members is only a fraction of that of U.S. direct-hire employees, and their use helps minimize the American footprint in dangerous overseas environments. In addition, M/PRI recommended that appointment-eligible family members be considered for employment if host country nationals are unavailable or present an unacceptable risk. According to State officials, it is more difficult to quantify the workload of program staff such as political officers than that of administrative and management staff. M/PRI has developed methodological tools to measure a post’s diplomatic density and foreign policy priority for comparison with similar posts. 
However, State officials said that it is difficult to assess the efficiency of program staff due to the qualitative nature of their activities, such as discussing policy issues with their diplomatic counterparts or drafting briefing documents for visiting officials. Nevertheless, M/PRI makes recommendations regarding programmatic staff where possible. In Kuwait, for example, the 2010 rightsizing review recommended the periodic reevaluation of the political and economic sections to assess the possibility of combining them. In some cases, M/PRI has made broader recommendations for posts to review levels of staff across an entire region. For example, M/PRI recommended that the Bureau of European and Eurasian Affairs reevaluate an appropriate presence in former Warsaw Pact country posts, given that the political and economic environment in these countries has shifted dramatically during the past 2 decades. M/PRI reviews all U.S. government staffing overseas and incorporates staffing data and projections from non-State agencies with a presence overseas. While chiefs of mission have final decision-making authority on staffing changes at their missions, M/PRI officials stated that their office does not have the authority to direct non-State agencies’ overseas staffing decisions. M/PRI generally does not analyze staffing numbers of other U.S. agencies overseas or make recommendations affecting these staff. Instead, M/PRI officials stated that they rely on these agencies to conduct their own rightsizing assessments and determine independently what their staffing needs will be for each post. M/PRI infrequently makes recommendations to other agencies, such as USAID. For example, M/PRI recommended that USAID evaluate the distribution of its staff in Central America, questioning the sustainability and cost-effectiveness of high USAID staffing levels in El Salvador and suggesting that USAID’s development resources could be better utilized elsewhere in Central America. 
However, such broader recommendations are an exception in rightsizing reviews and not a common occurrence, according to M/PRI officials. According to some bureau officials, non-State agencies that are relatively new to operating overseas have been slow to acclimate to the rightsizing process. State officials noted that non-State agency officials in Washington might have a different view of long-range overseas staffing needs than their agency officials at post. Several officials from different regional bureaus said that agencies prefer to conduct their own strategic planning and staffing exercises and view rightsizing as an activity internal to State. Officials from several non-State agencies confirmed that they conduct their own internal staffing analyses. For example, officials from the Department of Homeland Security noted that they review overseas staffing on an ongoing basis, since current events dictate the department’s operational needs. Similarly, officials from the Centers for Disease Control and Prevention stated that they evaluate overseas staffing through annual updates to their strategic staffing plan and look for opportunities to reduce U.S. direct hires by empowering locally-employed staff to serve in senior management and leadership positions. The Defense Intelligence Agency coordinates the DOD’s rightsizing efforts at U.S. posts; DOD components reevaluate positions worldwide as requirements change to ensure that staff are best positioned to achieve the department’s mission, according to an agency official. State uses rightsizing reviews to plan facilities construction and for certain staffing considerations, but some U.S. officials said that use of the reviews is limited, and State officials do not monitor whether recommendations are implemented. State’s Bureau of Overseas Buildings Operations (OBO) uses the staffing projections in rightsizing reviews to plan the size and estimate the initial costs of new embassy and consulate compounds. 
Further, M/PRI uses rightsizing reviews when it assesses requests from State or other agencies to add staff to overseas posts, although the respective chief of mission makes the final decision for his or her mission. However, some regional bureau officials said that they do not actively use the reviews except as a historical overview of staffing, and some post officials said that they do not use the reviews at all. In addition, State often uses documents other than rightsizing reviews to inform decisions in areas such as determining staffing levels and regionalization. Finally, State does not monitor the implementation of rightsizing review recommendations and has not designated an office with that responsibility, making it difficult to know the extent to which rightsizing reviews are having an impact. State uses rightsizing reviews for various purposes, according to U.S. officials. These officials use reviews to, among other things, plan new construction, assess requests to add staff to a post, and sometimes, in conjunction with other information, allocate resources. In addition, some State officials stated that rightsizing is the only comprehensive process to verify the number of overseas positions and the personnel occupying them. The reviews that precede the construction of a new diplomatic compound have the most impact, according to M/PRI’s fiscal year 2010 report to Congress, because OBO uses the rightsizing projections to plan the size and estimate the preliminary costs of such projects. OBO officials told us that using rightsizing reviews to plan new construction is a significant improvement over the process previously used, which was informal and not systematic. Rightsizing reviews must accompany any proposal for new construction that is sent to the Office of Management and Budget and to Congress. 
While OBO bases its construction plans on M/PRI’s rightsizing review, OBO officials stated that they also verify the staffing numbers in the rightsizing reviews with the staffing numbers in personnel databases and with agency and post officials. If post staffing levels increase by more than 10 percent (the amount of growth space OBO builds in) after a project has started, OBO asks M/PRI to do a rightsizing revision to obtain more accurate numbers and improve construction planning, according to OBO officials. Regional bureau officials stated that they and post officials pay particularly close attention to rightsizing reviews that are conducted in preparation for construction because they want to ensure that OBO plans enough space for the new diplomatic compounds. Further, M/PRI and post officials stated that they use rightsizing reviews when assessing requests by State or other agencies through the NSDD-38 process to add staff to overseas posts, although the final decision on requests is made by the chief of mission. An M/PRI official stated that rightsizing reviews are intended to be used by the chief of mission to inform decisions on staffing, including those made through the NSDD-38 process. A few post management officers told us that the rightsizing process had prompted posts to review staffing requests more carefully. One management officer said that the rightsizing process also prompted a more substantial justification for NSDD-38 requests, adding organization and structure to the decision-making process. Another management officer said that rightsizing prompted the post to launch a new internal mechanism to control growth. The post instituted an internal pre-NSDD-38 vetting process requiring each office or agency to justify the need for a requested position via internal memorandum and explain how it would be funded and address other logistical needs (such as available office space). 
In addition, some officials from State bureaus and posts told us that they use rightsizing reviews in a variety of other ways. Bureau of Diplomatic Security officials said that they use rightsizing reviews in conjunction with Office of Inspector General (OIG) reports, annual Mission Strategic and Resource Plans (MSRP), and other information to make resource allocation decisions in their annual staffing planning exercise. In addition, an official in Kuwait said that she read the rightsizing review when she arrived at post because it gave a more concise summary of conditions at post than other documents, such as the MSRP. Further, a regional bureau official stated that the primary value of rightsizing was that it forces missions to systematically collect information and plan for future staffing. Several officials stated that undertaking the rightsizing process acts as a check on growth in overseas staffing levels. For example, M/PRI’s fiscal year 2011 report to Congress states that M/PRI projected 42 fewer U.S. direct-hire positions than missions had projected. Some post officials, particularly those in management functions, said that they refer to rightsizing reviews to support staffing changes. For example, the management officer in Paraguay stated that the post concurred with the rightsizing recommendation to eliminate an assistant general services officer position; post officials are now in the process of abolishing the position. The financial management officer in Sarajevo said that she had already considered outsourcing cashiering, but a rightsizing recommendation to do so gave her more incentive to take action. Further, according to M/PRI officials, M/PRI’s 2007 review on Uruguay recommended adding a second U.S. direct-hire public diplomacy position, and the post has since implemented that recommendation. 
According to State officials, M/PRI provides a broader perspective in analyzing overseas staffing, providing information on where posts are overstaffed or understaffed, and recommending potential ways to achieve greater efficiencies. OBO officials stated that rightsizing is an independent process that provides staffing projections. According to regional bureau officials, the rightsizing review is currently the only tool that provides a comprehensive process to verify the number of overseas positions and the personnel occupying them. Officials from several regional bureaus said that M/PRI’s broader perspective in analyzing post operations was a benefit to rightsizing, as posts tend to have a narrower, more parochial perspective on what staffing levels are necessary. Several U.S. officials stated that they do not actively use rightsizing reviews; they view other documents and tools as more timely and useful for planning and staffing decisions. For example, officials from a regional bureau said that they do not actively use the reviews except as a historical overview of staffing. Officials from one regional bureau said that the 5-year reviews do not have as clear a use as those done specifically for construction. Some State post officials, especially in non-management functions, said that the rightsizing reviews were of little or no use to them. Several U.S. officials stated that they use MSRPs and OIG reports more frequently than rightsizing reviews to make staffing and resource allocation decisions. These officials said that they were more aware of the annual MSRPs, which are more current than 5-year rightsizing reviews, and OIG reports and recommendations, which require follow-up until they are closed. Officials said that rightsizing reviews, done every 5 years, quickly become outdated as the situation at a post changes. 
Officials from the Centers for Disease Control and Prevention said that, while the rightsizing review is a long-term planning document, the more immediate time frame of the annual MSRP is more actionable, given the short-term program-driven nature of the agency’s work. Further, some State officials told us that because the rightsizing process is still relatively new and done at each post only once every 5 years, many post management officers have not yet gone through a rightsizing review and may be unfamiliar with it. As a result, some post officials may be resisting the rightsizing process rather than viewing it as a tool, according to M/PRI officials. In addition, some officials said that the final rightsizing reviews are not widely disseminated, or that they do not know how to find the reviews. Department of Homeland Security officials said that this is the first year State has given them access to the final rightsizing review on State’s intranet. Previously, while they provided comments on drafts, they were not given access to the final document. In addition, a human resources officer at one of the posts we visited stated that the training State provides to new human resources officers does not mention the rightsizing review. Several officials at the posts we visited said that they first learned about their post’s rightsizing review in an announcement of our visit to discuss rightsizing. State has not clearly designated an office with responsibility for pursuing implementation of rightsizing recommendations and does not track recommendation status after completing a rightsizing review, making it difficult for M/PRI to assess impact. The legislation that established the rightsizing process states that the Secretary of State shall take actions to carry out the recommendations made in each rightsizing review. State officials have differing opinions about who should be responsible for implementing recommendations. 
M/PRI’s 2010 report to Congress states that rightsizing decisions are implemented through the NSDD-38 process, with the final decision resting with the chief of mission. However, one post official stated that regional bureaus should have responsibility for taking action on rightsizing recommendations because they make resource allocations across posts. Other post and regional bureau officials, in contrast, stated that individual posts have responsibility to take action on rightsizing recommendations because the recommendations are generally directed at the posts, not the bureau. Still other officials stated that the posts and regional bureaus should share responsibility for implementing the recommendations. Officials from one regional bureau said that M/PRI’s recent rightsizing recommendations were often developed in concert with the regional bureaus, which could prompt the bureau to follow up and encourage the post to implement the recommendations. M/PRI began requiring posts to provide recommendation implementation action plans in 2007 in response to one of our previous recommendations. However, officials said that they stopped doing the plans after about a year. The time horizon for implementing the rightsizing recommendations varied to such an extent that frequent reevaluation of progress would have been required to ensure compliance, which was impractical given M/PRI’s resource constraints, according to M/PRI officials. Officials from both M/PRI and the regional bureaus have noted that M/PRI does not have the authority to compel implementation of rightsizing recommendations. Some post officials noted that there is little incentive to implement recommendations, particularly if the recommendations are to decrease the workforce size. While posts may agree with rightsizing recommendations in concept, the tendency is for posts to protect their staffing levels and look for increases if possible. 
For example, an official in Prague agreed with a rightsizing recommendation to conduct a strategic regional review of staffing in former Warsaw Pact countries to determine whether the number of positions could be reduced. He noted, however, that it would be difficult to accomplish in practice because posts lack incentive to cut positions. The post’s budget provides salaries and other compensation for locally-employed staff, as well as some benefits for U.S. direct-hire staff, while State’s headquarters budget provides U.S. direct-hire staff salaries. Posts thus have little incentive to reduce U.S. direct-hire staff even though they are more costly than locally-employed staff. In addition, the chief of mission in a particular country has final authority over staffing decisions and may have priorities that extend beyond rightsizing considerations. Rightsizing reviews play a crucial role in planning construction of new diplomatic facilities overseas, can inform bureau and post decisions on staffing, and have prompted some posts to reassess staffing increases. M/PRI has improved the consistency of its rightsizing approach over the past several years. In addition, undertaking the rightsizing process can act as a check on growth in overseas staffing. A valuable component of the reviews is the recommendations made to improve post operations. The legislation that established the rightsizing process requires the Secretary of State to ensure that rightsizing recommendations are addressed; however, State officials have not developed a clear approach or designated an office to address, track, and report on such recommendations. No State office has responsibility for following up on recommendations, and posts or bureaus have limited incentive to undertake an examination of recommendations and implement them if they prove to have value. 
Further, any actions post officials take to implement recommendations may not be known or documented outside the post, which contributes to a substantial loss of information for State officials. Although the reviews have certain limitations, including competing priorities at posts, State has not yet realized the full potential of its rightsizing reviews. To strengthen the impact of future rightsizing reviews, State needs a process by which it can capture this information to inform future decisions about the optimal number and mix of staff at posts overseas to maximize the use of limited resources. Such a process would also strengthen State’s ability to report to Congress on the accomplishments of its rightsizing process. To strengthen the effectiveness of the rightsizing effort, we recommend that the Secretary of State designate the appropriate entity or entities to take the following two actions: 1. ensure that rightsizing recommendations are addressed, including time frames for their evaluation and implementation, and 2. track and report on the actions taken to implement the recommendations. We provided a draft of this report to State for comment. In its written comments, reproduced in appendix II, State emphasized that correctly aligning staffing with foreign policy goals and ensuring the maximum safety and efficiency of overseas operations remain top department priorities. State also noted that, given the critical role rightsizing reviews play in determining staffing levels in preparation for the construction of diplomatic facilities overseas and informing bureau and post decisions on future staffing needs, it is important that the rightsizing function be carried out optimally and that rightsizing data and analysis be shared widely. State indicated that it would carefully consider our recommendations, and it described a number of actions it intends to take that could address them. 
State noted that M/PRI will take the lead with regard to tracking implementation of rightsizing review recommendations. For rightsizing reviews initiated after August 1, 2012, as part of the ongoing second cycle of reviews, M/PRI analysts will outline the extent to which specific recommendations M/PRI provided in the previous rightsizing cycle have been implemented, as appropriate. State proposed that this information on progress related to implementation of M/PRI’s recommendations for overseas posts be included in the yearly rightsizing report to Congress beginning in December 2012. In addition, beginning in calendar year 2013, M/PRI will survey each mission 1 year after the completion of a rightsizing review to assess progress with regard to the implementation of recommendations. Posts will be asked to report on measures taken to comply with recommendations, provide a time frame for doing so, or explain changing conditions or policies that make compliance unfeasible. State proposed to then include this additional information in the yearly rightsizing report to Congress beginning in December 2013. Further, State reported ongoing efforts to refine analytical tools used in the rightsizing analysis and cited an intention to expand the number of outreach sessions and training on rightsizing to classes at its Foreign Service Institute. State also provided technical comments that were incorporated, as appropriate. We provided the Departments of Defense; Health and Human Services; Homeland Security; and Justice; and the U.S. Agency for International Development with relevant excerpts of the report and requested technical comments, but none were provided. We are sending copies of this report to interested congressional committees. We are also sending copies of this report to the Secretary of State. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-8980 or courtsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The objectives of this report were to examine (1) the consistency of the Department of State’s (State) approach to conducting rightsizing reviews and how its projections compare to actual staffing levels; (2) the focus of State’s rightsizing recommendations; and (3) the extent to which State uses its rightsizing reviews and monitors implementation of recommendations. Our scope included the 181 rightsizing reviews that State’s Office of Management Policy, Rightsizing and Innovation (M/PRI) completed between 2005 and 2011 that were provided within the time frame of our review. Each U.S. overseas mission has undergone at least one rightsizing review, according to M/PRI; a few have undergone two reviews. To obtain information on the consistency of State’s approach to conducting rightsizing reviews, the focus of rightsizing recommendations, and the extent to which State uses its rightsizing reviews and monitors implementation of recommendations, we reviewed agency documents, including M/PRI’s annual reports to Congress and Office of Inspector General (OIG) reports, and interviewed officials from State and non-State agencies, both in Washington, D.C., and at overseas posts. Specifically, we discussed rightsizing with State officials in Washington from M/PRI; regional bureaus; the Bureau of Overseas Buildings Operations; the Bureau of Diplomatic Security; the Bureau of Consular Affairs; and the OIG. We also spoke with officials from non-State agencies in the United States and overseas, including the Departments of Commerce, Defense, Health and Human Services, Homeland Security, and Justice, and the U.S. Agency for International Development. 
To obtain more detailed information on the consistency of State’s approach to conducting rightsizing reviews, how projections compare to actual staffing levels, the focus of rightsizing recommendations, and how State uses and monitors implementation, we selected 14 reviews to analyze in greater depth, traveling to 3 of the posts and contacting the other 11 by telephone or email. We based our selections on interviews with M/PRI and State’s regional bureaus, the content of the rightsizing reviews, and the political and security conditions at post to ensure that we analyzed a range of experiences. In selecting posts, we considered the date the rightsizing review was completed, whether other U.S. agencies were present at post, geographic diversity, and whether a post was located in a new embassy compound. We traveled to Prague, the Czech Republic; Sarajevo, Bosnia and Herzegovina; and Kuwait City, Kuwait, to discuss their respective rightsizing reviews with post officials. While at post, we interviewed officials in each embassy section, including the office of the chief of mission, management, human resources, financial management, facilities management, the regional security office, political affairs, public affairs, and consular affairs, among others. We also met with officials from other U.S. government agencies present at post. We also communicated with management officers at the following 11 missions: Bangladesh, Bolivia, Burkina Faso, Ghana, Korea, Libya, Mozambique, Pakistan, Paraguay, the Philippines, and Tunisia. To obtain additional information on the consistency of State’s approach to conducting rightsizing reviews, we reviewed agency documents (including M/PRI’s annual reports to Congress, M/PRI’s guidance to posts, and M/PRI’s guide to rightsizing for its analysts) and interviewed officials from State and non-State agencies, both in Washington, D.C., and at overseas posts. 
During our overseas site visits to the Czech Republic, Bosnia and Herzegovina, and Kuwait, we discussed the rightsizing process with the embassy section heads. To examine M/PRI’s coordination with other U.S. government agencies, we spoke with officials from non-State agencies in the United States and overseas. We also discussed their process for allocating overseas staff with these officials. In addition, we reviewed legislation related to the establishment of the Office of Rightsizing within State and the intent of rightsizing. To examine how M/PRI’s methodology has evolved in recent years, we reviewed 181 rightsizing reviews completed by M/PRI between 2005 and 2011. We reviewed information papers on M/PRI’s methodological tools for assessing both administrative staff and program staff, including the total management ratio and diplomatic density. To assess the extent to which State’s staffing projections compare with actual staffing levels, we relied on two main sources of data: (1) the staffing projections in the rightsizing reviews, which we manually entered into a spreadsheet, and (2) the actual staffing levels State extracted from the Post Personnel database for us. To assess the reliability of the data, we conducted a data consistency check and interviewed knowledgeable State officials on how the data were collected and maintained, as well as how the data were extracted for our use. We sent the staffing projection data we manually entered to State for verification. We determined that the data were sufficiently reliable for our purpose of comparing staffing projections with actual staff levels as of December 2011. We obtained 181 rightsizing reviews from the Office of Rightsizing. We took the following steps to reduce the number of reviews to 144 for the comparison analysis: deleted entries with projection years prior to 2011; deleted entries based on an older review if there were multiple reviews; deleted entries with unreliable data (for example, State told us that Afghanistan personnel numbers were not reliable); consolidated projections for bilateral and multilateral missions in the same country (for example, we combined projections for the U.S. missions to Belgium, the European Union, and the North Atlantic Treaty Organization into one entry); consolidated projections for multiple posts in one country into one entry (for example, we consolidated projections for posts in Russia and posts in Poland); and deleted entries with no projections. 
To compare rightsizing projections to the actual staffing levels of 2011, which is the year for which State provided personnel data, we extrapolated 2011 staffing levels based on rightsizing review projections. We assumed linear growth or decline in staffing levels. For example, if the base year was 2008 and the projection year was 2013, we divided the change in staffing levels by 5 (5 years between the projection year and the base year) to get the annual change in staffing levels. We added the changes for 3 years (3 years between the base year and 2011) to the base-year staffing level. We then identified the number of reviews in each category of differences between the actual and the projection: within 10 percent, 10 to 50 percent overprojection, 10 to 50 percent underprojection, more than 50 percent overprojection, and more than 50 percent underprojection. Missions with overprojections had fewer staff than projected, while those with underprojections had more. To understand the factors that could lead to differences between the actual and projected staffing levels, we identified posts with relatively large differences by generating a composite index for each country, taking into consideration the differences in absolute numbers and percentages for the following three categories: (a) U.S. 
direct-hire desk positions, which have the most significant impact on the physical space at a post; (b) locally-employed staff, which comprise the majority of the personnel overseas; and (c) country total, which captures all personnel at a post. Based on the composite index, we identified five countries for overprojection (Tunisia, Libya, Bolivia, Korea, and the Philippines) and five countries for underprojection (Pakistan, Bangladesh, Ghana, Mozambique, and Burkina Faso). The differences between projected and actual total staffing levels as of December 2011 were over 10 percent for all 10 countries. We then sent questions to the management officers in each country asking them the reasons for the differences. We summarized their responses in the report. To obtain information on the focus of recommendations made by State’s rightsizing office, we reviewed 181 rightsizing reviews completed by M/PRI between 2005 and 2011. During our overseas site visits to the Czech Republic, Bosnia and Herzegovina, and Kuwait, we discussed the rightsizing recommendations with the relevant section heads at each post. We also discussed rightsizing recommendations with the management officers in the other 11 missions that we selected for more in-depth review. To assess the extent to which State uses its rightsizing reviews and tracks implementation of recommendations, we reviewed agency documents, including M/PRI’s annual report to Congress, and interviewed officials from State and non-State agencies, both in Washington, D.C., and at overseas posts to obtain information on how officials use the reviews and monitor implementation. In addition, we reviewed our prior work on rightsizing, embassy construction, and guidance on internal controls. In addition to the individual named above, Ming Chen, Debbie Chung, Lynn Cothern, Martin de Alteriis, Mark Dowling, Etana Finkler, Leslie Holen (Assistant Director), Heather Latta, Lisa Reijula, and Christina Werth made key contributions to this report.
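The linear extrapolation and the five difference categories described in this appendix can be sketched in a few lines of Python. This is an illustrative sketch only: the function names are invented, GAO performed the analysis in a spreadsheet rather than in code, the staffing figures in the worked example are hypothetical, and the base used for the percentage categories (assumed here to be the actual staffing level) is not specified in the report.

```python
def extrapolate_staffing(base_year, base_level, proj_year, proj_level, target_year=2011):
    """Linearly interpolate a staffing level for target_year between the
    review's base-year level and its projection-year level."""
    annual_change = (proj_level - base_level) / (proj_year - base_year)
    return base_level + annual_change * (target_year - base_year)

def classify_difference(actual, projected):
    """Bucket a projection into the report's five difference categories.
    Overprojection means the mission had fewer staff than projected."""
    pct = (projected - actual) / actual * 100  # percentage base is an assumption
    if abs(pct) <= 10:
        return "within 10 percent"
    direction = "overprojection" if pct > 0 else "underprojection"
    magnitude = "10 to 50 percent" if abs(pct) <= 50 else "more than 50 percent"
    return f"{magnitude} {direction}"

# Worked example from the report: base year 2008, projection year 2013,
# so the annual change is one-fifth of the projected change, applied for
# the 3 years from 2008 through 2011.
estimate_2011 = extrapolate_staffing(2008, 100, 2013, 150)  # 100 + 3 * 10 = 130.0
```

Under these assumptions, a mission with an extrapolated 2011 projection of 130 and an actual level of 100 would fall into the "10 to 50 percent overprojection" category.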
After the 1998 bombings of two U.S. embassies, a U.S. government panel determined that staffing levels had not been adjusted to reflect changing missions, requirements, and security concerns. In 2004, Congress mandated the establishment of the Office of Rightsizing within the Department of State. The office reviews levels of overseas staffing for all U.S. government agencies at every post every 5 years, projects future staffing levels it determines are appropriate to meet mission needs, and recommends ways to improve efficiency. Rightsizing is intended to align the number and location of staff with foreign policy priorities, security, and other constraints. GAO examined (1) the consistency of State’s approach to conducting rightsizing reviews and how its projections compare to actual staffing levels; (2) the focus of State’s rightsizing recommendations; and (3) the extent to which State uses its rightsizing reviews and monitors implementation of recommendations. GAO reviewed 181 rightsizing reviews, compared projections in reviews with current actual staffing data, and interviewed officials from State and other agencies in Washington, D.C., and at overseas posts. The Department of State (State) has improved the consistency of its rightsizing approach across overseas posts. However, differences between future staffing levels it projects are appropriate to meet mission needs and actual staffing levels still exist due to unanticipated events and other factors. GAO reported in 2006 that State’s Office of Management Policy, Rightsizing and Innovation (M/PRI) had not been conducting its rightsizing reviews consistently. Some reviews discussed various rightsizing elements, such as outsourcing, while others did not. State has since improved the consistency of its reviews by developing a variety of methodological tools and a standard template which it applies to each post. 
GAO found that over half of the 144 rightsizing projections analyzed were within 10 percent of actual staffing levels as of December 2011. In contrast, over 40 percent of the posts had staffing level differences of over 10 percent. Unanticipated events and other factors, such as changes in policies, contributed to these differences. For example, according to the management officer in Mozambique, M/PRI projected staffing increases as a result of the President’s program to combat AIDS, but the actual funding level for the program was much higher than anticipated. This resulted in higher actual staffing levels for both U.S. direct-hire and locally-employed staff positions. Rightsizing reviews contain recommendations to improve post operations and eliminate duplicative services and positions. To develop its recommendations, M/PRI reviews the levels of all staff at posts and seeks input from State and non-State agencies. M/PRI relies on non-State agencies to determine independently their own staffing needs. Many of State’s recommendations for a specific post focus on the level of State’s administrative or management staff, rather than State’s programmatic staff or staff from other agencies. Some State officials stated that the activities of administrative and management staff are better suited to quantitative measurement while the qualitative nature of programmatic staff activities, such as discussing policy issues with foreign diplomatic counterparts, is more difficult to measure. State’s use of rightsizing reviews varies, and State does not follow up on review recommendations. State’s Bureau of Overseas Buildings Operations uses the staffing projections in rightsizing reviews to plan the size of new embassy compounds. Further, M/PRI uses rightsizing reviews when it assesses requests by State or other agencies to add staff to overseas posts, although the final decision is made by the respective Chief of Mission.
In addition, Bureau of Diplomatic Security officials said that they incorporate rightsizing reviews into their annual staffing planning exercise, and some post officials said that they refer to rightsizing reviews to support staffing changes. Some U.S. officials stated that undertaking the rightsizing process acts as a check on growth in overseas staffing levels. However, some State regional bureau officials said that they do not actively use the reviews except as a historical overview of staffing, and some post officials said that they do not use the reviews at all. State often uses documents other than rightsizing reviews for decisions in areas including staffing levels. Finally, State does not monitor the implementation of rightsizing review recommendations and has not designated an office with responsibility for their implementation. State issues an annual report to Congress in which it lists the rightsizing reviews it has completed, number of positions recommended for elimination, and potential cost savings; the report does not address whether recommendations have been implemented. Because State does not track or report on the implementation of recommendations, State cannot determine if rightsizing reviews are achieving their purpose of aligning overseas staffing levels with U.S. priorities. GAO recommends that the Secretary of State designate the appropriate entities to ensure that rightsizing recommendations are addressed and to track and report the actions taken to implement the recommendations. State described a number of actions it intends to take that could address GAO’s recommendations.
Social Security’s projected long-term financing shortfall stems primarily from the fact that people are living longer and having fewer children. As a result, the number of workers paying into the system for each beneficiary is projected to decline. This demographic trend is occurring or will occur in all OECD countries. Although the number of workers for every elderly person in the U.S. has been relatively stable over the past few decades, it has already fallen substantially in other developed countries. The number of workers for every elderly person in the U.S. is projected to fall from 4.1 in 2005 to 2.9 in 2020. In nine of the OECD countries, this number has already fallen below the level projected for the U.S. in 2020. This rise in the share of the elderly in the population could have significant effects on countries’ economies, particularly during the period from 2010 to 2030. These effects may include slower economic growth and increased costs for aging-related government programs. Historically, developed countries have relied on some form of a PAYG program and have used a variety of approaches to reform their national pension systems. In many cases, these approaches provide a basic or minimum benefit as well as a benefit based on the level of a worker’s earnings. Several countries are preparing to pay future benefits by either supplementing or replacing their PAYG programs. For example, some have set aside and invested current resources in a national pension reserve fund to partially pre-fund their PAYG program. Some have established fully funded individual accounts. These are not mutually exclusive types of reform. In fact, many countries have undertaken more than one of the following types of reform: Adjustments to existing pay-as-you-go systems. Typically, these are designed to create a more sustainable program by increasing contributions or decreasing benefits, or both, while preserving the basic structure of the system. 
Measures include phasing in higher retirement ages, equalizing retirement ages across genders, and increasing the earnings period over which initial benefits are calculated. Some countries have created notional defined contribution (NDC) accounts for each worker, which tie benefits more closely to each worker’s contributions and to factors such as the growth rate of the economy. National pension reserve funds. These are set up to partially pre-fund PAYG national pension programs. Governments commit to make regular transfers to these investment funds from, for example, budgetary surpluses. To the extent that these contribute to national saving, they reduce the need for future borrowing or large increases in contribution rates to pay scheduled benefits. Funds can be invested in a combination of government securities and domestic as well as foreign equities. Individual accounts. These fully funded accounts are administered either by employers or the government or designated third parties. The level of retirement benefits depends largely on the amount of each person’s contributions into the account during their working life, investment earnings, and the amount of fees they are required to pay. We are applying GAO’s Social Security reform criteria to the experiences of countries that are members of the OECD as well as Chile, which pioneered individual accounts in 1981. We are assessing both the extent to which another country’s circumstances are similar enough to those in the U.S. to provide a useful example and the extent to which particular approaches to pension reform were considered to be successful. Countries have different starting points, including unique economic and political environments. Moreover, availability of other sources of retirement income, such as occupation-based pensions, varies greatly. Recognizing this, GAO uses three criteria for evaluating pension reforms: Financing Sustainable Solvency.
We are looking at the extent to which particular reforms influence the funds available to pay benefits and how the reforms affect the ability of the economy, the government’s budget, and national savings to support the program on a continuing basis. Balancing Equity and Adequacy. We are examining the relative balance struck between the goals of allowing individuals to receive a fair return on their contributions and ensuring an adequate level of benefits to prevent dependency and poverty. Implementing and Administering Reforms. We are considering how easily a reform is implemented and administered and how the public is educated concerning the reform. Because each country is introducing reforms in a unique demographic, economic, and political context, these factors will likely affect reform choices and outcomes. For instance, several European countries we are reviewing have strong occupation-based pension programs that contribute to retirement income security. In addition, some countries had more generous national pensions and other programs supporting the elderly than others. All countries also provide benefits for survivors and the disabled; often these are funded separately from old age benefit programs. Some countries are carrying out reforms against a backdrop of broader national change. For example, Hungary and Poland were undergoing large political and economic transformations as they reformed their national pension systems. All of these issues should be considered when drawing lessons. In addition to the adjustments that countries have made to their existing PAYG systems, many countries have undergone other changes as well, indicating that change may not be a one-time experience. (See table 1.) Understanding the outcomes of a country’s reform requires us to look at all of the changes a country has made.
The experiences of the countries that have adjusted their existing PAYG national pension programs highlight the importance of considering how modifications will affect the program’s financial sustainability, its distribution of benefits, the incentives it creates, and the extent to which the public understands the new provisions. To reconcile PAYG program revenue and expenses, nearly all the countries we studied have decreased benefits and most have also increased contributions, often in part by increasing retirement ages. Generally, countries with national pension programs that are relatively financially sustainable have undertaken a package of several far-reaching adjustments. The countries we are studying increased contributions to PAYG programs by raising contribution rates, increasing the range of earnings or kinds of earnings subject to contribution requirements, or increasing the retirement age. Most of these countries increased contribution rates for some or all workers. Canada, for example, increased contributions to its Canada Pension Plan from a total of 5.85 percent to 9.9 percent of wages, half paid by employers and half by employees. Several countries, including the UK, increased contributions by expanding the range of earnings subject to contribution requirements. Nearly all of the countries we are studying decreased scheduled benefits, using a wide range of techniques. Some techniques reduce the level of initial benefits; others reduce the rate at which benefits increase during retirement or adjust benefits based on retirees’ financial means. Increased years of earnings. To reduce initial benefits, several countries increased the number of years of earnings they consider in calculating an average lifetime earnings level. France previously based its calculation on 10 years, but increased this to 25 years for its basic public program. Increased minimum years of contributions.
Another approach is to increase the minimum number of years of contributions required to receive a full benefit. France increased the required number of years from 37.5 to 40 years. Belgium is increasing its minimum requirement for early retirement from 20 to 35 years. Changed formula for calculating benefits. Another approach to decreasing the initial benefit is to change the formula for adjusting prior years’ earnings. Countries with traditional PAYG programs all make some adjustment to the nominal amount of wages earned previously to reflect changes in prices or wages over the intervening years. Although most of the countries we are studying use some kind of average wage index, others, including Belgium and France, have adopted the use of price indices. The choice of a wage or price index can have quite different effects depending on the rate at which wages increase in comparison to prices. We see variation in the extent to which wages outpace prices over time and among countries. Changed basis for determining year-to-year increases in benefits. In many of the countries we are studying, the rate at which monthly retirement benefits increase from year to year during retirement is based on increases in prices, which generally rise more slowly than earnings. Others, including Denmark, Ireland, Luxembourg, and the Netherlands, use increases in earnings or a combination of wage and price indices. Hungary, for example, changed from the use of a wage index to the Swiss method—an index weighted 50 percent on price changes and 50 percent on changes in earnings. Implemented provisions that provide a closer link between pension contributions and benefits. Countries that have adopted this approach stop promising a defined level of benefits and instead keep track of notional contributions into workers’ NDC accounts. Unlike individual accounts, these notional defined accounts are not funded.
Current contributions to the program continue to be used largely to pay benefits to current workers, while at the same time they are credited to individuals’ notional accounts. When these programs include adjustments that link benefits to factors such as economic growth, longevity, and/or the ratio of workers to retirees, they may contribute to the financial sustainability of national pension systems. Several countries, such as Sweden and the UK, have undertaken one or more of these adjustments to their PAYG programs and have achieved, or are on track to achieve relative financial sustainability. Others, including Japan, France, and Germany, may need additional reforms to fund future benefit commitments. All of the countries have included in their reforms provisions to ensure adequate benefits for lower-income groups and put into place programs designed to ensure that all qualified retirees have a minimum level of income. Most do so by providing a targeted means-tested program that provides more benefits to retirees with limited financial means. Two countries—Germany and Italy—provide retirees access to general social welfare programs that are available to people of all ages rather than programs with different provisions for elderly people. Twelve countries use another approach to providing a safety net: a basic retirement benefit. The level of the benefit is either a given amount per month for all retirees or an amount based on years of contributions to the program. In Ireland, for example, workers who contribute to the program for a specified period receive a minimum pension. Chile set a minimum pension equal to the minimum wage—about one-quarter of average earnings as of 2005. In addition, several of the countries we are studying give very low-income workers credit for a minimum level contribution. Other countries give workers credit for years in which they were unemployed, pursued postsecondary education, or cared for dependents. 
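The indexation rules discussed above, under which benefits track prices, wages, or Hungary's 50/50 "Swiss method" blend, can be illustrated with a short sketch. The function and the growth rates in the example are hypothetical, chosen only to show how the three rules diverge.

```python
def indexed_benefit(benefit, price_growth, wage_growth, method):
    """Apply one year's benefit indexation under three common rules.

    price: benefits track inflation only.
    wage:  benefits track earnings growth.
    swiss: 50 percent weight on price changes and 50 percent on wage
           changes, the blend Hungary adopted.
    """
    if method == "price":
        rate = price_growth
    elif method == "wage":
        rate = wage_growth
    elif method == "swiss":
        rate = 0.5 * price_growth + 0.5 * wage_growth
    else:
        raise ValueError(f"unknown indexation method: {method}")
    return benefit * (1 + rate)


# Hypothetical year with 2 percent price inflation and 4 percent wage growth:
# a benefit of 1,000 becomes 1,020 (price), 1,040 (wage), or 1,030 (swiss).
results = {m: indexed_benefit(1000, 0.02, 0.04, m) for m in ("price", "wage", "swiss")}
```

Because wages generally outpace prices over time, the gap between the rules compounds: price indexation holds down costs, wage indexation preserves benefits relative to living standards, and the Swiss method splits the difference.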
In selecting among the many reform options, policy makers need to strike a careful balance among the following objectives: provide a safety net, contain costs, and maintain incentives to work and save. Costs can be high if a generous basic pension is provided to all eligible retirees regardless of their income. On the other hand, means-tested benefits can diminish incentives to work and save. The UK provides both a basic state pension and a means-tested pension credit. Concerned about the decline in the proportion of preretirement earnings provided by the basic state pension, some have advocated making it more generous. Others argue that focusing safety-net spending on those in need enables the government to alleviate pensioner poverty in a cost-effective manner. However, a guaranteed minimum income could reduce some people’s incentive to save. In view of this disincentive, the UK adopted an additional means-tested benefit that provides higher benefits for retirees near the minimum income level. This benefit, called the savings credit, allows low-income retirees near the minimum pension level to retain a portion of their additional income. However, any loss of income due to means-testing still diminishes incentives to save. Without changes to pension rules, the proportion of pensioners eligible for means-tested income is expected to increase to include almost 65 percent of retiree households by 2050. The extent to which new provisions are implemented, administered, and explained to the public may affect the outcome of the reform. Poland, for example, adopted NDC reform in 1999, but the development of a data system to track contributions has been problematic. As of early 2004, the system generated statements indicating contributions workers made during 2002, but there was no indication of what workers contributed in earlier years or to previous pension programs.
Without knowing how much they have in their notional defined accounts, workers may have a difficult time planning for their retirement. Some governments have had limited success in efforts to educate workers about changes in provisions that will affect their retirement income. For example, a survey of women in the UK showed that only about 43 percent of women who will be affected by an increase in the retirement age knew the age that applied to them. Another type of pension reform is the accumulation of reserves in national pension funds, which can contribute to the system’s financial sustainability depending on when the funds are created or reformed and how they are managed. Countries that chose to partially pre-fund their PAYG programs decades ago have had more time to amass substantial reserves, reducing the risk that they will not meet their pension obligations. A record of poor fund performance has led some countries to put reserve funds under the administration of relatively independent managers with the mandate to maximize returns without undue risk. Establishing reserve funds ahead of demographic changes—well before the share of elderly in the population increases substantially—makes it more likely that enough assets will accumulate to meet future pension obligations. In countries such as Sweden, Denmark, and Finland, which have had long experiences with partial pre-funding of PAYG programs, important reserves have already built up. These resources are expected to make significant contributions to the long-term finances of national pension programs. Other countries that have recently created pension reserve funds for their pension program have a tighter time frame to accumulate enough reserves before population aging starts straining public finances. In particular, the imminent retirement of the baby-boom generation is likely to make it challenging to continue channeling a substantial amount of resources to these funds. 
France, for example, relies primarily on social security surpluses to finance its pension reserve fund set up in 1999, but given its demographic trends, may be able to do so only in the next few years. Similarly, Belgium and the Netherlands plan on maintaining a budget surplus, reducing public debt and the interest payments associated with the debt, and transferring these earmarked resources to their reserve funds. However, maintaining a surplus will require sustained budgetary discipline as a growing number of retirees begins putting pressure on public finances. Examples from several countries reveal that pre-funding with national pension reserve funds is less likely to be effective in helping ensure that national pension programs are financially sustainable if these funds are used for purposes other than supporting the PAYG program. Some countries have used funds to pursue industrial, economic, or social objectives. For example, Japan used its reserve fund to support infrastructure projects, provide housing and education loans, and subsidize small and medium enterprises. As a result, Japan compromised to some extent the principal goal of pre-funding. Past experiences have also highlighted the need to mitigate certain risks that pension reserve funds face. One kind of risk has to do with the fact that asset build-up in a fund may lead to competing pressures for tax cuts and spending increases, especially when a fund is integrated in the national budget. For example, governments may view fund resources as a ready source of credit. As a result, they may be inclined to spend more than they would otherwise, potentially undermining the purpose of pre-funding. Ireland alleviated the risk that its reserve fund could raise government consumption by prohibiting investment of fund assets in domestic government bonds. Another risk is the pressure that groups may exert on the investment choices of a pension reserve fund, potentially lowering returns.
For example, Canada and Japan have requirements to invest a minimum share of their fund portfolio in domestic assets, restricting holdings of foreign assets to stimulate economic development at home. Funds in several countries have also faced pressure to adopt ethical investment criteria, with possible negative impacts on returns. In recent years, some countries have taken steps to ensure that funds are managed to maximize returns, without undue risk. Canada, for example, has put its fund under the control of an independent Investment Board operating at arm’s length from the government since the late 1990s. Several countries, including New Zealand, have taken steps to provide regular reports and more complete disclosures concerning pension reserve funds. Countries that have adopted individual account programs—which may also help pre-fund future retirement income—offer lessons about financing the existing PAYG pension program as the accounts are established. Some countries manage this transition period by expanding public debt, building up budget surpluses in advance of implementation, reducing or eliminating the PAYG program, or some combination of these. In addition, administering individual accounts requires effective regulation and supervision of the financial industry to protect individuals from avoidable investment risks. Educating the public is also important as national pension systems become more complex. It is important to consider how different approaches to including individual accounts may affect the short-term and long-term financing of the national pension system and the economy as a whole. A common challenge faced by countries that adopt individual accounts is how to pay for both a new funded pension and an existing PAYG pension simultaneously, known as transition costs.
Countries will encounter transition costs depending on whether the individual accounts redirect revenue from the existing PAYG program, the amount of revenue redirected, and how liabilities under the existing PAYG program are treated. The countries we are examining offer a range of approaches for including individual accounts and dealing with the prospective transition costs. Australia and Switzerland avoided transition costs altogether by adding individual accounts to their existing national pension systems, which are modest relative to those in the other countries we are studying. Some countries diverted revenue from the existing PAYG program to the individual accounts. The resulting shortfall reflects, in part, the portion of the PAYG program being replaced with individual accounts and the amount of PAYG revenue being redirected to fund the accounts. For example, transition costs may be lower in countries such as Sweden or Denmark, where the contributions to individual accounts are 2.5 percent and 1 percent of covered earnings, respectively, than for Poland or Hungary, which replaced a larger portion of the PAYG program. All of the countries we are reviewing also made changes to their PAYG program that were meant to help reduce transition costs, such as increasing taxes or decreasing benefits. In addition, Chile built a surplus in anticipation of major pension reform, and Sweden had large budget surpluses in place prior to establishing individual accounts. Countries also transfer funds from general budget revenues to help pay benefits to current and near-retirees, expanding public borrowing. If individual accounts are financed through borrowing they will not positively affect national saving until the debt is repaid, as contributions to individual accounts are offset by increased public debt. For example, Poland’s debt is expected to exceed 60 percent of GDP in the next few years in part because of its public borrowing to pay for the movement to individual accounts.
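The first-order arithmetic behind these transition costs can be sketched as follows. The payroll base and rates below are hypothetical and serve only to show why redirecting a larger contribution share into individual accounts leaves a larger annual PAYG shortfall.

```python
def payg_shortfall(covered_earnings, payg_rate, redirected_rate):
    """Annual PAYG revenue forgone when part of the contribution rate
    is redirected into funded individual accounts.

    Benefits to current retirees still must be paid, so the redirected
    amount is the first-order annual transition cost. This ignores
    offsetting benefit cuts, tax increases, and interest on borrowing.
    """
    if redirected_rate > payg_rate:
        raise ValueError("cannot redirect more than the total contribution rate")
    return covered_earnings * redirected_rate


# Hypothetical economy with 1 trillion in covered earnings and a 12 percent
# PAYG contribution rate: redirecting 2.5 percent of earnings forgoes
# 25 billion a year, while redirecting 1 percent forgoes 10 billion.
larger_redirect = payg_shortfall(1e12, 0.12, 0.025)
smaller_redirect = payg_shortfall(1e12, 0.12, 0.01)
```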
It is sometimes difficult for countries to predict their transition costs. In particular, countries that allow workers to opt in or out of individual account programs have had difficulty estimating costs. For example, Hungary and Poland experienced higher than anticipated enrollment from current workers in their individual account programs, leaving the existing PAYG program with less funding than planned. As a result, both countries had to make subsequent changes to their individual account and PAYG programs. Countries adopting individual accounts as part of their national pension system have had to make trade-offs between giving workers the opportunity to maximize returns in their accounts and ensuring that benefits will be adequate for all participants. Some countries set a guaranteed rate of return to reduce certain investment risks and help ensure adequacy of benefits. These guarantees may, however, result in limited investment diversification with a potentially negative impact on returns. In Chile, for example, fund managers’ performance is measured against the returns of other funds. This has resulted in a “herding” effect because funds hold similar portfolios, reducing meaningful choice for workers. All the countries with individual accounts provide some form of a minimum guaranteed benefit so retirees will have at least some level of income. Some experts believe that a minimum pension guarantee could create a moral hazard whereby individuals may make risky investment decisions, minimize voluntary contributions, or, as in the case of Australia, where the minimum guarantee is means-tested, spend down their retirement assets quickly. It is important to consider the payout options available from individual accounts, as these can also have substantial effects on adequacy of income throughout retirement. For example, an annuity payout option can help to ensure that individuals will not outlive their assets in retirement.
However, purchasing an annuity can leave some people worse off if, for example, the annuities market is not fully developed, premiums are high, or inflation erodes the purchasing power of benefits. Several countries also allow for phased withdrawals, in some cases with restrictions, helping to mitigate the risk of individuals outliving their assets and becoming reliant on the government’s basic or safety-net pension. Some countries offer a lump-sum payment under certain circumstances, such as small account balances, and Australia allows a full lump-sum payout for all retirees. Important lessons can be learned regarding the administration of individual accounts, including the need for effective regulation and supervision of the financial industry to protect individuals from avoidable investment risks. Some countries have expanded their permitted investment options to include foreign investments and increased the percentage of assets that can be invested in private equities. The experiences of countries we are studying also indicate the importance of keeping administrative fees and charges under control. The fees that countries permit pension funds to charge can have a big influence on the amount of income retirees receive from their individual accounts. Several countries have limits on the level and types of fees providers can charge. Additionally, the level of fees should take into consideration the potential impact not only on individuals’ accounts, but also on fund managers. In the UK, for example, regulations capping fees may have discouraged some providers from offering pension funds. To keep costs low, Sweden aggregates individuals’ transactions to realize economies of scale. Some countries’ experiences highlighted weaknesses in regulations on how pension funds can market to individuals. The UK’s and Poland’s regulations did not prevent problems in marketing and sales. 
Poland experienced sales problems, in part because it had inadequate training and standards for its sales agents, which may have contributed to agents’ use of questionable practices to sign up individuals. The UK had a widely publicized “mis-selling” scandal involving millions of investors. Many opened individual accounts when they would more likely have been better off retaining their occupation-based pension. Insurance companies were ordered to pay roughly $20 billion in compensation. Countries’ individual account experiences reveal pitfalls to be avoided during implementation. For example, Hungary, Poland, and Sweden had difficulty getting their data management systems to run properly and continue to experience a substantial lag time in recording contributions to individuals’ accounts. In addition, Hungary and Poland did not have an annuities market that offered the type of annuity required by legislation. Education becomes increasingly important as the national pension systems become more complex. It is particularly important for workers who may have to make a one-time decision about joining the individual account program. Several countries require disclosure statements about the status of a pension fund, and some provide annual statements. To help individuals choose a fund manager, one important component of these statements should be the disclosure of fees charged. Some countries have done a better job of providing fund performance information than others. For example, Australia requires its fund providers to inform members through annual reports clearly detailing benefits, fees and charges, investment strategy, and the fund’s financial position. In contrast, Hungary did not have clear rules for disclosing operating costs and returns, making it hard to compare fund performance. Demographic challenges and fiscal pressure have necessitated national pension reform in many countries.
Though one common goal behind reform efforts everywhere is to improve financial sustainability, countries have adopted different approaches depending on their existing national pension system and the prevailing economic and political conditions. This is why reforms in one country are not easily replicated in another, or if they are, may not lead to the same outcome. Countries have different emphases, such as benefit adequacy or individual equity; as a result, what is perceived to be successful in one place may not be viewed as a viable option somewhere else. Although some pension reforms were undertaken too recently to provide clear evidence of results, the experiences of other countries may suggest some lessons for U.S. deliberations on Social Security reform. Some of these lessons are common to all types of national pension reform and are consistent with findings in previous GAO studies. Restoring long-term financial balance invariably involves cutting benefits, raising revenues, or both. Additionally, with early reform, policy makers can avoid the need for more costly and difficult changes later. Countries that undertook important national pension reform well before undergoing major demographic changes have achieved, or are close to achieving, financially sustainable national pension systems. Others are likely to need more significant steps because their populations are already aging. No matter what type of reform is undertaken, the sustainability of a pension system will depend on the health of the national economy. As the number of working people for each retiree declines, average output per worker must increase in order to sustain average standards of living. Reforms that encourage employment and saving, offer incentives to postpone retirement, and promote growth are more likely to produce a pension system that delivers adequate retirement income and is financially sound for the long term. 
Regardless of a country’s approach, its institutions need to effectively operate and supervise the different aspects of reform. A government’s capacity to implement and administer the publicly managed elements of reform and its ability to regulate and oversee the privately managed components are crucial. In addition, education of the public becomes increasingly important as workers and retirees face more choices and the national pension system becomes more complex. This is particularly true in the case of individual account reforms, which require higher levels of financial literacy and personal responsibility. In nearly every country we are studying, debate continues about alternatives for additional reform measures. It is clearly not a process that ends with one reform. This may be true in part because success can only be measured over the long term, but problems may arise and need to be dealt with in the short term. The positive lessons from other countries’ reforms may only truly be clear in years to come. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I’d be happy to answer any questions you may have. For further information regarding this testimony, please contact Barbara D. Bovbjerg, Director, Education, Workforce, and Income Security Issues, at (202) 512-7215. Alicia Puente Cackley, Assistant Director; Benjamin P. Pfeiffer; Thomas A. Moscovitch; Nyree M. Ryder; Seyda G. Wentworth; and Corinna A. Nicolaou also contributed to this report. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Many countries, including the United States, are grappling with demographic change and its effect on their national pension systems. The number of workers for each retiree is falling in most developed countries, straining the finances of national pension programs, particularly where contributions from current workers fund payments to current beneficiaries--known as a "pay-as-you-go" (PAYG) system. Although demographic and economic challenges are less severe in the U.S. than in many other developed countries, projections show that the Social Security program faces a long-term financing problem. Because some countries have already undertaken national pension reform efforts to address demographic changes similar to those occurring in the U.S., we may draw lessons from their experiences. The Chairman of the Subcommittee on Social Security of the House Committee on Ways and Means asked GAO to testify on preliminary results of ongoing work on lessons learned from other countries' experiences reforming national pension systems. GAO focuses on (1) adjustments to existing PAYG national pension programs, (2) the creation or reform of national pension reserve funds to partially pre-fund PAYG pension programs, and (3) reforms involving the creation of individual accounts. Based on preliminary work, all countries in the Organisation for Economic Co-operation and Development (OECD), as well as Chile, have, to some extent, reformed their national pension systems, consistent with their different economic and political conditions. While reforms in one country may not be easily replicated in another, their experiences may nonetheless offer lessons for the U.S. Countries' experiences adjusting PAYG national pension programs highlight the importance of considering how modifications will affect the program's financial sustainability, its distribution of benefits, the incentives it creates, and public understanding of the new provisions. 
Nearly all of the countries we are studying reduced benefits, and most have also increased contributions, often by increasing statutory retirement ages. Countries included provisions to ensure adequate benefits for lower-income groups, though these can lessen incentives to work and save for retirement. Also, how well new provisions are implemented, administered, and explained to the public may affect the outcome of the reform. Countries with national pension reserve funds designed to partially pre-fund PAYG pension programs provide lessons about the importance of early action and sound governance. Funds that have been in place for a long time provide significant reserves to strengthen the finances of national pension programs. Countries that insulate national reserve funds from being directed to meet other social and political objectives are better equipped to fulfill future pension commitments. In addition, regular disclosure of fund performance supports sound management and administration, and contributes to public education and oversight. Countries that have adopted individual account programs--which may also help pre-fund future retirement income--offer lessons about financing the existing PAYG pension program as the accounts are established. Countries that have funded individual accounts by directing revenue away from the PAYG program while continuing to pay benefits to PAYG program retirees have expanded public debt, built up budget surpluses in advance, cut back or eliminated the PAYG programs, or some combination of these. Because no individual account program can entirely protect against investment risk, some countries have adopted individual accounts as a relatively small portion of their national pension system. Others set minimum rates of return or provide a minimum benefit, which may, however, limit investment diversification and individuals' returns. 
To mitigate high fees, which can erode small account balances, countries have capped fees, centralized the processing of transactions, or encouraged price competition. Although countries have attempted to educate individuals about reforms and how their choices may affect them, some studies indicate that many workers have limited knowledge about their retirement prospects.
Under NEPA, federal agencies are to evaluate the potential environmental effects of projects they are proposing by preparing either an Environmental Assessment (EA) or a more detailed Environmental Impact Statement (EIS), assuming no Categorical Exclusion (CE) applies. Agencies may prepare an EA to determine whether a proposed project is expected to have a potentially significant impact on the human environment. If prior to or during the development of an EA, the agency determines that the project may cause significant environmental impacts, an EIS should be prepared. However, if the agency, in its EA, determines there are no significant impacts from the proposed project or action, then it is to prepare a document—a Finding of No Significant Impact—that presents the reasons why the agency has concluded that no significant environmental impacts will occur if the project is implemented. An EIS is a more detailed statement than an EA, and NEPA implementing regulations specify requirements and procedures—such as providing the public with an opportunity to comment on the draft document—applicable to the EIS process that are not mandated for EAs. If a proposed project fits within a category of activities that an agency has already determined normally does not have the potential for significant environmental impacts—a CE—and the agency has established that category of activities in its NEPA implementing procedures, then it generally need not prepare an EA or EIS. The agency may instead approve projects that fit within the relevant category by using one of its established CEs. For example, the Bureau of Land Management (BLM) within the Department of the Interior (Interior) has CEs in place for numerous types of activities, such as constructing nesting platforms for wild birds and constructing snow fences for safety. 
For a project to be approved using a CE, the agency must determine whether any extraordinary circumstances exist in which a normally excluded action may have a significant effect. Figure 1 illustrates the general process for implementing NEPA requirements. Private individuals or companies may become involved in the NEPA process when a project they are developing needs a permit or other authorization from a federal agency to proceed, such as when the project involves federal land. For example, a company may apply for such a permit in constructing a pipeline crossing federal lands; in that case, the agency that is being asked to issue the permit must evaluate the potential environmental effects of constructing the pipeline under NEPA. The private company or developer may in some cases provide environmental analyses and documentation or enter into an agreement with an agency to pay a contractor for the preparation of environmental analyses and documents, but the agency remains ultimately responsible for the scope and content of the analyses under NEPA. CEQ within the Executive Office of the President oversees the implementation of NEPA, reviews and approves federal agency NEPA procedures, and issues regulations and guidance documents that govern and guide federal agencies’ interpretation and implementation of NEPA. The Environmental Protection Agency (EPA) also plays two key roles in other agencies’ NEPA processes. First, EPA reviews and publicly comments on the adequacy of each draft EIS and the environmental impacts of the proposed actions reviewed in the EIS. If EPA determines that the action is environmentally unsatisfactory, it is required by law to refer the matter to CEQ. Second, EPA maintains a national EIS filing system. 
Federal entities must publish in the Federal Register a Notice of Intent to prepare an EIS and file their draft and final EISs with EPA, which publishes weekly notices in the Federal Register listing EISs available for public review and comment. CEQ’s regulations implementing NEPA require federal agencies to solicit public comment on draft EISs. When the public comment period is finished, the agency proposing to carry out or permitting a project is to analyze comments, conduct further analysis as necessary, and prepare the final EIS. In the final EIS, the agency is to respond to the substantive comments received from other government agencies and the public. Sometimes a federal agency must prepare a supplemental analysis to either a draft or final EIS if it makes substantial changes in the proposed action that are relevant to environmental concerns, or if there are significant new circumstances or information relevant to environmental concerns. Further, in certain circumstances, agencies may—through “incorporation by reference,” “adoption,” or “tiering”—use another analysis to meet some or, in the case of adoption, all of the environmental review requirements of NEPA. Unlike other environmental statutes, such as the Clean Water Act or the Clean Air Act, no individual agency has enforcement authority with regard to NEPA’s implementation. Also, unlike these other laws, while NEPA imposes procedural requirements, it does not establish substantive standards. This absence of an enforcement mechanism is sometimes cited as the reason that litigation has been chosen as an avenue by individuals and groups that disagree with how an agency meets NEPA requirements for a given project. For example, a group may allege that an EIS is inadequate, or that the environmental impacts of an action will in fact be significant when an agency has determined they are not. Critics of NEPA have stated that those who disapprove of a federal project will use NEPA as the basis for litigation to delay or halt that project. Others argue that litigation only results when agencies do not comply with NEPA’s procedural requirements. 
Governmentwide data on the number and type of most NEPA analyses are not readily available, as data collection efforts vary by agency (see app. II for a summary of federal NEPA data collection efforts). Agencies do not routinely track the number of EAs or CEs, but CEQ estimates that EAs and CEs comprise most NEPA analyses. EPA publishes and maintains governmentwide information on EISs. Many agencies do not routinely track the number of EAs or CEs. However, based on information provided to CEQ by federal agencies, CEQ estimates that about 95 percent of NEPA analyses are CEs, less than 5 percent are EAs, and less than 1 percent are EISs. These estimates were consistent with the information collected on projects funded by the American Recovery and Reinvestment Act of 2009 (Recovery Act). Projects requiring an EIS are a small portion of all projects but are likely to be high-profile, complex, and expensive. As the Congressional Research Service (CRS) noted in its 2011 report on NEPA, determining the total number of federal actions subject to NEPA is difficult, since most agencies track only the number of actions requiring an EIS. The percentages of EISs, EAs, and CEs vary by agency because of differences in project type and agency mission. For example, the Department of Energy (DOE) reported that 95 percent of its 9,060 NEPA analyses from fiscal year 2008 to fiscal year 2012 were CEs, 2.6 percent were EAs, and 2.4 percent were EISs or supplement analyses. Further, in June 2012, we reported that the vast majority of highway projects are processed as CEs, noting that the Federal Highway Administration (FHWA) within the Department of Transportation (DOT) estimated that approximately 96 percent of highway projects were processed as CEs, based on data collected in 2009. 
Representing the lowest proportion of CEs in the data available to us, the Forest Service reported that 78 percent of its 14,574 NEPA analyses from fiscal year 2008 to fiscal year 2012 were CEs, 20 percent were EAs, and 2 percent were EISs. Among the agencies we reviewed, DOE and Forest Service officials told us that CEs are likely underrepresented in their totals because agency systems do not track certain categories of CEs considered “routine” activities, such as emergency preparedness planning. For example, DOE officials stated that the department has two types of CEs: (1) those that are routine (e.g., administrative, financial, and personnel actions; information gathering, analysis, and dissemination), which are not tracked, and (2) those that are documented as required by DOE regulations. EPA publishes and maintains governmentwide information on EISs, updated when Notices of Availability for draft and final EISs are published in the Federal Register. CEQ and NAEP publish publicly available reports on EISs using EPA data. As shown in table 1, the three compilations of EIS data produce different totals. According to CEQ and EPA officials, the differences in EIS numbers shown in table 1 are likely due to different assumptions used to count the number of EISs and minor inconsistencies in the EPA data compiled for the CEQ and NAEP reports and for our analysis of EPA’s data. CEQ obtains the EIS data it reports based on summary totals provided by EPA. Occasionally, CEQ also gathers some CE, EA, and EIS data through its “data call” process, by which it aggregates information submitted by agencies that use different data collection mechanisms of varying quality. According to a January 2011 CRS report on NEPA, agencies track the total draft, final, and supplemental EISs filed, not the total number of individual federal actions requiring an EIS. In other words, agency data generally reflect the number of EIS documents associated with a project, not the number of projects. 
Four agencies—the Forest Service, BLM, FHWA, and the U.S. Army Corps of Engineers within the Department of Defense (DOD)—are generally the most frequent producers of EISs, accounting for 60 percent of the EISs in 2012, according to data in NAEP’s April 2013 report. As shown in table 2, these agencies account for over half of total draft and final EISs from 2008 through 2012, according to NAEP data. (NAEP, Annual NEPA Report 2012 of the National Environmental Policy Act (NEPA) Practice (April 2013).) Little information exists at the agencies we reviewed on the costs and benefits of completing NEPA analyses. We found that, with few exceptions, the agencies did not routinely track data on the cost of completing NEPA analyses, and that the cost associated with conducting an EIS or EA can vary considerably, depending on the complexity and scope of the project. Information on the benefits of completing NEPA analyses is largely qualitative. Complicating matters, agency activities under NEPA are hard to separate from other environmental review tasks under federal laws, such as the Clean Water Act and the Endangered Species Act; executive orders; agency guidance; and state and local laws. Little information exists on the cost of completing NEPA analyses. With few exceptions, the agencies we reviewed do not track the cost of completing NEPA analyses, although some of the agencies tracked information on NEPA time frames, which can be an element of project cost. In general, we found that the agencies we reviewed do not routinely track data on the cost of completing NEPA analyses. According to CEQ officials, CEQ rarely collects data on projected or estimated costs related to complying with NEPA. EPA officials also told us that there is no governmentwide mechanism to track the costs of completing EISs. Similarly, most of the agencies we reviewed do not track NEPA cost data. 
For example, Forest Service officials said that tracking the cost of completing NEPA analyses is not currently a feature of their NEPA data collection system. Complicating efforts to record costs, applicants may, in some cases, provide environmental analyses and documentation or enter into an agreement with the agency to pay for the preparation of NEPA analyses and documentation needed for permits issued by federal agencies. Agencies generally do not report costs that are “paid by the applicant” because these costs reflect business transactions between applicants and their contractors and are not available to agency officials. Two NEPA-related studies completed by federal agencies illustrate how difficult it is to extract NEPA cost data from agency accounting systems. An August 2007 Forest Service report on competitive sourcing for NEPA compliance stated that it is “very difficult to track the actual cost of performing NEPA. Positions that perform NEPA-related activities are currently located within nearly every staff group, and are funded by a large number of budget line items. There is no single budget line item or budget object code to follow in attempting to calculate the costs of doing NEPA.” Similarly, a 2003 study funded by FHWA evaluating the performance of environmental “streamlining” noted that NEPA cost data would be difficult to segregate for analysis. However, DOE tracks limited cost data associated with NEPA analyses. DOE officials told us that they track the funds the agency pays to contractors to prepare NEPA analyses but do not track other costs, such as the time spent by DOE employees. According to DOE data, the average payment to a contractor to prepare an EIS from calendar year 2003 through calendar year 2012 was $6.6 million, ranging from a low of $60,000 to a high of $85 million. DOE’s median EIS contractor cost was $1.4 million over that time period. 
More recently, DOE’s March 2014 NEPA quarterly report stated that for the 12 months that ended December 31, 2013, the median cost for the preparation of four EISs for which cost data were available was $1.7 million, and the average cost was $2.9 million. For context, a 2003 task force report to CEQ—the only available source of governmentwide cost estimates—estimated that an EIS typically cost from $250,000 to $2 million. In comparison, DOE’s payments to contractors to produce an EA ranged from $3,000 to $1.2 million with a median cost of $65,000 from calendar year 2003 through calendar year 2012, according to DOE data. In its March 2014 NEPA quarterly report, DOE stated that, for the 12 months that ended December 31, 2013, the median cost for the preparation of 8 EAs was $73,000, and the average cost was $301,000. For governmentwide context, the 2003 task force report to CEQ estimated that an EA typically costs from $5,000 to $200,000. DOE officials had no cost data on CEs but stated that the cost of a CE—which, in many cases, is for a “routine” activity, such as repainting a building—was generally much lower than the cost of an EA. Some governmentwide information is available on time frames for completing EISs—which can be one element of project cost—but few estimates exist for EAs and CEs because most agencies do not collect information on the number and type of NEPA analyses, and few guidelines exist on time frames for completing environmental analyses (see app. III for information on CEQ NEPA time frame guidelines). NAEP annually reports information on EIS time frames by analyzing information published by agencies in the Federal Register, with the Notice of Intent to complete an EIS as the “start” date, and the Notice of Availability for the final EIS as the “end” date. Our review did not identify other governmentwide sources of these data. 
Based on the information published in the Federal Register, NAEP reported in April 2013 that the 197 final EISs in 2012 had an average preparation time of 1,675 days, or 4.6 years—the highest average EIS preparation time the organization had recorded since 1997. From 2000 through 2012, according to NAEP, the total annual average governmentwide EIS preparation time increased at an average rate of 34.2 days per year. In addition, some agency officials told us that time frame measures for EISs may not account for up-front work that occurs before the Notice of Intent to produce an EIS—the “start” date typically used in EIS time frame calculations. DOT officials told us that the “start” date is unclear in some cases because of the large volume of project development and planning work that occurs before a Notice of Intent is issued. DOE officials made a similar point, noting that time frames are difficult to determine for many NEPA analyses because there is a large volume of up-front work that is not captured by standard time frame measures. According to technical comments from CEQ and federal agencies, to ensure consistency in its NEPA metrics, DOE measures EIS completion time from the date of publication of the Notice of Intent to the date of publication of the notice of availability of the final EIS. Further, according to a 2007 CRS report, a project may stop and restart for any number of reasons that are unrelated to NEPA or any other environmental requirement. For example, a multiyear time frame to complete a project may have been associated with funding issues, engineering requirements, changes in agency priorities, delays in obtaining nonfederal approvals, or community opposition to the project, to name a few. (CRS, The National Environmental Policy Act: Streamlining NEPA, RL33267 (Washington, D.C.: Dec. 6, 2007).) Officials at one agency we reviewed told us that their EAs are generally completed in about 1 month but that they may take up to 6 months depending on their complexity. 
In addition, DOT officials said that determining the start time of EAs and CEs is even more difficult than for EISs. The time for completing these can depend in large part on how much of the up-front work was already done as part of the preliminary engineering process and how many other environmental processes are involved (e.g., consultations under the Endangered Species Act). The little governmentwide information that is available on CEs shows that they generally take less time to complete than EAs. DOE does not track completion times for CEs, but agency officials stated that they usually take 1 or 2 days. Similarly, officials at Interior’s Office of Surface Mining reported that CEs take approximately 2 days to complete. In contrast, the Forest Service took an average of 177 days to complete CEs in fiscal year 2012, shorter than its average of 565 days for EAs, according to agency documents. The Forest Service documents its CEs with Decision Memos, which are prepared after all necessary consultations, reviews, and other determinations associated with a decision to implement a particular proposed project are completed. According to agency officials, information on the benefits of completing NEPA analyses is largely qualitative. We have previously reported that assessing the benefits of federal environmental requirements, including those associated with NEPA, is difficult because the monetization of environmental benefits often requires making subjective decisions on key assumptions. According to studies and agency officials, some of the qualitative benefits of NEPA include its role as a tool for encouraging transparency and public participation and in discovering and addressing the potential effects of a proposal in the early design stages to avoid problems that could end up taking more time and being more costly in the long run. Encouraging public participation. 
NEPA is intended to help government make informed decisions, encourage the public to participate in those decisions, and make the government accountable for its decisions. Public participation is a central part of the NEPA process, allowing agencies to obtain input directly from those individuals who may be affected by a federal action. DOE officials referred to this public comment component of NEPA as a piece of “good government architecture,” and DOD officials similarly described NEPA as a forum for resolving organizational differences by promoting interaction between interested parties inside and outside the government. Likewise, the National Park Service within Interior uses its Planning, Environment, and Public Comment (PEPC) system as a comprehensive information and public comment site for National Park Service projects, including those requiring NEPA analyses. (See CRS, The Role of the Environmental Review Process in Federally Funded Highway Projects: Background and Issues for Congress, R42479 (Washington, D.C.: Apr. 11, 2012).) Agencies have also pointed to environmental outcomes brought about through the NEPA process. DOE has published a document showing its NEPA “success stories.” In one example from this document, DOE cited the November 28, 2008, Final Programmatic EIS for the Designation of Energy Corridors on Federal Lands in 11 Western States (DOE/EIS-0386), which it had developed in cooperation with BLM. In this case, public comments resulted in the consideration of alternative routes and operating procedures for energy transmission corridors to avoid sensitive environmental resources. Agency activities under NEPA are hard to separate from other required environmental analyses, further complicating the determination of costs and benefits. CEQ’s NEPA regulations specify that, to the fullest extent possible, agencies must prepare NEPA analyses concurrently with other environmental requirements. 
CEQ’s March 6, 2012, memorandum on Improving the Process for Preparing Efficient and Timely Environmental Reviews under the National Environmental Policy Act states that agencies “must integrate, to the fullest extent possible, their draft EIS with environmental impact analyses and related surveys and studies required by other statutes or executive orders, amplifying the requirement in the CEQ regulations. The goal should be to conduct concurrent rather than sequential processes whenever appropriate.” Different types of environmental analyses may also be conducted in response to other requirements under federal laws such as the Clean Water Act and the Endangered Species Act; executive orders; agency guidance; and state and local laws. As reported in 2011 by CRS, NEPA functions as an “umbrella” statute; any study, review, or consultation required by any other law that is related to the environment should be conducted within the framework of the NEPA process. As a result, the biggest challenge in determining the costs and benefits of NEPA is separating activities under NEPA from activities under other environmental laws. According to DOT officials, the dollar costs for developing a NEPA analysis reported by agencies also includes costs for developing analyses required by a number of other federal laws, executive orders, and state and local laws, which potentially could be a significant part of the cost estimate. Similarly, DOD officials stated that NEPA is one piece of the larger environmental review process involving many environmental requirements associated with a project. As noted by officials from the Bureau of Reclamation within Interior, the NEPA process by design incorporates a multitude of other compliance issues and provides a framework and orderly process—akin to an assembly line— which can help reduce delays. 
In some instances, a delay in NEPA is the result of a delay in an ancillary effort to comply with another law, according to these officials and a wide range of other sources. Some information is available on the frequency and outcome of NEPA litigation. Agency data, interviews with agency officials, and available studies indicate that most NEPA analyses do not result in litigation, although the impact of litigation could be substantial if a lawsuit affects numerous federal decisions or actions in several states. The federal government prevails in most NEPA litigation, according to CEQ and NAEP data and legal studies. Agency data, interviews with agency officials, and available studies indicate that most NEPA analyses do not result in litigation. While no governmentwide system exists to track NEPA litigation or its associated costs, NEPA litigation data are available from CEQ, the Department of Justice, and NAEP. Appendix IV describes how these sources gather information in different ways for different purposes. The number of lawsuits filed under NEPA has generally remained stable following a decline after the early years of implementation, according to CEQ and other sources. NEPA litigation began to decline in the mid-1970s and has remained relatively constant since the late 1980s, as reported by CRS in 2007. More specifically, 189 cases were filed in 1974, according to the twenty-fifth anniversary report of CEQ. In 1994, 106 NEPA lawsuits were filed. Since that time, according to CEQ data, the number of NEPA lawsuits filed annually has consistently been just above or below 100, with the exception of a period in the early and mid-2000s. In 2011, the most recent year for which data were available, CEQ reported 94 NEPA cases, down from the average of 129 cases filed per year from 2001 through 2008. In 2012, U.S. Courts of Appeals issued 28 decisions involving implementation of NEPA by federal agencies, according to NAEP data. 
Although the number of NEPA lawsuits is relatively small when compared with the total number of NEPA analyses, one lawsuit can affect numerous federal decisions or actions in several states, having a far-reaching impact. In addition to CEQ regulations and an agency’s own regulations, according to a 2011 CRS report, preparers of NEPA analyses and documentation may be mindful of previous judicial interpretation in an attempt to prepare a “litigation-proof” EIS. Such an effort may lead to an increase in the cost and time needed to complete NEPA analyses but not necessarily to an improvement in the quality of the documents ultimately produced. The federal government prevails in most NEPA litigation, according to CEQ and NAEP data and other legal studies. CEQ annually publishes survey results on NEPA litigation that identify the number of cases involving a NEPA-based cause of action; federal agencies that were identified as a lead defendant; general information on plaintiffs (i.e., grouped into categories, such as “public interest groups” and “business groups”); reasons for litigation; and outcomes of the cases decided during the year. In general, according to CEQ data, NEPA case outcomes are about evenly split between those involving challenges to EISs and those involving other challenges to the adequacy of NEPA analyses (e.g., EAs and CEs). The federal government successfully defended its decisions in more than 50 percent of the cases from 2008 through 2011. For example, in 2011, 99 of the 146 total NEPA case dispositions—68 percent—reported by CEQ resulted in a judgment favorable to the federal agency being sued or a dismissal of the case without settlement. That rate increased to 80 percent if the 18 settlements reported by CEQ were considered successes. However, the CEQ data do not present enough case-specific details to determine whether the settlements should be considered favorable dispositions. 
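For readers who want to trace the disposition rates cited above, the arithmetic follows directly from the counts CEQ reported for 2011; a minimal check (using only the figures quoted in this section) is:

```python
# Counts from the CEQ data for 2011, as cited in the text above.
favorable = 99   # judgments favorable to the agency, or dismissals without settlement
settlements = 18  # settlements reported by CEQ
total = 146      # total NEPA case dispositions

# Success rate counting only favorable judgments and dismissals.
rate = favorable / total

# Success rate if settlements are also counted as successes.
rate_with_settlements = (favorable + settlements) / total

print(f"{rate:.0%}")                   # 68%
print(f"{rate_with_settlements:.0%}")  # 80%
```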
The plaintiffs, in most cases, were public interest groups. Reporting litigation outcome data similar to CEQ’s, a January 2014 article on Forest Service land management litigation found that the Forest Service won nearly 54 percent of its cases and lost about 23 percent. About 23 percent of the cases were settled, which the study found to be an important dispute resolution tool. Litigants generally challenged logging projects, most frequently under the National Environmental Policy Act and the National Forest Management Act. The article found that the Forest Service had a lower success rate in cases where plaintiffs advocated for less resource use (generally initiated by environmental groups) compared to cases where greater resource use was advocated. The report noted that environmental groups suing the Forest Service for less resource use not only have more potential statutory bases for legal challenges available to them than groups seeking more use of national forest resources, but there are also more statutes that relate directly to enhancing public participation and protecting natural resources. Other sources of information also show that the federal government prevails in most NEPA litigation. For example, NAEP’s 2012 annual NEPA report stated that the government prevailed in 24 of the 28 cases (86 percent) decided by U.S. Courts of Appeals. A NEPA legal treatise similarly reports that “government agencies almost always win their case when the adequacy of an EIS is challenged, if the environmental analysis is reasonably complete. Adequacy cases raise primarily factual issues on which the courts normally defer to the agency. The success record in litigation is more evenly divided when a NEPA case raises threshold questions that determine whether the agency has complied with the statute. An example is a challenge to an agency decision that an EIS was not required. 
Some lower federal courts are especially sensitive to agency attempts to avoid their NEPA responsibilities.” NAEP also provides detailed descriptions of cases decided by U.S. Courts of Appeals in its annual reports. We provided a draft of this product to the Council on Environmental Quality (CEQ) for governmentwide comments in coordination with the Departments of Agriculture, Defense, Energy, Interior, Justice, and Transportation, and the Environmental Protection Agency (EPA). In written comments, reproduced in appendix V, CEQ generally agreed with our findings. CEQ and federal agencies also provided technical comments that we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees; Chair of the Council on Environmental Quality; Secretaries of Defense, Energy, the Interior, and Transportation; Attorney General; Chief of the Forest Service within the Department of Agriculture; Administrator of EPA; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact us at (202) 512-3841 or fennella@gao.gov or gomezj@gao.gov; or (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. This appendix provides information on the scope of our work and the methodology we used to describe the (1) number and type of National Environmental Policy Act (NEPA) analyses, (2) costs and benefits of completing those analyses, and (3) frequency and outcomes of related litigation. We included available information on both costs and benefits to be consistent with standard economic principles for evaluating federal programs and generally accepted government auditing standards. 
To respond to these objectives, we reviewed relevant publications, obtained documents and analyses from federal agencies, and interviewed federal officials and individuals from academia and a professional association with expertise in conducting NEPA analyses. Specifically, to describe the number and type of NEPA analyses and what is known about the costs and benefits of NEPA analyses, we reported information identified through the literature review, interviews, and other sources. We selected the Departments of Defense, Energy, the Interior, and Transportation and the Forest Service within the U.S. Department of Agriculture for analysis because they generally complete the most NEPA analyses. Our findings for these agencies are not generalizable to other federal agencies. To assess the availability of information to respond to these objectives, we (1) conducted a literature search and review with the assistance of a technical librarian; (2) reviewed our past work on NEPA and studies from the Congressional Research Service; (3) obtained documents and analyses from federal agencies; and (4) interviewed officials who oversee federal NEPA programs from the Departments of Defense, Energy, the Interior, Justice, and Transportation; the Forest Service within the Department of Agriculture; the Environmental Protection Agency (EPA); the Council on Environmental Quality (CEQ) within the Executive Office of the President; and individuals with expertise from academia and the National Association of Environmental Professionals (NAEP)—a professional association representing private and government NEPA practitioners. Specifically, to describe the number and type of NEPA analyses from calendar year 2008 through calendar year 2012, we analyzed data identified through the literature review and interviews. We focused on data and documents maintained by CEQ, EPA, and NAEP. 
CEQ and NAEP periodically report data on the number of certain types of NEPA analyses, and EPA maintains a database of Environmental Impact Statements, one of its roles in implementing NEPA. To generate information on the number of Environmental Impact Statements from EPA’s database, we sorted the data by calendar year and counted the number of analyses for each year. We did not conduct an extensive evaluation of this database, although a high-level analysis discovered potential inconsistencies. For example, EPA’s database contained entries with the same unique identifier, making it difficult to identify the exact number of NEPA analyses. We discussed these inconsistencies with EPA officials, who told us that they were aware of certain errors due to manual data entry and the use of different analysis methods. These officials said that EPA EIS data provided to others may differ because EPA periodically corrects the manually entered data. We did not count duplicate records in our analysis of EPA’s data. We believe these data are sufficiently reliable for the purposes of this report. To describe what is known about the costs and benefits of NEPA analysis, we reported the available information on the subject identified through the literature review and interviews. To describe the frequency and outcome of NEPA litigation we (1) reviewed laws, regulations, and agency guidance; (2) reviewed NEPA litigation data generated by CEQ and NAEP; (3) interviewed Department of Justice officials; and (4) reviewed relevant legal studies. Information from these sources is cited in footnotes throughout this report. To answer the various objectives, we relied on data from several sources. To assess the reliability of data collected by agencies and NAEP, we reviewed existing documentation, when available, and interviewed officials knowledgeable about the data. We found all data sufficiently reliable for the purposes of this report. 
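The sort, deduplicate, and count procedure described above can be sketched in a few lines; this is an illustrative sketch only, and the record layout and field names (eis_id, year) are hypothetical, since EPA's actual database schema is not described in this report:

```python
from collections import Counter

# Hypothetical sample of EPA EIS database records; real field names differ.
records = [
    {"eis_id": "20080001", "year": 2008},
    {"eis_id": "20080001", "year": 2008},  # duplicate unique identifier
    {"eis_id": "20080002", "year": 2008},
    {"eis_id": "20120001", "year": 2012},
]

# Drop records that share a unique identifier, keeping the first occurrence,
# mirroring the decision not to count duplicate records.
seen, unique = set(), []
for rec in records:
    if rec["eis_id"] not in seen:
        seen.add(rec["eis_id"])
        unique.append(rec)

# Count EIS filings per calendar year.
per_year = Counter(rec["year"] for rec in unique)
print(dict(per_year))  # {2008: 2, 2012: 1}
```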
We conducted this performance audit from June 2013 to April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Federal National Environmental Policy Act (NEPA) data collection efforts vary by agency. The Council on Environmental Quality’s (CEQ) NEPA implementing regulations set forth requirements that federal agencies must adhere to, and require federal agencies to adopt their own procedures, as necessary, that conform with NEPA and CEQ’s regulations. Federal agencies decide how to apply CEQ regulations in the NEPA process. According to a 2007 Congressional Research Service (CRS) report, the CEQ regulations were meant to be generic in nature, with individual agencies formulating procedures applicable to their own projects. The report states that this approach was taken because of the diverse nature of projects and environmental impacts managed by federal agencies with unique mandates and missions. Consequently, NEPA procedures vary to some extent from agency to agency, and comprehensive governmentwide data on NEPA analyses are generally not centrally collected. As stated by a CEQ official, “there is no master NEPA spreadsheet, and there are many gaps in NEPA-related data collected across the federal government.” To obtain information on agency NEPA activities, the official said that CEQ works closely with its federal agency NEPA contact group, composed of key officials responsible for implementing NEPA in each agency. 
CEQ meets regularly with these officials and uses this network to collect NEPA-related information through requests for information, whereby CEQ distributes a list of questions to relevant agencies and then collects and reports the answers. According to CEQ officials, NEPA data reported by CEQ are generated through these requests, which have quality assurance limitations because related activities at federal departments are themselves diffused throughout various offices and bureaus. Of the agencies we reviewed, the Departments of Defense, the Interior, and Transportation do not centrally collect information on NEPA analyses, allowing component agencies to collect the information, whereas the Department of Energy and the Forest Service within the Department of Agriculture aggregate certain data. Department of Defense (DOD). Each of the military services and defense agencies collects data on NEPA analyses, but DOD does not aggregate information that is collected on the number and type of NEPA analyses at the departmentwide level. Data collection within the military services and agencies is decentralized, according to DOD officials. For example, the Army collects Environmental Impact Statement (EIS) data at the Armywide level, and responsibility for Environmental Assessments (EA) and Categorical Exclusions (CE) are delegated to the lowest possible command level. DOD officials said that each of the services and defense agencies works to maintain a balance between the work that needs to be completed and the management effort needed to accomplish that work. While the level of information collected may vary by service or defense agency, each collects the information that it has determined necessary to manage its NEPA workload. According to these officials, every new information system and data call must generally come from existing funding, taking resources from other tasks. Department of the Interior (Interior). 
Data are not collected at the department level, according to Interior officials, and Interior conducts its own departmentwide data calls to component bureaus and entities whenever CEQ asks for NEPA-related information. The data collection efforts of its individual bureaus vary considerably. For example, the National Park Service uses its Planning, Environment, and Public Comment (PEPC) system as a comprehensive information and public comment site for National Park Service projects. Other Interior bureaus are beginning to track this information or rely on less formal systems rather than formalized databases. For example, the Bureau of Indian Affairs uses its internal NEPA Tracker system—started in September 2012—which the bureau states is to collect information on NEPA analyses to create a better administrative record and to potentially identify new categories of CEs for future development and use. Prior to the NEPA Tracker system, the Bureau of Indian Affairs tracked NEPA analyses less formally, with varying information quality across the bureau’s different entities, according to agency officials. According to Bureau of Land Management officials, the bureau has developed and is currently implementing its ePlanning system, a comprehensive, bureau-wide, Internet-based tool for writing, reviewing, publishing, and receiving public commentary on land use plans and NEPA documents. The tool is fully operational, and the bureau expects to complete implementation in 2015. At the Bureau of Reclamation, NEPA activities are cataloged and tracked by each region or area office according to local procedures, and the information on the number and type of NEPA analyses resides with these offices. NEPA information at the Fish and Wildlife Service, according to agency officials, is collected at the refuge level. Department of Transportation (DOT). 
According to agency officials, each DOT administration—such as the Federal Highway Administration (FHWA), which funds highway projects; the Federal Motor Carrier Safety Administration, which develops commercial motor vehicle and driver regulations; and the Federal Aviation Administration, which is responsible for, among other things, the nation’s air traffic control system—has its own NEPA operating and data collection procedures that track NEPA-related information to varying degrees because each mode of transportation has different characteristics and needs. Environmental reviews for highway projects funded by FHWA have long been of interest to Congress and federal, state, and local stakeholders. FHWA and its 52 division offices have traditionally used an internal data system to track EIS documents. FHWA officials told us that they are in the process of replacing the agency’s legacy system with the new Project and Program Action Information (PAPAI) system, which went online in March 2013. PAPAI is capable of tracking information on EISs, EAs, and CEs, including project completion time frames, but its use is not mandatory, according to DOT officials. Department of Energy (DOE). The Office of NEPA Policy and Compliance within DOE maintains a website where it posts extensive agencywide NEPA documentation, including information on the number and type of NEPA analyses completed since the mid-1990s and a series of quarterly lessons learned reports documenting certain NEPA performance metrics, including information on time and cost. DOE’s September 2013 quarterly report documents available information on its NEPA analysis workload, completion times, and costs from 2003 through 2012. 
DOE began tracking cost and completion time metrics in the mid-1990s because it was concerned about the timeliness and cost of NEPA reviews. DOE officials told us they collect these data because, in their view, “what gets measured gets done.” Making DOE NEPA analyses easily available allows others to apply the best practices and potentially avoid costly litigation, according to DOE officials. Department of Agriculture’s Forest Service. The Forest Service’s computer system, known as the Planning, Appeals, and Litigation System, provides information for responding to congressional requests for NEPA data, supports preparation for responding to lawsuits, and contains information about overall project objectives and design. As stated by agency officials, data from the system can be used to identify trends in the preparation of NEPA analyses over time. This information can be valuable to managers in managing overall NEPA compliance and can identify innovative ways to deal with recurring environmental issues that affect projects, according to Forest Service officials. The system also provides tools to help the agency meet NEPA requirements, including automatic distribution of the schedule of proposed NEPA actions, a searchable database of draft EISs, and electronic filing of draft and final EISs to EPA. CEQ also identified as a best practice the service’s electronic Management of NEPA (eMNEPA) pilot—a suite of web-based tools and databases to improve the efficiency of environmental reviews by enabling online submission and processing of public comments, among other things. On March 17, 2011, CEQ invited members of the public and federal agencies to nominate projects employing innovative approaches to complete environmental reviews more efficiently and effectively. On August 31, 2011, CEQ announced that eMNEPA was selected as part of the first NEPA pilot project. 
CEQ officials told us that they would prioritize the use of CEQ oversight resources to focus on identifying, disseminating, and encouraging agencies to use their additional resources in improving operational efficiency through tools like eMNEPA rather than focusing on improved data collection and reporting. Specifically, CEQ officials said that information technology tools that enable easy access to relevant technical information across the federal government are also of value in enhancing the ability of agencies to conduct efficient and timely NEPA environmental reviews. “. . . even large complex energy projects would require only about 12 months for the completion of the entire EIS process. For most major actions, this period is well within the planning time that is needed in any event, apart from NEPA. The time required for the preparation of program EISs may be greater. The Council also recognizes that some projects will entail difficult long-term planning and/or the acquisition of certain data which of necessity will require more time for the preparation of the EIS. Indeed, some proposals should be given more time for the thoughtful preparation of an EIS and development of a decision which fulfills NEPA’s substantive goals. For cases in which only an environmental assessment will be prepared, the NEPA process should take no more than 3 months, and in many cases substantially less, as part of the normal analysis and approval process for the action.” CEQ’s National Environmental Policy Act (NEPA) regulations do not specify a required time frame for completing NEPA analyses. The regulations state that CEQ has decided that prescribed universal time limits for the entire NEPA process are too inflexible. 
The regulations also state that federal agencies are encouraged to set time limits appropriate to individual actions and should take into consideration factors such as the potential for environmental harm, size of the proposed action, and degree of public need for the proposed action, including the consequences of delay. CEQ’s March 6, 2012, memorandum on Improving the Process for Preparing Efficient and Timely Environmental Reviews under the National Environmental Policy Act encourages agencies to develop meaningful and expeditious timelines for environmental reviews, and it amplifies the factors an agency should take into account when setting time limits, noting that establishing appropriate and predictable time limits promotes the efficiency of the NEPA process. The CEQ regulations also require agencies to reduce delay by, among other things, integrating the NEPA process into early project planning, emphasizing interagency cooperation, integrating NEPA requirements with other environmental review requirements, and adopting environmental documents prepared by other federal agencies. In general, there is no governmentwide system to track National Environmental Policy Act (NEPA) litigation and its associated costs. The Council on Environmental Quality (CEQ), the Department of Justice, and the National Association of Environmental Professionals (NAEP) gather NEPA litigation information in different ways for different purposes. CEQ collects NEPA litigation data through periodic requests for information, whereby it distributes a list of questions to the general counsel offices of relevant agencies and then collects and reports the information on its website. CEQ’s NEPA litigation survey presents information on NEPA-based claims brought against agencies in court, including aggregated information on types of lawsuits and who brought the suits. 
The survey results do not present information on the cost of NEPA litigation because, according to officials from several of the agencies we reviewed, agencies do not track this information. For example, Forest Service officials told us that they do not centrally track the cost or time associated with the preparation for litigation. As another example, the Department of Energy’s litigation data do not include the cost of litigation or the time spent on litigation-related tasks, although they include the number of NEPA-related cases over time. The Department of Justice defends nearly all federal agencies when they face NEPA litigation. Such litigation is handled both by the Department of Justice’s Environment and Natural Resources Division and by individual U.S. Attorneys’ Offices, depending upon the agency, the type of case, and the expertise of the department’s personnel. The Environment and Natural Resources Division’s Case Management System database tracks limited information on NEPA cases handled by that division, and the Executive Office for U.S. Attorneys case management system, called the Legal Information Office Network System, tracks NEPA cases at individual U.S. Attorneys’ Offices to some extent. However, Department of Justice officials told us that these systems do not interface with each other, so it would be impossible to gather comprehensive information on NEPA litigation from the Department of Justice. Agency personnel provide the Department of Justice with the administrative record that forms the basis of judicial review and provide assistance throughout the litigation process, as needed. Further, Department of Justice officials told us that the department is not able to comprehensively identify all NEPA litigation because a single case could have numerous other environmental claims in addition to a single NEPA claim. In such instances, the Environment and Natural Resources Division’s Case Management System may not capture every claim raised in the case. 
As a result, the Department of Justice does not track trends in NEPA litigation or staff hours spent on NEPA cases. The cost of collecting the information would outweigh the management benefits of doing so, according to these officials. The Department of Justice’s NEPA litigation data are not comparable to CEQ’s because the department’s system is designed to track cases, while CEQ provides information on NEPA events—such as the number of cases filed, number of injunctions or remands, and other decisions. There could be multiple NEPA events or decisions related to a single case. Department of Justice officials stated that they would not be able to reconcile CEQ’s information with information in Department of Justice systems. NEPA litigation data collected by the third source—NAEP—differ from those collected by CEQ or the Department of Justice. NAEP collects information on NEPA cases decided by U.S. Courts of Appeals because these cases are generally the most significant to the NEPA practitioners that are NAEP’s members, according to NAEP officials. The NAEP report contains case study summaries of the latest developments in NEPA litigation to help NEPA practitioners understand how to account for new court-mandated requirements in NEPA analyses and does not attempt to track all NEPA litigation across the government. In addition to the individuals named above, Anne Johnson and Harold Reich (Assistant Directors); Ronnie Bergman; Cindy Gilbert; Richard P. Johnson; Terence Lam; Alison O’Neill; Giuseppe Thompson; and John Wren made key contributions to this report.
NEPA requires all federal agencies to evaluate the potential environmental effects of proposed projects—such as roads or bridges—on the human environment. Agencies prepare an EIS when a project will have a potentially significant impact on the environment. They may prepare an EA to determine whether a project will have a significant potential impact. If a project fits within a category of activities determined to have no significant impact—a CE—then an EA or an EIS is generally not necessary. The adequacy of these analyses has been a focus of litigation. GAO was asked to review issues related to costs, time frames, and litigation associated with completing NEPA analyses. This report describes information on the (1) number and type of NEPA analyses, (2) costs and benefits of completing those analyses, and (3) frequency and outcomes of related litigation. GAO included available information on both costs and benefits to be consistent with standard economic principles for evaluating federal programs, and selected the Departments of Defense, Energy, the Interior, and Transportation, and the USDA Forest Service for analysis because they generally complete the most NEPA analyses. GAO reviewed documents and interviewed individuals from federal agencies, academia, and professional groups with expertise in NEPA analyses and litigation. GAO's findings are not generalizable to agencies other than those selected. This report has no recommendations. GAO provided a draft to CEQ and agency officials for review and comment, and they generally agreed with GAO's findings. Governmentwide data on the number and type of most National Environmental Policy Act (NEPA) analyses are not readily available, as data collection efforts vary by agency. 
NEPA generally requires federal agencies to evaluate the potential environmental effects of actions they propose to carry out, fund, or approve (e.g., by permit) by preparing analyses of different comprehensiveness depending on the significance of a proposed project's effects on the environment—from the most detailed Environmental Impact Statements (EIS) to the less comprehensive Environmental Assessments (EA) and Categorical Exclusions (CE). Agencies do not routinely track the number of EAs or CEs, but the Council on Environmental Quality (CEQ)—the entity within the Executive Office of the President that oversees NEPA implementation—estimates that about 95 percent of NEPA analyses are CEs, less than 5 percent are EAs, and less than 1 percent are EISs. Projects requiring an EIS are a small portion of all projects but are likely to be high-profile, complex, and expensive. The Environmental Protection Agency (EPA) maintains governmentwide information on EISs. A 2011 Congressional Research Service report noted that determining the total number of federal actions subject to NEPA is difficult, since most agencies track only the number of actions requiring an EIS. Little information exists on the costs and benefits of completing NEPA analyses. Agencies do not routinely track the cost of completing NEPA analyses, and there is no governmentwide mechanism to do so, according to officials from CEQ, EPA, and other agencies GAO reviewed. However, the Department of Energy (DOE) tracks limited cost data associated with NEPA analyses. DOE officials told GAO that they track the money the agency pays to contractors to conduct NEPA analyses. According to DOE data, its median EIS contractor cost for calendar years 2003 through 2012 was $1.4 million. For context, a 2003 task force report to CEQ—the only available source of governmentwide cost estimates—estimated that a typical EIS cost from $250,000 to $2 million. 
EAs and CEs generally cost less than EISs, according to CEQ and federal agencies. Information on the benefits of completing NEPA analyses is largely qualitative. According to studies and agency officials, some of the qualitative benefits of NEPA include its role in encouraging public participation and in discovering and addressing project design problems that could be more costly in the long run. Complicating the determination of costs and benefits, agency activities under NEPA are hard to separate from other required environmental analyses under federal laws such as the Endangered Species Act and the Clean Water Act; executive orders; agency guidance; and state and local laws. Some information is available on the frequency and outcome of NEPA litigation. Agency data, interviews with agency officials, and available studies show that most NEPA analyses do not result in litigation, although the impact of litigation could be substantial if a single lawsuit affects numerous federal decisions or actions in several states. In 2011, the most recent data available, CEQ reported 94 NEPA cases filed, down from the average of 129 cases filed per year from calendar year 2001 through calendar year 2008. The federal government prevails in most NEPA litigation, according to CEQ and legal studies.
According to APTA, 6,800 organizations—ranging from large multi-modal systems in major metropolitan areas to single-vehicle special demand-response service providers that transport senior citizens and the disabled—provided public transportation in 2013. While it is difficult to establish the exact dimensions of urban and rural transit service because transit providers headquartered in urban areas may also serve rural areas, urban transit providers primarily serve areas with populations of 50,000 or more. Within this category, small urbanized areas are those with populations under 200,000, and include small cities, college towns, and vacation or resort areas, while large urbanized areas are those with 200,000 or more people, including the country’s major metropolitan areas. The 834 agencies that serve urban areas accounted for more than 98 percent of all transit passenger trips in 2013, according to APTA. Non-urbanized, or rural, areas have populations of fewer than 50,000 people. In 2013, approximately 1,400 public transit agencies operated in rural areas, accounting for 1.5 percent of all passenger trips, according to APTA. Transit providers in rural areas operate in a variety of environments, serving areas that may span thousands of square miles in remote areas—meaning that trips may be long with only a few riders at any given time—or be located in more developed rural areas surrounding major cities. Compared to large urban systems, rural transit providers generally have low budgets, few employees, and small vehicle fleets. However, these transit systems provide vital mobility and connections to essential services for the approximately 75 million people who live in rural America. Transit providers serve the public through a variety of transportation modes. In this report, we use the following descriptions of transportation modes: Fixed-route bus service: rubber-tired passenger vehicles that operate on fixed routes and schedules over roadways. 
Diesel, gasoline, battery, or alternative fuel engines power these vehicles. This category includes bus rapid transit, commuter bus, and trolley bus. Paratransit: accessible, origin-to-destination transportation service that operates in response to calls or requests from riders. It is an alternative to fixed-route transit service, which operates according to regular schedules along prescribed routes with designated stops. Demand-response (also referred to as dial-a-ride): vehicles that operate in response to calls or requests from passengers. Small buses, vans, or taxis provide transportation service that is not on a fixed route or schedule. For example, transportation may be provided for individuals whose access may be limited or whose health condition prevents them from using the regular fixed-route bus service. Commuter rail: vehicles that operate along electric or diesel-propelled railways and provide train service for local, short distance trips between a central city and adjacent suburbs. Heavy rail: vehicles that operate on electric railways with high-volume traffic capacity. This mode has separated rights-of-way, sophisticated signaling, high platform loading, and high-speed rapid-acceleration rail cars operating singly or in multi-car trains on fixed rails. Light rail: vehicles that operate on electric railways with light-volume traffic capacity. The mode may have either shared or exclusive rights-of-way, low or high platform loading, or single or double car trains. ITS encompasses a broad range of wireless and wireline communications-based information and electronic technologies, including technologies for collecting, processing, disseminating, or acting on information in real time to improve the operation and safety of the transportation system. DOT identifies 11 core technologies that are useful for public transit providers to deploy. 
Figure 1 illustrates how seven ITS technologies are used on a transit bus and how the public may interact with them when utilizing fixed-route bus service. Other ITS technologies not depicted in figure 1 include: Communication technologies: technologies that pass information from one user to another in a useable form via wire, wireless, radio, the Internet, or other links to facilitate interaction among drivers, dispatchers, emergency responders, and other personnel. Geographic information systems (GIS) & data management: systems that manage and create spatial data such as location of bus stops, routes, transit facilities and the regional street network. The management, analysis, communication, and display of this information supports automatic vehicle location, automatic passenger counters, computer aided dispatch, and other technologies. Maintenance management systems: technologies that monitor everything from fuel and other fluid levels to engine temperature. Weather information systems: the hardware, software, and communications interfaces necessary to provide real-time information on weather conditions to transportation agencies and their customers. Deployment of transit ITS may involve a variety of transportation stakeholders in the public and private sectors. Transit ITS technologies may be proprietary systems sold by technology firms in the private sector. Transit providers may also hire consulting firms to assist them in the ITS procurement and deployment process, including developing system requirements and the request for proposals from vendors. Further, the operation of certain ITS, such as a transit signal priority system, involves not only the transit provider but the municipality that owns and operates the traffic signal equipment. Smaller neighboring transit providers may also participate in an ITS deployment, such as a regional electronic fare collection system, spearheaded by a larger transit provider. 
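The relationship between GIS and automatic vehicle location described above, where the GIS layer stores stop and route geometry and AVL position reports are matched against it, can be sketched in a few lines. This is a minimal illustration using a plain haversine distance lookup, not any particular vendor's system; the stop names and coordinates are invented:

```python
import math

# Hypothetical stop inventory: the kind of spatial data a transit GIS layer
# would hold (coordinates are illustrative, not from a real system).
STOPS = [
    ("Main St & 1st Ave", 44.9778, -93.2650),
    ("Main St & 5th Ave", 44.9801, -93.2605),
    ("Depot Transit Center", 44.9750, -93.2700),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6_371_000  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_stop(vehicle_lat, vehicle_lon):
    """Match an AVL position report to the closest stop in the GIS layer."""
    return min(STOPS, key=lambda s: haversine_m(vehicle_lat, vehicle_lon, s[1], s[2]))
```

In practice this kind of stop matching underpins automatic stop announcements, automatic passenger counter geocoding, and the real-time arrival predictions fed to traveler information systems.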
Metropolitan planning organizations may serve a key role in planning ITS deployment, as they have responsibility for the regional transportation processes in urbanized areas. Transit providers may use FTA formula and discretionary grants, among other sources, for projects that include ITS deployments. They may also acquire ITS components such as security systems through funding provided by the Department of Homeland Security. Additionally, state and local governments may use their own funds to finance ITS projects. The primary formula grant programs that transit providers could use to fund ITS are (1) urbanized area grants, which provide funds to urban areas for capital projects, such as purchasing buses, planning, job access and reverse commute projects, and operating and other expenses, and (2) rural area grants, which provide funds to states and tribal areas to be used for capital, operating, and other expenses to support public transportation in rural areas. The Fixing America’s Surface Transportation (FAST) Act authorizes several competitive grant programs that recipients could use to fund transit ITS projects, including (1) the Advanced Transportation and Congestion Technologies Deployment Initiative, which provides grant funding for recipients to deploy a range of technologies, including transit ITS such as advanced traveler information systems and electronic pricing and payment systems, and (2) the Pilot Program for Innovative Coordinated Access and Mobility, which funds innovative projects that improve the coordination of transportation services with non-emergency medical transportation services, and could include ITS projects. 
Other FTA competitive funding programs that have been used, at least in part, for transit ITS include: Veterans Transportation and Community Living Initiative (VTCLI): VTCLI has funded projects in urban, suburban, and rural communities to strengthen and promote “one-call” information centers and other tools that enable veterans, active service members, military families, and others to learn about and arrange for locally available transportation services that connect them with work, education, health care, and other vital services in their communities. Mobility Services for All Americans (MSAA) Deployment Planning Projects: DOT’s MSAA initiative aims to improve transportation services and access to employment, healthcare, education, and other community activities through a coordinated effort enabled by various ITS technologies and applications. MSAA funds are awarded to selected local and regional organizations to plan coordinated mobility services. Funded projects use ITS to coordinate deployment of on-demand public transportation systems, such as paratransit, for people with mobility issues. The grants help provide vital services for veterans, seniors, people with disabilities, and others who rely on community transportation providers to access everyday needs such as employment, medical care, and groceries. Transit providers often integrate ITS technologies into other capital purchases, like new buses; therefore, it is difficult to determine the total amount of FTA funds transit providers use solely for ITS. 
Although this does not represent total ITS spending, FTA officials estimated that the federal funds awarded for engineering, acquiring, constructing, rehabilitating/renovating, and/or leasing signal and communication equipment, surveillance/security systems, route signing, mobile fare collection equipment, vehicle locator systems, and signage (all of which, according to officials, would be considered ITS) totaled nearly $527 million in fiscal years 2012 through 2014. Congress established the federal ITS program in the Intelligent Vehicle-Highway Systems Act of 1991, which was enacted as part of the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) to research, develop, and operationally test ITS technologies and promote their implementation. More recently, the FAST Act authorized $100 million annually for the federal ITS program for fiscal years 2016 through 2020, the same levels the previous surface transportation authorization—Moving Ahead for Progress in the 21st Century (MAP-21)—authorized for fiscal years 2013 and 2014. Within the Office of the Assistant Secretary for Research and Technology, the JPO coordinates the federal ITS program and initiatives in consultation with other surface transportation modal administrations across DOT, including the Federal Highway Administration, Federal Motor Carrier Safety Administration, Federal Railroad Administration, FTA, Maritime Administration, and the National Highway Traffic Safety Administration. The JPO supports the overall advancement of ITS through investments in major research initiatives, such as research on advanced connected vehicle and automation technologies, exploratory studies, and a deployment support program that includes technology transfer and training. DOT has reported that transit providers located in major U.S. cities have deployed a majority of the core transit ITS technologies described above. 
To determine the extent to which ITS technologies have been deployed, DOT conducts a survey on a regular basis that measures ITS deployment by state and local transportation agencies—including transit providers. The results of DOT’s most recent survey from 2013 indicated that 142 transit providers had deployed many of the core transit ITS technologies across several types of transit vehicles, including buses (see table 1). The survey also showed that these transit providers had deployed traveler information systems—using technologies such as websites, mobile applications, and electronic message signs at transit stops and stations—to provide customers with information on routes, schedules, fares, and real-time information on vehicle arrival and departure times. Transit providers also reported on planned ITS deployment between 2013 and 2016, and the survey found that future deployment focused on computer-aided dispatch, automatic vehicle location, traveler information systems to provide transit information in real time, and improvements to electronic fare payment systems. Similar to the JPO’s 2013 survey results, the 31 large and medium urban transit providers we interviewed told us they had deployed most of the ITS technologies in our review. As shown in table 2, officials from the large and medium urban transit providers we interviewed reported deploying 9 of the 11 ITS technologies in our review, but medium urban transit providers reported deploying some of these to a slightly lesser extent. Only three of the selected large and medium urban transit providers had deployed a weather information system, which as described above, consists of equipment such as pavement and water-level sensors to monitor weather conditions. The large and medium urban transit providers we interviewed generally told us that they had deployed most of the technologies across several modes of service to some extent, but primarily on their bus services. 
Exceptions were traveler information systems, which the majority reported using across all modes of service, and GIS, which they used in concert with their computer-aided dispatch and automatic vehicle location systems and for transit planning, and which can be applied across all modes of service. While officials across the selected transit providers reported deploying these technologies, we found there was variation by provider in the specific features and types of technologies deployed. Specifically, there was variation among providers in both the type of traveler information—such as real-time information on vehicles’ schedule adherence versus static information on routes, schedules, and fares—and the ways in which the information was provided, such as through websites, text messages, and electronic message signs at transit stops. We found another example in security systems, where, depending on the provider, different components were deployed, such as on-vehicle cameras, audio surveillance, and silent alarms. There were also differences in the length of time the large and medium urban transit providers had deployed certain ITS technologies. For example, officials from 13 of the 18 large urban transit providers told us that they had deployed automatic vehicle location and computer-aided dispatch technologies prior to 2010, and 6 transit providers said that they are currently updating or have updated these technologies at least once since then. Officials from 7 of the 13 medium urban transit providers said they were in the process of deploying or had deployed these technologies in or after 2010. Although a majority of the large and medium urban transit providers reported that they had deployed transit signal priority, the extent to which they used this technology varied by provider. 
For example, most of the transit providers that reported using transit signal priority told us they did so in a limited manner, such as along one or two major corridors in their transit system, or on their bus rapid transit service. We previously found that transit signal priority is the most common ITS technology included in bus rapid transit projects. While 18 large and medium urban transit providers reported using transit signal priority, if only to a limited extent, officials from six of these transit providers told us they had plans to expand or would like to expand its use. Officials from three of the medium urban transit providers who were not using transit signal priority told us that the technology was being considered in their plans for proposed bus rapid transit projects. Transit providers are now making some of the data collected from their ITS technologies, such as GIS, computer-aided dispatch, automatic vehicle location, and automatic passenger counters, available to the public, a concept known as “open data.” A 2015 Transit Cooperative Research Program (TCRP) study on open data found that an increasing number of transit providers have begun making their schedule and real-time operational data available to the public since 2010. Open data has resulted in numerous benefits and innovations that could not have been accomplished solely by transit staff, such as the proliferation of mobile phone applications developed by outside entities that provide passengers with access to transit information. Officials from 22 of the 31 large and medium urban transit providers we interviewed told us that they had made data from their ITS technologies open to the public, and officials from the majority of these providers said that outside software developers had used or hoped to use this data to create mobile applications for their passengers. 
Officials from three of the large and medium urban transit providers reported that having external entities develop mobile applications reduced costs and saved staff time. To make ITS data available to the public or other users, transit providers must use a data standard that allows users to open and read the information it contains. According to the literature, General Transit Feed Specification (GTFS) is the standard adopted by most transit providers and enables them to share static schedule information. Officials from 28 of the 31 large and medium urban transit providers reported using GTFS, while 12 of the 31 large and medium urban transit providers reported using GTFS-realtime, which allows transit providers to format real-time vehicle information and service alterations. Several of the large and medium urban transit providers told us they use GTFS because it allows them to publish their data into Google transit maps. Two of the transit providers we interviewed told us that while they have not made their data open to the public, they have formatted their data into GTFS to allow it to be used for Google transit maps. In addition to sharing data with the public, the large and medium urban transit providers in our review reported that they share data with regional transportation stakeholders to support multimodal planning and management. For example, officials from one of the large urban transit providers we interviewed told us they shared their data with local university researchers who received funding through the local metropolitan planning organization to archive regional transportation data, including transit and highway performance data. According to these officials, these data have been used for research and regional transportation planning. 
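The static GTFS standard discussed above is a zip archive of plain CSV text files, such as stops.txt and routes.txt, which is what makes the data easy for outside developers to consume (GTFS-realtime, by contrast, is a Protocol Buffers feed). A minimal sketch of reading a stops.txt file follows; the stop IDs, names, and coordinates are hypothetical, but the column names are the standard's required fields for stops:

```python
import csv
import io

# A toy GTFS stops.txt: a plain CSV file with a header row. The data
# here are invented for illustration.
STOPS_TXT = """stop_id,stop_name,stop_lat,stop_lon
1001,Main St & 1st Ave,44.9778,-93.2650
1002,Main St & 5th Ave,44.9801,-93.2605
"""

def parse_gtfs_stops(text):
    """Read a GTFS stops.txt file into a dict keyed by stop_id."""
    reader = csv.DictReader(io.StringIO(text))
    return {
        row["stop_id"]: {
            "name": row["stop_name"],
            "lat": float(row["stop_lat"]),
            "lon": float(row["stop_lon"]),
        }
        for row in reader
    }

stops = parse_gtfs_stops(STOPS_TXT)
```

Because the format is this simple, third-party trip planners and mobile applications can ingest any agency's feed without custom integration work, which is much of what drives the open-data benefits the providers described.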
According to DOT, an Integrated Corridor Management (ICM) approach, where transportation agencies operate transportation corridors in a coordinated and integrated manner, can include providing multimodal traveler information en-route in addition to pre-trip information as travel conditions change. Officials from a large urban transit provider that is using ICM along a major road corridor told us that having integrated data on real-time traffic conditions, transit, and parking availability has enabled travelers to make better travel decisions and reduces congestion on roadways. As we have described above, the large and medium urban transit providers are deploying the majority of the core transit ITS technologies in our review; however, some of the transit providers described using more innovative types or features of these technologies. Many of the ITS technologies in our review represent a range of systems or components from which a transit provider can select different options, depending on needs and desired uses, and some of these options are more sophisticated than others. For example, there are various types of electronic fare payment systems available to transit providers. Some types, such as mobile phone payment applications, are considered more advanced than others, including magnetic stripe cards. Transit providers may also select different fare systems, including closed systems, which use smart cards that store cash value and can only be used within that transit system or on other transit systems that accept that smart card, or a more advanced open system, which accepts numerous payment types, such as credit or debit cards, issued by other organizations. For technologies such as computer-aided dispatch, maintenance management systems, and security systems, transit providers can choose from a variety of offered features, some of which may be considered more advanced than others. 
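The closed, stored-value fare model described above can be sketched as a card account that only the issuing system's readers can debit, in contrast to an open system where a bank-issued credit or debit card is itself the fare medium. The class name, API, and fare amounts below are hypothetical, not any agency's actual system:

```python
class StoredValueCard:
    """Minimal model of a closed-system smart card: cash value lives on
    the card account and is usable only within the issuing transit system."""

    def __init__(self, balance_cents=0):
        self.balance_cents = balance_cents

    def load(self, cents):
        """Add value at a vending machine or retail outlet."""
        if cents <= 0:
            raise ValueError("load amount must be positive")
        self.balance_cents += cents

    def tap(self, fare_cents):
        """Deduct a fare at the reader; reject the tap if funds are short."""
        if self.balance_cents < fare_cents:
            return False  # reader would signal insufficient fare
        self.balance_cents -= fare_cents
        return True
```

For example, loading $5.00 and tapping for a $2.25 fare leaves $2.75 on the card. The design choice between this model and an open system largely turns on who carries the account infrastructure: the transit provider (closed) or external payment networks (open).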
In addition, according to DOT, the proliferation of mobile devices and real-time information has led to a shift over the past several years in the way transit providers can disseminate traveler information to their existing and potential passengers. For example, new opportunities have emerged for transit providers to offer mobile ticketing and mobile applications for passengers to retrieve real-time transit information, conduct trip planning, and make transit reservations (see fig. 2). Below are examples of some of the more advanced types and features of ITS technologies. Smart card electronic fare payment: Officials from 15 of the 18 large urban transit providers and 4 of the 13 medium urban transit providers we interviewed told us that they have deployed a smart card electronic fare payment system. Four large urban transit providers have deployed an open payment system. Predictive and real-time maintenance management systems: Officials from six large urban transit providers and one medium urban transit provider told us they have deployed maintenance management systems that can transmit maintenance information to the provider in real time or make predictions when vehicle parts may fail. Traveler information and mobile fare payment smartphone applications: Many of the large and medium urban transit providers we interviewed have made smartphone applications available to their passengers. These applications can provide traveler information or the ability to pay fares electronically. Specifically, 15 of the 18 large urban transit providers and 4 of the 13 medium urban transit providers we interviewed said they have deployed smartphone applications that provide passengers real-time transit information. 
Officials from one of the large urban transit providers told us that they were about to deploy a smartphone application that allows passengers to make ride requests, instead of calling a dispatcher, on their demand-response service that connects residents living in less-populated areas to transit. Additionally, officials from five large urban transit providers and two medium urban transit providers told us they have deployed mobile ticketing, and two of these providers told us they were using or were developing a smartphone application that used mobile ticketing in creative ways. These capabilities included providing passengers with the ability to purchase mobile tickets for transit and special events, such as tickets to the state fair or local zoo, and linking mobile ticketing to private ride-hailing companies to help passengers reach destinations that are outside the transit service area. There are several factors beyond the size of the transit provider and population served that may contribute to a transit provider’s decision to adopt more advanced technologies. We found several examples from literature and our interviews of transit providers located in smaller areas deploying advanced ITS technologies. The JPO has reported that there are a number of factors that influence ITS adoption across transportation agencies, including: agency characteristics, such as their risk tolerance, level of knowledge and expertise, and adoption rate of peer agencies; external environmental characteristics, such as agency budgets, funding opportunities, agency priorities, and presence of a technology champion; and the characteristics of the transportation user, such as public acceptance and attitudes toward proposed technologies. We found examples of advanced ITS adoption among small urban and rural transit providers in some of the studies that we reviewed and from stakeholder interviews. 
For example, a 2015 Transit Cooperative Research Program report on next generation electronic fare payment systems highlighted the experiences of one small urban transit provider’s upgrade to smart card-enabled electronic fare payment. Officials from one of the industry associations we interviewed told us that smaller transit providers may receive and use grants to invest in more innovative technologies. For example, FTA officials provided us with examples of how transit providers are using MSAA and VTCLI grants to deploy ITS technologies in innovative ways to help improve human service transportation in rural areas. An official from another industry association told us that smaller transit providers located in niche communities, such as cities where universities or vacation destinations are located and communities that border metropolitan cities, are using more innovative ITS technologies. These communities have riders that have certain expectations of and are more reliant on transit and these factors drive providers to adopt advanced technologies. We surveyed a stratified random sample of 312 small urban and rural transit providers to learn about the extent of their use of ITS technologies. This sample is generalizable to a target population of 314 Section 5307 recipients serving small urbanized areas and 582 Section 5311 sub-recipients serving non-urbanized (rural) areas that reported to the FTA’s National Transit Database in reporting year 2013. We refer to this target population as “small urban and rural transit providers.” We estimate that nearly 75 percent of small urban and rural transit providers use ITS through the deployment of security systems. Of those providers, approximately 81 percent are using closed circuit TV cameras and 58 percent are using audio surveillance; other less-common systems include silent alarms, object detection sensors, and covert microphones (see fig. 3). 
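Population estimates like the "nearly 75 percent" figure above come from a stratified design: each stratum's sample proportion is weighted by that stratum's share of the target population. A minimal sketch follows; the population sizes match the report's target population (314 small urban and 582 rural providers), but the sample counts are invented for illustration and the real survey's weighting may differ:

```python
def stratified_proportion(strata):
    """Estimate a population proportion from stratum-level samples.

    `strata` maps a stratum name to a tuple of
    (population_size, sample_size, sample_yes_count).
    """
    total_pop = sum(pop for pop, _, _ in strata.values())
    return sum(
        (pop / total_pop) * (yes / n)   # stratum weight times sample proportion
        for pop, n, yes in strata.values()
    )

# Hypothetical sample counts for providers reporting security systems.
est = stratified_proportion({
    "small_urban": (314, 120, 96),   # 80% of sampled small urban providers
    "rural":       (582, 192, 138),  # about 72% of sampled rural providers
})
```

The weighting matters because rural providers outnumber small urban providers in the target population, so an unweighted pooled proportion would understate the rural stratum's influence.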
Further, we estimate that of those transit providers that are using this technology, about 69 percent of small urban and rural providers use security systems on their bus fleet, and approximately 72 percent use this technology on demand-response vehicles. According to the JPO, urban and rural public transportation systems can benefit from the implementation of security systems because they can be used to monitor the safety and security of passengers, employees, equipment, and materials. Small urban and rural transit providers are also using other ITS technologies. Based on the survey results, we estimate that about half of small urban and rural providers are using computer-aided dispatch, automatic vehicle location, and GIS. Approximately 55 percent of small urban and rural providers are using computer-aided dispatch software. According to a 2010 North Dakota State University study on technology adoption by small urban and rural transit providers, computer-aided dispatch packages are a core component of rural transit technology systems, and may provide record-keeping and billing capabilities, improve the accuracy of reservations, and give transit providers the ability to provide real-time customer information. Further, we estimate that approximately 51 percent of the target population is using automatic vehicle location technology. By providing the real-time position of transit vehicles to a central location, this technology can enable transit dispatchers to increase the average number of rider pick-ups per hour. Additionally, approximately 47 percent of small urban and rural providers reported using a GIS system. According to a DOT report on rural ITS, although GIS is assumed to be a component in urban ITS deployment, it can be a significant stand-alone technology for rural transit agencies. 
The report states that GIS applications have given smaller operators new tools for improving service planning and operations and may provide the basis for additional deployment, such as automatic vehicle location and computer-aided dispatch. Although about half of small urban and rural transit providers reported using the three aforementioned technologies, based on our survey results, we estimate that most small urban and rural transit providers are not using each of five other technologies in our review: maintenance management systems, traveler information systems, automatic passenger counters, electronic fare payment, and transit signal priority. Estimated use of these five technologies is illustrated in figure 4. In some cases, small urban providers are using technologies that we did not find widely deployed by rural providers. For example, according to our survey, approximately 50 percent of small urban providers are using a variety of means to provide traveler information (see table 3). We estimate that about half or more of small urban and rural providers that are not using traveler information systems, automatic passenger counters, electronic fare payment, and maintenance management systems reported the cost of the technology as the reason they are not using that technology. Additionally, most of the small urban and rural transit providers that are not using transit signal priority indicated that they do not perceive a need for this technology in their operations. In open-ended responses to the survey, some small urban and rural transit providers offered other reasons they were not currently using ITS. 
For example, five providers reported they are not using a maintenance management system because they contract out their maintenance services; four providers said that they do not use automatic passenger counters because they either provide only demand-response service, or they manually count passengers; and finally, four providers reported that they do not use electronic fare payment because they do not charge a fare for their transportation services. Small urban and rural providers reported that their plans to deploy ITS in the future focus on security systems and automatic vehicle location. For each of the nine technologies, our survey asked transit providers that indicated they were not using the technology if they had plans to deploy it in the next five years (see fig. 5). Each ITS technology a transit provider deploys may provide a unique set of benefits, and DOT has reported on some of these benefits based on the results of its regular ITS deployment survey and in evaluations of ITS benefit studies from the JPO’s Knowledge Resources Databases. For example, DOT’s 2013 ITS deployment survey showed that transit providers rated communication technologies, automatic vehicle location, and security cameras as having provided them with the highest benefits. Also, in DOT’s 2014 updated report on information on the benefits, costs, and lessons learned regarding ITS deployment studies, the agency reported findings that transit providers experienced improvements in operations and fleet management, such as achieving improved service reliability through computer-aided dispatch, decreased transit travel times through the use of transit signal priority, and increased ridership from using traveler information systems. We asked the large and medium urban transit providers in our review to describe the types of benefits their ITS has collectively generated. 
Transit providers we interviewed identified benefits from ITS broadly related to improvements in administration, operations, and customer satisfaction. Below are descriptions and examples of the five main types of benefits reported by the majority of the large and medium urban transit providers we interviewed. Improvements in on-time performance and schedule adherence: Officials from 25 of the 31 large and medium urban transit providers said that data from their ITS technologies—such as automatic passenger counters, automatic vehicle location, and computer-aided dispatch—have improved or were expected to improve the extent to which service remains on schedule, which has improved their on-time performance. For example, officials from 7 of these providers told us computer-aided dispatch and automatic vehicle location enable them to monitor service in real time and react to situations that might create service delays—such as traffic, accidents, or vehicle breakdowns—by holding buses or creating route detours. Further, officials from 4 of the large and medium urban transit providers said they have used information from these technologies to change schedules to better reflect actual arrival and departure times, which has improved their on-time performance. Enhanced safety: Officials from 24 of the 31 large and medium urban transit providers we interviewed told us that ITS technologies—such as automatic vehicle location, computer-aided dispatch, and various elements of their security systems—have improved the safety of their passengers and operators by helping them prevent, manage, and review incidents, such as criminal behavior and accidents. 
For example, officials from 3 of these transit providers told us that automatic vehicle location and computer-aided dispatch have reduced the number of accidents by automating some of the driver’s tasks, including providing drivers with turn-by-turn directions and automating bus stop announcements, and eliminating some of their distractions. Also, officials from 7 of the large and medium urban transit providers told us that audio and video surveillance technologies have enabled their organizations and emergency responders to monitor and better respond to incidents. More efficient scheduling and routing: Officials from 24 of the 31 large and medium urban transit providers told us that data from their ITS technologies—such as automatic passenger counters, automatic vehicle location, computer-aided dispatch, and electronic fare payment systems—have enabled them to make improvements to their transit service. For example, officials from 16 of the transit providers experiencing this benefit told us these technologies provide them with more precise information, such as passenger travel behavior and traffic congestion. This information enables them to make data-driven decisions about service—such as routes, schedules, and bus stop locations—that make travel more efficient. Some of these officials told us that prior to these systems, agencies made service changes based on customer complaints and on-site observations, which was less efficient, required more resources, and was less accurate. Improvements in reporting and record-keeping: Officials from 21 of the 31 urban transit providers told us that ITS technologies including automatic passenger counters, automatic vehicle location, computer-aided dispatch, and maintenance management systems have improved their ability to document and report new or more accurate data. 
For example, officials from 11 of these providers said that they are now able to collect additional and more accurate statistics, such as on their on-time performance, number of bus passengers by stop, and vehicle health and parts inventory. In addition, officials from 7 of the large and medium urban transit providers we interviewed told us that these technologies have made it easier to collect and report data on transit service to their governing boards and to meet federal reporting requirements. For example, 3 of the large and medium urban transit providers told us that they are able to use ITS technologies to automatically collect information such as number of passengers rather than sending staff out to collect this information.

Increased customer satisfaction: Officials from 17 of the 31 large and medium urban transit providers told us that ITS technologies, especially traveler information systems, have improved customer satisfaction. For example, officials from 10 of these transit providers attributed this increase in customer satisfaction to their expanded use of traveler information systems, which have enabled them to provide their customers with improved access to travel information through such venues as websites, mobile phone applications, and electronic signs at transit stops. Additionally, 3 of the large and medium urban transit providers told us that customer satisfaction has improved with the deployment of electronic fare payment options. For example, officials from 1 provider told us that they believe that some of their customers want to be able to make all of their transactions using smartphones.

According to our survey results, small urban and rural transit providers rated the same top five benefits from using ITS as the 31 large and medium urban transit providers we interviewed (see table 4).
To reduce respondent burden, and because of potential difficulties in isolating the impacts of individual ITS technologies, our survey asked small urban and rural transit providers to report on the great or slight benefits of their collective ITS technologies. We are therefore unable to attribute the benefits they reported to individual technologies.

We also found from our interviews that the 31 large and medium urban transit providers achieved other types of benefits to a lesser extent, such as cost savings, increased operator satisfaction, increased ridership, greater staffing efficiencies, and reduced travel and wait times. Officials from the selected large and medium urban transit providers also described other types of benefits they have experienced from ITS technologies, such as enhanced communication capabilities between dispatchers and drivers, improved marketing, and the ability to keep drivers more accountable. For example, officials from one large urban transit provider told us they were able to use data from their electronic fare payment system to measure the impact that a recent marketing promotion had on ridership, and officials from another large urban transit provider told us they have used their adoption of some ITS technologies in their marketing campaigns to improve their image and attract new customers.

In addition, transit providers can use their technologies together, and officials said this combined use can increase the magnitude of the benefit they experience. For example, three of the large urban transit providers told us they use data from automatic passenger counters, which indicate how many passengers get on and off at particular transit stops, in tandem with electronic fare payment data, which can provide the exact travel patterns of passengers because it can track the locations where passengers board and exit vehicles and show how riders are transferring between service modes.
Such combinations of technologies can lead to precise information on ridership behavior that can contribute to benefits such as more efficient routing and scheduling. See figure 6 for an illustration of how other benefits may be derived from combinations of ITS technologies.

About half of the transit providers we interviewed and most small urban and rural transit providers surveyed found it difficult to measure, or have not measured, the benefits they experienced from ITS deployment. Officials from 11 of the large and medium urban transit providers we interviewed told us that it can be difficult to quantify the benefits of using ITS technologies for a number of reasons; for example, it may be difficult to identify a unit of measurement for enhanced safety or greater staff efficiency. Several of these officials also told us that it was difficult for them to attribute benefits exclusively to ITS deployment or to identify the specific ITS technology that created the benefit. For example, officials from three of the large and medium urban transit providers told us that ITS technologies are integrated—often installed at the same time—and may result in similar benefits, making it challenging for them to specify which ITS technology made the positive impact. In addition, factors other than ITS deployment may contribute to an observed benefit. For example, officials from two large urban transit providers told us they have experienced reduced travel times, but it would be difficult to determine whether the reduction was caused by transit signal priority or by other factors, such as passengers’ ability to pay their fares before boarding, city traffic, and the number of boarding passengers.
Also, officials from four of the large and medium urban transit providers we interviewed told us that their ridership levels have increased, but this could be a result of different ITS technologies, such as traveler information systems or electronic fare payment, or of other factors, such as improved service. Furthermore, we estimate that approximately 71 percent of the small urban and rural transit providers were not able to quantitatively measure any benefits received from ITS. Officials from five large and medium urban transit providers told us that they had not measured benefits from ITS deployment for a variety of reasons, such as that the deployment had occurred too recently to measure any benefits.

Despite these challenges, we found several examples in our interviews, survey, and review of recently published ITS studies where transit providers and researchers quantified some of the benefits of ITS deployment. Officials from several of the large and medium urban transit providers that we interviewed reported that they had quantified several benefits using a variety of methods, such as:

Increased customer satisfaction, through passenger surveys and reviews of customer service call rates;

Improvements in on-time performance and schedule adherence, through reviewing performance data; and

Cost savings, by estimating the value of conducting preventive maintenance or of staff reductions that resulted from deployed ITS technologies.

Officials from two of the large urban transit providers told us that they collaborated with university researchers to measure the benefits obtained from specific ITS deployments and found that traveler information systems had a positive impact on customer satisfaction and that transit signal priority resulted in reduced travel times. According to officials from these providers, the university researchers were able to isolate these technologies from some of the factors mentioned above that may also influence the experienced benefit.
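The on-time performance reviews that several providers described can be sketched in a few lines of code. The example below is a minimal illustration, not any provider's actual method: it assumes a common industry convention that a departure counts as "on time" if it is no more than 1 minute early and no more than 5 minutes late (agencies set their own thresholds), and the sample AVL records are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical AVL stop-event records: (scheduled departure, actual departure).
# A common industry convention counts a departure as "on time" if it is no more
# than 1 minute early and no more than 5 minutes late; thresholds vary by agency.
EARLY_TOLERANCE = timedelta(minutes=1)
LATE_TOLERANCE = timedelta(minutes=5)

def on_time_performance(stop_events):
    """Return the share of stop events that fall within the on-time window."""
    if not stop_events:
        return 0.0
    on_time = sum(
        1 for scheduled, actual in stop_events
        if -EARLY_TOLERANCE <= (actual - scheduled) <= LATE_TOLERANCE
    )
    return on_time / len(stop_events)

events = [
    (datetime(2016, 1, 4, 8, 0), datetime(2016, 1, 4, 8, 3)),    # 3 min late: on time
    (datetime(2016, 1, 4, 8, 15), datetime(2016, 1, 4, 8, 22)),  # 7 min late: not on time
    (datetime(2016, 1, 4, 8, 30), datetime(2016, 1, 4, 8, 28)),  # 2 min early: not on time
    (datetime(2016, 1, 4, 8, 45), datetime(2016, 1, 4, 8, 45)),  # on schedule: on time
]
print(f"On-time performance: {on_time_performance(events):.0%}")  # prints "On-time performance: 50%"
```

In practice, a provider would compute this measure from archived automatic vehicle location records, typically broken out by route and time of day, which is what makes before-and-after comparisons of an ITS deployment possible.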
Among the small urban and rural transit providers that reported taking steps to measure ITS benefits, 15 providers told us they analyzed either ridership or on-time performance data to document the impact of ITS deployment. We also found recent ITS studies that measured the benefits experienced by transit providers that had deployed ITS technologies such as traveler information systems and transit signal priority. For example, a 2011 study that analyzed the impact of implementing transit signal priority at 27 intersections along a corridor in Minneapolis found that transit signal priority reduced bus travel times by 3 to 6 percent.

Transit providers face a variety of challenges in securing funding for an ITS deployment. For example, officials from 12 of the 31 large and medium urban transit providers we interviewed told us that ITS projects may compete for funding with an agency’s state-of-good-repair needs. In 2013, FTA estimated that more than 40 percent of buses and 25 percent of rail transit assets were in marginal or poor condition. We have previously reported that transportation officials must identify priorities and make tradeoffs between funding projects that preserve or add new infrastructure and those that improve operations, like ITS. Officials from one large urban provider told us that technology has historically been a second-tier funding project next to capital funds for bridges, stations, and upkeep of infrastructure, and a medium urban provider stated that because transit providers have so many needs, it can be difficult to argue that acquiring new technology is a bigger need than, for example, new buses. Another large urban provider told us that every project within an agency has to obtain funds based on its merits, and while providing real-time information at every transit center in a city may be useful, such a project may rank lower among the agency’s priorities.
A 2014 JPO report identified securing funding as a challenge when ITS is competing for attention with “ribbon-cutting” projects that have higher visibility. According to officials from one large urban provider, it can be difficult for ITS to compete with other projects internally, in part because it can be hard to measure the return on investment from ITS. Officials from another large urban provider told us they have seen an increase in competition for funding between bus and rail needs, due to rail maintenance and costs associated with positive train control requirements. Officials from seven large and medium urban transit providers told us that competition for external funding with other transportation agencies can also be a challenge. For example, officials from a large urban provider told us that highway projects tend to receive more funding than public transit from federal programs such as the Congestion Mitigation and Air Quality Improvement Program (CMAQ).

Transit providers may also face obstacles in funding the operations and maintenance costs associated with ITS systems, as we reported in 2012. Officials from 16 large and medium urban providers we interviewed indicated that preparing for the future operations and maintenance costs related to ITS deployment is a key challenge. For example, officials from one large urban transit provider said that the maintenance and support contracts for ITS technologies are expensive, and that those expenses are more difficult to predict than the capital costs associated with implementing ITS. The officials said they also anticipate higher operational costs in the future based on the need for unlimited cellular data plans to collect real-time data from their vehicles.

Finally, limited opportunities to fund ITS are a challenge, according to officials from 20 of the 31 medium and large urban transit providers we interviewed.
As we reported in 2012, funding is an ongoing challenge in the transit community, as transportation agencies face difficult decisions regarding the allocation of their transportation funding. Many have faced severe revenue declines in recent years, restricting the availability of funds for transportation improvements. For example, officials from one medium urban provider said that the economic recession resulted in fewer local funds available for transit. Officials from a large urban provider told us that transit providers must plan and execute new software deployments effectively because there may not be funding available to correct a mistake for 5 to 10 years if the agency makes a poor decision in selecting a vendor or the software selected does not meet a business need.

In our survey of small urban and rural transit providers, we asked respondents to rate their experiences with a number of different challenges, including several similar funding-related challenges they have encountered with ITS (see table 5). Additionally, we estimate that 22 percent of small urban and rural providers experienced unexpected costs in deploying, operating, or maintaining ITS technology. Cited costs included increases in annual licensing and maintenance fees, the need for additional internet speed and storage, software upgrades, cellular service, and training.

The familiarity and comfort of a transit provider’s leadership and workforce with ITS technologies and their benefits—from a board of directors to bus operators—may have a significant impact on its ability to successfully deploy these technologies. According to a 2015 ITS America report on ITS deployment challenges, a transit provider’s board of directors may not be familiar with ITS technologies and the potential benefits they bring to operations and ridership.
Similarly, we reported in 2012 that leaders do not always place a priority on ITS, especially in the context of limited funding, and other infrastructure projects can take precedence. Officials from nine of the large and medium urban transit providers we interviewed reported that obtaining support for deploying technologies from leadership and decision-makers in the organization can be a challenge. Officials from a large urban provider, for example, told us that because ITS projects may not be as exciting as projects such as implementing a new rail line or replacing rail cars, staff may have to spend time explaining the value of an ITS project to board members. Officials from another urban provider told us that their general manager has been able to gain the support of their board members by taking them to ITS conferences so they can see firsthand what other transit providers are doing.

The introduction of transit ITS also has the potential to significantly alter the work and responsibilities of a transit provider’s workforce, including dispatchers and operators. Officials from 21 of the 31 large and medium urban transit providers indicated that the workforce may be reluctant to embrace new technology that changes their job responsibilities. For example, officials from a medium urban provider explained that bus operators were initially resistant to the installation of surveillance systems, but their apprehension subsided after they learned that the video footage could prove that they were not at fault for particular incidents that occurred on the bus. Officials from a large urban provider also told us that transit staff tends to include “lifers” who were hired with one expertise, and it can be difficult to train them to work with new technology, or the funding for that training may not be available.

ITS is a rapidly developing field that requires a specialized workforce familiar with emerging technologies.
Officials from 14 of the 31 providers we interviewed said that a lack of technical expertise in the workforce is a deployment challenge. For example, officials from one large urban provider said that it can be difficult to find applicants who have worked with certain proprietary ITS products, and as a result, they train new staff in-house with vendor support. The agency risks losing its investment if staff leave the organization or department. Additionally, officials from one medium urban provider told us that it can be difficult to attract and retain staff with technical expertise because their union rules are more protective of senior staff, and it is largely younger, more recent hires who can adapt to new technologies.

In our survey, we asked small urban and rural transit providers about the extent to which they encountered similar leadership and workforce challenges with ITS (see table 6).

The success of an ITS deployment may depend on effective coordination between several transportation stakeholders in a region, and we have previously found that ITS coordination across agencies is a challenge. Complex systems such as electronic fare payment and transit signal priority may involve multiple entities, including neighboring transit providers and cities, among others. Officials from seven large and medium urban transit providers considered coordinating ITS deployment across agencies to be a challenge. Officials from two large urban providers told us that obtaining buy-in from regional partners on their respective regional fare collection systems was difficult because of resource limitations and apprehension from smaller regional providers about a larger agency moving forward with decisions about the system without their input.
Officials from one large and one medium provider told us they have had difficulty implementing transit signal priority in their cities because state or local transportation authorities have opposed the system or have not upgraded the fiber-optic network so that traffic signals are connected.

Transit providers using federal funds typically purchase ITS technologies from technology vendors through the federal procurement process. However, officials at three large and medium providers said that it may take months or even years to procure technology, from the request for proposal to actual deployment, by which point the deployed technology is already old and could be replaced or upgraded. Officials from two large urban transit providers told us that FTA’s “Buy America” requirements—which require manufactured products used in a project receiving FTA funds to be produced in the U.S.—are also a factor in prolonging the procurement process, as agencies may have difficulty meeting the requirements.

Officials from 16 of the 31 large and medium urban providers told us they have experienced challenges in working with ITS vendors. Issues cited by providers we spoke with include (1) difficulty changing vendors after ITS has been deployed, (2) turnover among vendor staff during ITS projects, and (3) difficulty getting vendors to work with one another to integrate ITS amid concerns about making changes to proprietary systems. Officials from a large urban provider told us that even though contracts may make vendors responsible for integrating ITS technologies, the costs are passed on to the transit provider. According to DOT, including ITS standards such as Transit Communications Interface Profiles (TCIP) in procurements can help to integrate different technologies by establishing a common framework for the exchange of information between systems, and allows the transit provider to go beyond a single vendor when considering an upgrade or an addition to an existing system.
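DOT's point about interface standards can be illustrated with a simplified sketch. The message fields and mappings below are hypothetical and are not drawn from the actual TCIP schema; the sketch only shows the underlying idea, which is that when each vendor's system translates its records into one agreed-upon structure, downstream systems need a single parser and components can be swapped without custom integration work.

```python
import json

# Illustrative common message format for vehicle-location reports. These field
# names are hypothetical, not the actual TCIP schema; the sketch shows only the
# idea behind interface standards: every vendor maps its internal records onto
# one shared structure, so downstream systems need a single parser.
COMMON_FIELDS = ("vehicle_id", "route_id", "latitude", "longitude", "timestamp")

def to_common_format(vendor_record, field_map):
    """Translate a vendor-specific record into the shared message structure."""
    message = {common: vendor_record[native] for common, native in field_map.items()}
    missing = set(COMMON_FIELDS) - set(message)
    if missing:
        raise ValueError(f"record missing required fields: {sorted(missing)}")
    return json.dumps(message, sort_keys=True)

# Two vendors with different internal field names yield identical messages.
vendor_a = {"bus": "1204", "rt": "44", "lat": 44.98, "lon": -93.27,
            "ts": "2016-01-04T08:00:00Z"}
map_a = {"vehicle_id": "bus", "route_id": "rt", "latitude": "lat",
         "longitude": "lon", "timestamp": "ts"}

vendor_b = {"unit_no": "1204", "line": "44", "y": 44.98, "x": -93.27,
            "time": "2016-01-04T08:00:00Z"}
map_b = {"vehicle_id": "unit_no", "route_id": "line", "latitude": "y",
         "longitude": "x", "timestamp": "time"}

print(to_common_format(vendor_a, map_a) == to_common_format(vendor_b, map_b))  # prints "True"
```

Real interface standards such as TCIP define far richer message sets and transport rules, but the integration benefit follows this same pattern: agreeing on the message once, rather than building a custom bridge between every pair of vendor systems.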
However, an ITS consultant we spoke with said that developing standards to enable different vendors’ products to work together is one of the biggest challenges in the industry, as it requires vendors to share information, and implementing the interfaces between technologies may add to the cost of a project.

Our survey asked small urban and rural transit providers about the challenges they encountered related to working with ITS vendors. Although most small urban and rural providers did not rate limited vendor support as a particular challenge, 33 percent indicated that vendors offer ITS technology solutions that are not designed for the smaller scale of small urban and rural transit systems. (See table 7.)

Successful ITS deployment requires the capacity to reliably transmit data, such as vehicle location, between systems. We have previously reported that rural areas can have conditions that increase the cost of deploying broadband Internet infrastructure and services, such as remote areas with challenging terrain like mountains, which increase construction costs, or conditions that make it difficult to recoup deployment costs, such as relatively low population densities and incomes. Similarly, in their comments on our survey, three rural transit providers reported that geographic conditions in rural areas, such as mountains and large service areas, can limit connectivity. For example, one rural provider reported that “mapping technology” (e.g., GIS) may not recognize all of the rural roads in an area, which limits its usefulness for a demand-response service. Finally, an official from the National Rural Transit Assistance Program (RTAP) told us that infrastructure and access to data are inadequate in rural areas, and the lack of investment in making communications more reliable to reduce cell phone dead zones and connect drivers to dispatchers is making rural communities structurally isolated.
The JPO and FTA provide a variety of information resources related to transit ITS deployment. In addition to its responsibility for conducting ITS research, development, and testing, the JPO runs programs to support transportation providers in the deployment of ITS technologies. According to JPO officials, they design their programs to be applicable to any transportation mode, including highways, railroads, and transit, but they also develop resources that are transit-specific. To help inform them of the transit community’s resource needs, JPO officials told us that they coordinate with officials from FTA and transit industry groups, such as APTA and CTAA, and consider other information, such as relevant research and ITS deployment information from DOT’s ITS deployment survey of state and local transportation agencies. The following are some of the ITS information resources that are made available to the transportation community, including some that are more targeted to transit providers:

JPO Technical Assistance Programs: The JPO offers a number of technical assistance programs covering various ITS topics, including ITS standards implementation, systems engineering, and ITS architecture implementation, which JPO officials told us include the interests of the transit community.

ITS Professional Capacity Building Program (PCB Program): The JPO offers different ITS learning opportunities for transportation agencies, including transit providers, to ensure the effective implementation and operation of ITS. These opportunities include web-based and classroom training, webinars, online resources, peer-to-peer assistance, and the program’s Knowledge Resources Databases, which include past studies of ITS benefits, costs, and lessons learned; some of these resources are more focused on transit.
For example, the JPO provides online training modules on ITS transit standards, ITS transit fact sheets that describe transit-specific ITS technologies, and transit-targeted webinars, and has identified transit ITS research in its Knowledge Resources Databases. Additionally, according to JPO officials, the JPO coordinates jointly with FTA and APTA on an annual ITS Best Practices Workshop for transit providers, and with APTA and ITS America on the Passenger Transportation Systems and Services Committee of the Transportation Management Forum, an industry forum that focuses on transit ITS issues.

In addition to the ITS resources the JPO provides, FTA—and within FTA, RTAP, which promotes the delivery of transportation services in rural areas—also provides support to transit providers in their ITS deployment through research, testing, evaluation, training, and outreach. For example, the National Transit Institute (NTI), which provides training and educational programs for the transit industry and is funded by an FTA grant, offers transit ITS courses, such as an introductory workshop on ITS data management, training on using ITS standards when purchasing ITS technologies, and a course on rural technology adoption, which FTA officials told us includes ITS technologies. FTA officials also told us that FTA headquarters staff provide guidance to transit providers that contact them with questions about ITS deployment and have quarterly calls with their 10 regional offices to discuss ITS development in their regions. Some of RTAP’s activities include providing technical assistance and training materials to rural transit providers, surveying state RTAP managers, and participating in conferences and webinars, which an RTAP official told us include information on ITS deployment.
For example, RTAP developed technical guidance for moving data into the GTFS format to enable transit providers to adopt website trip planning, as well as online training that introduces ITS technologies for scheduling and dispatching to rural transit systems.

Few of the transit providers we interviewed and surveyed reported using DOT resources, particularly JPO resources. For example, officials from 10 of the 31 large and medium urban transit providers we interviewed told us they had used JPO resources. Additionally, based on our survey results, we estimate that about 2 percent of small urban and rural transit providers received some form of technical assistance from the JPO, such as for the planning, deployment, operation, or maintenance of ITS technologies. In addition to asking these providers questions about receiving JPO’s general technical assistance, we also asked them about resources received through the JPO’s PCB Program. Of the 233 small urban and rural transit providers who responded to our survey, 43 indicated they were aware of the training, technical assistance, and knowledge resources programs provided by the JPO PCB Program, and 24 reported using at least one of these resources. Consistent with this information, JPO officials told us the data they collect on use of PCB Program resources showed low participation rates by transit providers; they estimated that, based on historical participation, transit providers comprised 3 to 5 percent of the program’s users, or about 1,100 to 1,800 transit providers in fiscal year 2015. JPO officials told us that transit providers participate in certain PCB Program offerings more than others. For example, they said that approximately 6 to 10 percent of webinar attendees and archive users represent transit providers, and these figures may be higher depending on the topic of the webinar.

More transit providers in our review reported using FTA resources for ITS deployment than JPO resources.
For example, officials from 14 of the 31 large and medium urban transit providers told us that they had received FTA assistance. Most of these transit providers reported that FTA assistance was related to the administration of grants rather than technical deployment, or that they received assistance through NTI courses that officials from four providers said were focused on technology in general and may not have included information on ITS. Likewise, our survey found that small urban and rural transit providers had also used FTA resources more than JPO resources. For example, based on our survey results, we estimate that about 33 percent and 17 percent of small urban and rural transit providers had received some form of technical assistance from FTA and RTAP, respectively, such as in the planning, deployment, operation, or maintenance of ITS technologies.

The transit providers in our review reported relying mostly on non-federal resources for assistance with ITS deployment. For example, 22 of the 31 large and medium urban transit providers that we interviewed told us that they rely on peer or regional transit providers, and officials from 7 of these transit providers told us that other transit providers are their main source of ITS information. Officials from several of these transit providers told us they are part of peer networking groups where information about ITS is shared, such as a consortium of transit chief information officers and an organization of bus transit providers that compare performance and identify best practices. The large and medium urban transit providers also reported relying on industry groups and vendors for ITS information. For example, officials from 18 of the 31 selected large and medium transit providers said they rely on groups such as APTA, and officials from 7 of the 31 large and medium urban transit providers said that they rely on vendors and attend annual vendor user conferences.
Based on our survey results, we estimate that small urban and rural transit providers receive technical assistance—such as in the planning, deployment, operation, and maintenance of ITS technologies—from state DOTs (52 percent) and ITS vendors (48 percent) more frequently than from FTA (33 percent).

JPO and FTA officials told us that transit providers may not be using federal ITS resources because these providers may not have the ability to send staff to training opportunities and because the topics may not be of interest to them. For example, FTA officials told us they have not received much demand for NTI courses to include ITS-focused training because transit providers have high staff turnover and staff may not have ITS expertise. Although JPO officials told us they coordinate with various stakeholders to ensure their resources are responsive to transit community needs, officials from JPO, FTA, and RTAP told us that transit providers may not be using JPO resources because these resources are more focused on urban areas and highway transportation, including advanced connected vehicle technologies, which are not of current interest to the transit community. Officials from five of the large and medium urban transit providers and four public transit stakeholders we interviewed, including officials from two industry associations and one ITS consultant, told us that the information provided was either outdated, focused on highways rather than public transit, or otherwise did not match their needs.

JPO officials told us that they expect participation rates among the transportation community to increase with their publication of new ITS Standards Training Modules in late 2015, which they say are applicable to transit. JPO officials also told us that they solicit and review feedback on their PCB Program offerings.
For example, officials said they obtain feedback from users of their Knowledge Resources Databases through an online feedback link and, formally, twice a year through a webinar. They said that the transit providers that have provided feedback have reported that the information on lessons learned is valuable but would like more reports on the costs of technologies. These officials said that overall, given the complexity and number of different types of users of the databases, it is very difficult to meet everyone’s needs. Officials noted that they also collect live participation and on-demand use numbers for the PCB Program, but only began collecting more detailed user data for these programs in 2014. With more detailed data on each PCB Program offering, they said they hope to focus more in 2016 on the types of transportation stakeholders using their resources, to help them better understand the reach and effectiveness of these resources for various stakeholders, incorporate this information into the PCB Program’s strategic plan, and assign PCB Program resources as needed.

Although DOT, through JPO and FTA, offers a variety of different ITS resources, as discussed above, most of the transit providers in our review were unaware of the resources offered through the JPO and reported relying largely on resources other than those offered through DOT. In addition, officials from DOT and industry groups, as well as an ITS consultant and several transit agencies, generally said DOT’s ITS support programs and research may not reflect the needs of the transit community, particularly in rural areas, because, for example, these programs focus on urban areas, highway ITS, and connected vehicle technologies.

We and the National Academies’ Transportation Research Board (TRB) have previously identified a number of leading practices for successfully encouraging the adoption of new technologies that may improve the extent to which transit providers use DOT resources.
These leading practices include (1) choosing appropriate methods to promote the use of technology by the target audience and (2) monitoring technology adoption. Improving the availability and awareness of DOT resources is a key component of promoting the use of technology by the target audience and can enhance efforts to assist others in making decisions regarding the use of technologies. Also, monitoring technology adoption can provide lessons about efforts to encourage technology implementation. For example, according to a TRB report on promoting technology use, such monitoring of information is needed for managing technology promotion activities and for successfully assessing progress toward the goals of those activities.

Making users aware of ITS resources

JPO officials told us that they advertise PCB Program offerings through e-mail lists that include the major transit industry associations and FTA staff, who they say consistently share news and information with transit, state, regional, and local stakeholders, and the private sector. They also told us that many individual transit providers subscribe to their e-mails. There are other ways that transit providers hear about their offerings, according to JPO officials, including advertisements through ITS America-sponsored webinars and newsletters. When there are products for a specific audience, such as transit-specific offerings, JPO officials said they will make additional efforts to inform that audience of the products' availability. DOT officials also told us that FTA publishes information on PCB Program opportunities and JPO has a dedicated multimodal knowledge and technology task that includes outreach and marketing. Despite these efforts, officials from RTAP, two ITS consultants, and officials from 12 of the 31 large and medium urban transit providers we interviewed told us they were unaware of the resources offered through the JPO. 
We estimate that the majority of small urban and rural transit providers are also unaware of JPO's PCB Program offerings. Specifically, based on our survey, we estimate that about 75 percent of small urban and rural transit providers are unaware of JPO's PCB Program training, 85 percent are unaware of PCB Program technical assistance, and 85 percent are unaware of PCB Program knowledge resources information. The ITS consultants we spoke with told us that outside of federal resources, the level of support transit providers may receive for ITS deployment varies and that transit providers generally rely on vendors for technical support. Improving the availability and awareness of DOT resources could help transit providers take advantage of these resources. Without greater efforts from DOT to make the transit community more aware of federal resources, transit providers may be missing information that could help them make the most informed ITS deployment decisions. As described earlier, the JPO monitors the adoption of ITS technologies through the ITS deployment survey and uses this information, according to JPO officials, to understand the level of deployment and to help them make decisions on how to encourage the future deployment of ITS technologies through its information resources. However, the deployment survey has focused on technology adoption by transportation providers, including but not limited to transit, that are located in major metropolitan areas and does not collect deployment data from transit providers that primarily serve small urban and rural areas. JPO officials told us they have no plans to survey rural transit providers because these providers are generally very small and serve specialized functions often associated with federal programs, such as transporting the elderly to medical appointments. Surveying these providers would have required contacting each one individually to obtain information, which officials said was outside the project's scope. 
Additionally, officials told us the purpose of the survey is to document trends and to understand how ITS deployment, which generally occurs in larger cities, has progressed, although some of the information JPO collects may include deployment in small cities because providers in larger cities may also provide service in those areas. However, we estimate from our survey results that a majority of small urban and rural transit providers are using several ITS technologies—such as security systems, computer-aided dispatch, and automatic vehicle location—but have experienced challenges with identifying funding opportunities for ITS as well as with the operational costs associated with these technologies. Additionally, we estimate from our survey that small urban and rural providers have plans to continue deploying ITS in the next 5 years. While there may be some difficulties in including small urban and rural transit providers in DOT's deployment survey, there may be other ways to monitor their ITS deployment, including engaging with state DOTs to collect information on rural transit ITS. FTA considers state DOTs a useful resource in understanding rural transit issues because they distribute section 5311 rural grant funding to the state's subrecipient rural transit providers. Including the deployment of ITS by small urban and rural transit providers in its ITS monitoring efforts may provide the JPO with information to customize ITS resources to address the challenges faced by this transit community. Without greater efforts from DOT to tailor its resources to include the needs of small urban and rural transit, these transit providers may be missing information that could inform their ITS deployment decisions. As public transit ridership grows, ITS technologies provide opportunities for transit providers to improve planning, increase the efficiency of their operations, and make their services more attractive to riders. 
In the nation’s major metropolitan areas, transit providers are using ITS in new and innovative ways to improve fare collection and keep the public informed about the status of transit services, even in real time through smartphone applications. Although less prevalent, ITS technologies are helping transit providers in small communities and rural areas increase their scheduling capabilities, enhance the safety of their services, and improve their reporting and billing processes. However, in deploying ITS, the transit community faces challenges related to identifying ITS funding opportunities, paying for the operations and maintenance costs of technology, integrating systems, and managing the disruption that the introduction of new technologies can bring to the transit workforce. Although DOT provides a variety of information resources to promote and support the use of transit ITS technology including technical assistance and classroom and online training, few of the transit providers in our review were aware or making use of these resources, relying instead on information and support from peer transit providers, industry groups, ITS vendors, and state DOTs. DOT could improve the awareness and applicability of these resources through greater use of leading practices for successfully encouraging the adoption of new technologies. Specifically, DOT could better ensure that federal ITS resources reach their intended audience and help make informed ITS deployment decisions by developing a strategy to increase the transit community’s awareness of these resources. Additionally, including the deployment of ITS by small urban and rural transit providers in ITS monitoring efforts could help DOT customize ITS resources to address the challenges these providers face. 
Without greater efforts from DOT to make the transit community more aware of federal ITS resources and tailor these resources to the needs of smaller providers, transit providers may be missing information that could help them make the most informed ITS deployment decisions. To improve access to and awareness and applicability of ITS resources for ITS deployment, we recommend that the Secretary of Transportation direct the ITS JPO, in coordination with FTA, to take the following two actions: develop a strategy to raise awareness of JPO’s training, technical assistance, and knowledge resources for transit ITS deployment in the transit community, and include ITS adoption by small urban and rural transit providers in ITS monitoring efforts. We provided a draft of this report to DOT for review and comment. DOT concurred with both of our recommendations. In its comments, which we have reproduced in appendix II, DOT noted that it is leveraging its FAST Act authorities to further evaluate and validate its efforts to advance urban and rural ITS; for example, the JPO is developing a course catalog to describe its knowledge resource offerings and is considering developing a small urban and rural ITS transit survey as part of the 2019 ITS Deployment Survey. DOT also provided technical comments on the draft, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Transportation and the appropriate congressional committee. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or members of your staff have questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. 
This report addresses: (1) the extent to which selected transit providers in large urban areas use Intelligent Transportation Systems (ITS); (2) the extent to which transit providers in small urban and rural areas use ITS; (3) the benefits and challenges transit providers experience in deploying ITS; and (4) the extent to which transit providers have utilized the Department of Transportation’s (DOT) resources to promote and support ITS. To determine the extent of ITS use among transit providers in large urban areas, we reviewed 2013 data on national ITS deployment from DOT. On the basis of interviews with DOT officials and analysis of the 2013 ITS deployment data, we determined that the data were sufficiently reliable for our purposes. We conducted site visits to Pittsburgh, Pennsylvania; Portland and Eugene, Oregon; and Tampa and Orlando, Florida, to observe transit ITS deployments. We selected these site visits based on criteria including geographic dispersion and recommendations by JPO and FTA officials and industry stakeholders. During these site visits, we obtained documentation and interviewed officials from stakeholders in public transit decision-making including municipalities, academic researchers, state departments of transportation, and metropolitan planning organizations. We are not able to generalize our findings in these site visits to the whole country but used the other sources mentioned above to gain a more general perspective. We also conducted semi-structured interviews on the use of ITS with a judgmental sample of 31 transit providers serving large urban areas. We selected transit providers that were geographically dispersed across the country and represented the variety of transit modes offered in these areas. 
We separated the transit providers into two categories: medium urban: 13 providers serving urbanized areas with populations of 200,000–1 million, and large urban: 18 providers serving urbanized areas with populations of more than 1 million. Because we used a judgmental sample of transit providers, findings from these interviews cannot be generalized to a broader population. However, we determined that the selection of these transit providers was appropriate for our design and objectives and that these interviews would generate valid and reliable evidence to support our work. We also interviewed officials from related industry associations such as the American Public Transportation Association (APTA), Community Transportation Association of America (CTAA), and Intelligent Transportation Society of America (ITS America), and representatives from two ITS vendors and four independent ITS consultants. We selected the ITS vendors based on interviews with several transit providers in large urban areas that utilized their products, and the consultants based on a review of published transit ITS reports. To determine the extent of ITS use among transit providers in small urban and rural areas, we conducted a web-based survey of transit providers from November through December 2015. Results of this survey and the survey instrument have been published in GAO-16-639SP, an E-supplement to GAO-16-638, and can be found at the GAO website. We constructed the population of transit providers for our survey sample using reporting year 2013 data for recipients of Section 5307 FTA urbanized area formula grants and sub-recipients of Section 5311 FTA non-urbanized area formula grants in FTA's National Transit Database (NTD). Using data from the NTD's urban module, we determined that there were 314 providers that primarily served small urban areas. Using the NTD's rural module, we identified 1,310 providers that primarily serve rural areas. 
We excluded from this population transit providers that reported as urban recipients, rural recipients reporting separately, intercity bus providers, and 7 agencies that were also included in the urban module. To target the population of rural providers to those that are most likely using ITS, we also excluded transit providers with fleets of 10 or fewer vehicles. The outcome was a survey sample frame of 314 small urban providers and 582 rural providers. We selected a stratified random sample of 312 transit providers: 146 small urban providers and 166 rural providers. We obtained contact information for the rural transit providers from CTAA. During our data collection, we identified two organizations that were not currently providing transit service and removed them from our sample as out of scope. We obtained completed questionnaires from 233 respondents, or about a 75 percent response rate. The survey results can be generalized to the target population of 314 transit providers that serve small urban areas and 582 transit providers with more than 10 vehicles that serve rural areas. And, as noted above, we are issuing an electronic supplement to this report that shows a more complete tabulation of our survey results. We developed a questionnaire to obtain information about transit providers’ use of ITS technologies. On November 23, 2015, we sent an initial e-mail alerting agency contacts to the upcoming web-based survey, and a week later, the web-based survey was also delivered to recipients via e-mail message. Our e-mail message described the purpose and topic of the survey, and encouraged the respondent to consult with other individuals in the provider’s organization if that would increase the accuracy of their responses. 
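The sample-design and response-rate figures above reduce to simple arithmetic. As a check, the counts can be worked through as follows (a minimal sketch; all figures are taken from the text above):

```python
# Survey frame and sample arithmetic from the methodology above.
frame_small_urban = 314   # small urban providers in the frame
frame_rural = 582         # rural providers with more than 10 vehicles

sample_small_urban = 146
sample_rural = 166
sample_total = sample_small_urban + sample_rural   # 312 providers selected

out_of_scope = 2          # organizations no longer providing transit service
eligible = sample_total - out_of_scope             # 310 in-scope cases

responses = 233
response_rate = responses / eligible               # about 75 percent

print(f"Frame size: {frame_small_urban + frame_rural}")
print(f"Eligible sample: {eligible}")
print(f"Response rate: {response_rate:.1%}")
```

Dividing the 233 completed questionnaires by the 310 in-scope sampled providers reproduces the "about a 75 percent response rate" reported above.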
The web-based survey requested information on types of ITS technologies deployed and any reasons for not deploying a technology; costs, benefits, and challenges associated with ITS; sources of funding and technical support; and federal resources used. To help increase our response rate, we sent a reminder e-mail on December 14 and called agency officials. The survey was available to respondents from November 30 through December 18, 2015. To pretest the questionnaire, we conducted cognitive interviews with officials from 7 transit providers with knowledge about their organization's use of ITS. Each pretest was conducted on the phone. We selected pretest respondents to represent small urban and rural areas in different parts of the country. We conducted these pretests to determine if the questions were burdensome, understandable, and measured what we intended, and to ensure we could identify an appropriate individual who was knowledgeable about ITS use to respond to the survey. On the basis of feedback from the pretests and expert review, we modified the questions as appropriate. To produce the estimates from this survey, answers from each responding case were weighted in the analysis to account statistically for all the members of the population, including those who were not selected or did not respond to the survey. Estimates produced from this sample are generalizable to the population of transit providers that served small urban areas and transit providers with more than 10 vehicles that served rural areas as reported to the FTA's National Transit Database in reporting year 2013. Because our results are based on a sample and different samples could provide different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (for example, plus or minus 10 percentage points). 
We are 95 percent confident that each of the confidence intervals in this report includes the true value in the study population. Unless we note otherwise, percentage estimates based on all transit agencies have 95 percent confidence intervals of within plus or minus 10 percentage points. Confidence intervals for survey estimates are presented in our supplemental survey product (GAO-16-639SP). In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as non-sampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages for the purpose of minimizing such non-sampling errors. We took the following steps to increase the response rate: developing the questionnaire, pre-testing the questionnaires with small urban and rural transit providers, conducting multiple follow-ups to identify the appropriate contact at some organizations and to encourage responses to the survey, and contacting respondents to clarify unclear responses. To identify the benefits and challenges that transit providers in large urban, small urban, and rural areas are experiencing from deploying ITS, we interviewed JPO and FTA officials, industry associations, officials from public transit stakeholders in our site visits, and 31 transit providers in large urban areas, surveyed transit providers in small urban and rural areas, and reviewed published research on ITS. We analyzed the interviews, survey results, and published research to identify commonly cited benefits and challenges. 
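The weighting and confidence-interval approach described above can be illustrated with a standard stratified estimator for a proportion. This is a generic sketch, not GAO's actual estimation code: the stratum frame sizes echo the methodology above, but the respondent split and observed proportions are hypothetical values chosen only for illustration.

```python
import math

# Stratified estimate of a proportion with a 95% confidence interval.
# Frame sizes (N_h) come from the methodology above; the respondent
# counts (n_h) and observed proportions (p_h) are hypothetical.
strata = [
    # (N_h, n_h, p_h): frame size, respondents, observed proportion
    (314, 110, 0.60),  # small urban stratum (n_h and p_h illustrative)
    (582, 123, 0.45),  # rural stratum, providers with > 10 vehicles
]

N = sum(N_h for N_h, _, _ in strata)  # total frame size

# Weight each stratum's observed proportion by its share of the frame,
# which accounts statistically for providers not selected.
p_hat = sum((N_h / N) * p_h for N_h, n_h, p_h in strata)

# Stratum variances with a finite population correction (1 - n_h/N_h).
var = sum(
    (N_h / N) ** 2 * (1 - n_h / N_h) * p_h * (1 - p_h) / (n_h - 1)
    for N_h, n_h, p_h in strata
)

half_width = 1.96 * math.sqrt(var)  # 95% confidence interval half-width
print(f"Estimate: {p_hat:.1%} +/- {half_width:.1%}")
```

With these illustrative inputs, the half-width comes out well under 10 percentage points, consistent with the report's statement that its percentage estimates generally fall within plus or minus 10 percentage points.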
To determine how DOT promotes and supports the use of ITS technologies, we interviewed officials from the JPO and FTA about the federal resources and assistance available to support deployment and how transit providers use these resources. We reviewed the JPO's program and strategic planning documents, including documents related to the Professional Capacity Building Program. In addition, we reviewed the JPO's efforts to promote and support ITS technologies, including various studies, guidance, websites, and the JPO's ITS databases. We determined the extent to which transit providers are utilizing DOT's ITS resources by asking transit provider officials, in interviews and through the survey, whether they were aware of the training, technical assistance, and knowledge resources programs offered by the JPO, whether they had used these programs, and how helpful they had found them to be. In prior work, we and the National Academies' Transportation Research Board identified leading practices for successfully encouraging the adoption of new technologies. We conducted this performance audit from March 2015 through June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix III: Public Transit Stakeholders GAO Interviewed

Federal agencies
Federal Transit Administration
U.S. Department of Transportation, Office of the Assistant Secretary for Research and Technology, Intelligent Transportation Systems Joint Program Office

Academic institutions
Carnegie Mellon University
North Dakota State University, Upper Great Plains Transportation Institute, Small Urban and Rural Transit Center
University of South Florida, Center for Urban Transportation Research

Industry associations
American Public Transportation Association
Community Transportation Association of America
Intelligent Transportation Society of America

ITS consultants
Brendon Hemily
C.R. Peterson Consulting, LLC
Trillium Solutions

ITS vendors
Clever Devices
RouteMatch Software

Metropolitan planning organizations
Lane Council of Governments
Southwestern Pennsylvania Commission

Municipalities
City of Orlando
City of Portland

Public transit providers
Antelope Valley Transit Authority
Bi-State Development Agency (Metro Transit—St. Louis)
Casco Bay Island Transit District
Central Florida Regional Transportation Authority (LYNX)

In addition to the contact named above, the following individuals made important contributions to this report: Susan Zimmerman (Assistant Director), James Ashley, Namita Bhatia-Sabharwal, Anne Doré, Heather MacLeod, Cheryl Peterson, Justin Reed, Malika Rice, Michelle Weathers, and Elizabeth Wood.
Public transit providers are adopting electronics and information-processing applications called ITS to help improve operations and service. ITS technologies can play an important role in facilitating multimodal choices in a rapidly changing transportation environment. This report describes: (1) the extent to which selected transit providers in large urbanized areas are using ITS, (2) the extent to which transit providers in small urban and rural areas are using ITS, (3) the benefits and challenges these transit providers experience in deploying ITS, and (4) the extent to which transit providers have utilized DOT resources to promote and support ITS. GAO reviewed DOT's ITS deployment data and ITS studies; interviewed DOT officials and public transit stakeholders; conducted three site visits, selected based on geographic dispersion and DOT recommendations; interviewed 31 transit providers serving large urbanized areas selected for geographic dispersion and use of multiple transit modes; and conducted a national survey of small urban and rural transit providers to obtain information on ITS technologies used. Selected large and medium urban transit providers have deployed most Intelligent Transportation Systems (ITS) technologies, such as automatic vehicle location (AVL) and electronic fare payment. Most of these providers reported sharing data collected from ITS with the public or regional transportation providers to enable technology innovations and improve regional planning. Large and medium urban transit providers have also deployed advanced types of ITS technologies, such as smart phone applications to provide passengers with travel information and mobile ticketing. GAO estimates that small urban and rural transit providers are using security systems, computer-aided dispatch, AVL, and geographic information systems to, among other things, monitor safety and security and improve record-keeping and billing capabilities. 
However, most small urban and rural transit providers are not using other ITS technologies—such as automatic passenger counters or electronic fare payment—due to the cost of the technologies or because there is no perceived need. Transit providers GAO surveyed and interviewed reported various benefits from ITS including improved scheduling and routing, on-time performance and schedule adherence, and customer satisfaction. In addition, many large and medium urban transit providers reported that using combinations of technologies can increase benefits. By using technologies such as AVL and electronic fare payment together, for example, transit providers can obtain more precise ridership information, which can further improve their planning. However, transit providers GAO interviewed and surveyed noted that it can be difficult to quantify the benefits of using ITS technologies because, as reported by large and medium urban providers, it may be difficult to identify a unit of measurement, such as for greater staff efficiency, or attribute benefits to either ITS deployment or a specific technology. Transit providers also face an assortment of deployment challenges, including competing for funding internally with state-of-good-repair needs, reluctance from the transit workforce and leadership to embrace ITS technologies, coordinating deployment across regional agencies, and integrating technologies purchased from different vendors. The Department of Transportation (DOT) offers a variety of information resources to support ITS deployment, but few of the transit providers interviewed or surveyed reported using these resources. DOT officials, selected large and medium transit providers, and other public transit stakeholders told GAO that the transit community may not be using these resources because transit providers lack sufficient staff and the information provided may not reflect the transit community's needs. 
Additionally, DOT does not include small urban and rural transit providers in its ITS deployment survey, a tool officials said is used in designing information resources. DOT could improve the awareness and applicability of ITS resources by developing a strategy to raise awareness of DOT's resources available to the transit community and monitoring the adoption of ITS by transit providers in small urban and rural areas. Without greater efforts from DOT to make the transit community more aware of federal ITS resources and to tailor these resources to the needs of smaller providers, transit providers may be missing information that could help them make the most informed ITS deployment decisions. GAO recommends that the Secretary of Transportation develop a strategy to raise awareness of federal resources for ITS deployment in the transit community and include ITS adoption by small urban and rural transit providers in ITS-monitoring efforts. DOT agreed with the recommendations and provided technical comments, which GAO incorporated.
All international mail and packages entering the United States through the U.S. Postal Service and private carriers are subject to potential CBP inspection at the 14 USPS international mail facilities and 29 express consignment carrier facilities operated by private carriers located around the country. CBP inspectors can target certain packages for inspection or randomly select packages for inspection. CBP inspects for, among other things, illegally imported controlled substances, contraband, and items— like personal shipments of noncontrolled prescription drugs—that may be inadmissible. CBP inspections can include examining the outer envelope of the package, using X-ray detectors, or opening the package to physically inspect the contents. Each year the international mail and carrier facilities process hundreds of millions of pieces of mail and packages. Among these items are prescription drugs ordered by consumers over the Internet, the importation of which is prohibited under current law, with few exceptions. Two acts—the Federal Food, Drug, and Cosmetic Act and the Controlled Substances Import and Export Act—specifically regulate the importation of prescription drugs into the United States. Under the Federal Food, Drug, and Cosmetic Act, as amended, FDA is responsible for ensuring the safety, effectiveness, and quality of domestic and imported drugs and may refuse to admit into the United States any drug that appears to be adulterated, misbranded, or unapproved for the U.S. market as defined in the act. Under the act and implementing regulations, this includes foreign versions of FDA-approved drugs if, for example, neither the foreign manufacturing facility nor the manufacturing methods and controls were reviewed by FDA for compliance with U.S. statutory and regulatory standards. The act also prohibits reimportation of a prescription drug manufactured in the United States by anyone other than the original manufacturer of that drug. 
According to FDA, prescription drugs imported by individual consumers typically fall into one of these prohibited categories. However, FDA has established a policy that allows local FDA officials to use their discretion to not interdict personal prescription drug imports that do not contain controlled substances under specified circumstances, such as importing a small quantity for treatment of a serious condition, generally not more than a 90-day supply of a drug not available domestically. The importation of prohibited foreign versions of prescription drugs like Viagra (an erectile dysfunction drug) or Propecia (a hair loss drug), for example, would not qualify under the personal importation policy because approved versions are readily available in the United States. In addition, the Controlled Substances Import and Export Act, among other things, generally prohibits personal importation of those prescription drugs that are controlled substances, such as Valium. Under the act, shipment of controlled substances to a purchaser in the United States from another country is only permitted if the purchaser is registered with DEA as an importer and is in compliance with the Controlled Substances Import and Export Act and DEA requirements. As outlined in the act, it would be difficult, if not impossible, for an individual consumer seeking to import a controlled substance for personal use to meet the standards for registration and related requirements. CBP is to seize illegally imported controlled substances it detects on behalf of DEA. CBP may take steps to destroy the seized and forfeited substance or turn the seized substance over to other federal law enforcement agencies for further investigation. CBP is to turn over packages suspected of containing prescription drugs that are not controlled substances to FDA. 
FDA investigators may inspect such packages and hold those that appear to be adulterated, misbranded, or unapproved, but must notify the addressee and allow that individual the opportunity to present evidence as to why the drug should be admitted into the United States. If the addressee does not provide evidence that overcomes the appearance of inadmissibility, then the item is refused admission and returned to the sender. Investigations that may arise from CBP and FDA inspections may fall within the jurisdiction of other federal agencies. DEA, ICE, and FDA investigators have related law enforcement responsibilities and may engage in investigations stemming from the discovery of illegally imported prescription drugs. Although USPS’s Inspection Service does not have the authority, without a federal search warrant, to open packages suspected of containing illegal drugs, it may collaborate with other federal agencies in certain investigations. Also, ONDCP is responsible for formulating the nation’s drug control strategy and has general authority for addressing policy issues concerning the illegal distribution of controlled substances. ONDCP’s authority does not, however, include prescription drugs that are not controlled substances. My statement will now focus on what the available data show about the volume and safety of prescription drugs imported into the United States for personal use through the international mail and private carriers. In our report, we state that CBP and FDA do not systematically collect data on the volume of prescription drugs and controlled substances they encounter at the mail and carrier facilities. CBP and FDA officials have said that in recent years they have observed increasingly more packages containing prescription drugs being imported through the mail facilities, but neither agency has complete data to estimate the volume of importation. 
FDA officials told us that CBP and FDA currently have no mechanism for keeping an accurate count of the volume of illegally imported drugs, because of the large volume of packages arriving daily through the international mail and carriers. Furthermore, FDA officials told us that FDA did not routinely track items that contained prescription drugs potentially prohibited for import that they released and returned for delivery to the recipient. They said that FDA had begun gathering information from the field on the imported packages it handles, but as of July 2005 this effort was still being refined. We also report that CBP and FDA, in coordination with other federal agencies, have conducted special operations targeted to identify and tally the packages containing prescription drugs imported through a particular facility during a certain time period and to generate information for possible investigation. The limited data collected have shown wide variations in volume. For example, at one mail facility, CBP officials estimated that approximately 3,300 packages containing prescription drugs entered the facility in 1 week, while at another mail facility, CBP officials estimated that 4,300 such packages entered in 1 day. While these data provide some insight regarding the number of packages containing prescription drugs at a selected mail facility during a certain time period, the data are not representative of other time periods or projectable to other facilities. Our report also notes that during congressional hearings over the past 4 years, FDA officials, among others, have presented estimates of the volume of prescription drugs imported into the United States through mail and express carrier facilities ranging from 2 million to 20 million packages in a given year. Each estimate has its limitations; for example, some estimates were extrapolations from data gathered at a single mail facility. 
More recently, a December 2004 HHS report stated that approximately 10 million packages containing prescription drugs enter the United States—nearly 5 million packages from Canada and another 5 million mail packages from other countries. However, these estimates also have limitations because they are based in part on extrapolations from limited FDA observations at international mail branch facilities. Without an accurate estimate of the volume of importation of prescription drugs, federal agencies cannot determine the full scope of the importation issue. Regarding the safety of prescription drug imports, we report that FDA officials have said that they cannot provide assurance to the public regarding the safety and quality of drugs purchased from foreign sources, which are largely outside of their regulatory system. FDA officials also said that consumers who purchase prescription drugs from foreign-based Internet pharmacies are at risk of not fully knowing the safety or quality of what they are importing. While some consumers may purchase genuine products, others may unknowingly purchase counterfeit products, expired drugs, or drugs that were improperly manufactured. In addition, we report on CBP's and FDA's limited analysis of the imported prescription drugs identified during special operations. The results of these efforts have raised questions about the safety of some of the drugs. For example, during a special operation in 2003 to identify and assess counterfeit and potentially unsafe imported drugs at four mail facilities, CBP and FDA inspected 1,153 packages that contained prescription drugs. According to a CBP report, 1,019, or 88 percent, of the imported drug products were in violation of the Federal Food, Drug, and Cosmetic Act or the Controlled Substances Import and Export Act. 
Consistent with these concerns, we report on the findings of our June 2004 report, in which we identified several problems associated with the handling, FDA approval status, and authenticity of 21 prescription drug samples we purchased from Internet pharmacies located in several foreign countries—Argentina, Costa Rica, Fiji, Mexico, India, Pakistan, the Philippines, Spain, Thailand, and Turkey. Our work showed that most of the drugs, all of which we received via consignment carrier shipment or the U.S. mail, were unapproved for the U.S. market because, for example, the labeling or the foreign manufacturing facility, methods, and controls were not reviewed by FDA. During the site visits undertaken for our current report, we observed that some prescription drugs imported through the mail and carrier facilities were not shipped in protective packages and that some also lacked product identification, directions for use, or warning labels. Furthermore, for some drugs, the origin and contents could not be immediately determined by CBP or FDA inspection. Our report also noted that federal agencies and professional medical and pharmacy associations have found that consumers of any age can obtain highly addictive controlled substances from Internet pharmacies, sometimes without a prescription or consultation with a physician. Both DEA and ONDCP have found that the easy availability of controlled substances directly to consumers over the Internet has significant implications for public health, given the opportunities for misuse and abuse of these addictive drugs. In addition, the American Medical Association recently testified that Internet pharmacies that offer controlled substances without requiring a prescription or consultation with a physician contribute to the growing availability and increased use of addictive drugs for nonmedical purposes. 
My statement will now focus on the procedures and practices used at selected facilities to inspect and interdict prescription drugs unapproved for import. With regard to these procedures and practices, our report cites our July 2004 testimony in which we reported that CBP and FDA officials at selected mail and carrier facilities used different practices and procedures to inspect and interdict packages that contain prescription drugs. While each of the facilities we visited targeted packages for inspection, the basis for targeting varied, generally reflecting several factors, such as the inspector's intuition and experience, whether the packages originated from suspect countries or companies, or whether they were shipments to individuals. At that time, we also reported that while some targeted packages were inspected and interdicted, many others either were not inspected and were released to the addressees or were released after being held for inspection. FDA officials said that because they were unable to process the volume of targeted packages, they released tens of thousands of packages containing drug products that may violate current prohibitions and could have posed a health risk to consumers. In August 2004, FDA issued standard operating procedures outlining how FDA personnel are to prioritize packages for inspection, inspect the packages, and make admissibility determinations of FDA-regulated pharmaceuticals imported into the United States via international mail. Under the procedures, CBP personnel are to forward to FDA personnel any mail items, from FDA's national list of targeted countries and based on local criteria, that appear to contain prescription drugs. Deviations from the procedures must be requested by facility personnel and approved by FDA management. According to FDA officials, these procedures have been adopted nationwide. 
While the new procedures should encourage processing uniformity across facilities, many packages that contain prescription drugs are still released. Specifically, according to the procedures, all packages forwarded by CBP but not processed by FDA inspectors at the end of each workday are to be returned for delivery by USPS to the recipient. However, according to the procedures, packages considered to represent a significant and immediate health hazard may be held over to the next day for processing. The procedures were not intended to deal with the volume, nor were they designed to do so. While the packages that are not targeted are released without inspection, so are many packages that are targeted and referred to FDA personnel. At one facility, FDA officials estimated that each week they return without inspection 9,000 to 10,000 of the packages referred to them by CBP. They said these packages were given to USPS officials for delivery to the addressee. Regarding the procedures and practices used to inspect and interdict certain controlled substances, our report cites our July 2004 testimony (Prohibitions on Personal Importation, GAO-04-839T, Washington, D.C.: July 22, 2004) in which we reported that CBP officials were to seize the illegally imported controlled substances they detected. However, at that time, some illegally imported controlled substances were not seized by CBP. For example, CBP officials at one mail facility told us that they experienced an increased volume of controlled substances and, in several months, had accumulated a backlog of over 40,700 packages containing schedule IV substances. According to our report, CBP field personnel said they did not have the resources to seize all the controlled substances they detected. Officials said that the seizure process can be time-consuming, taking approximately 1 hour for each package containing controlled substances. 
According to CBP officials, when an item is seized, the inspector records the contents of each package—including the type of drugs and the number of pills or vials in each package. If the substance is a schedule I or II controlled substance, it is to be summarily forfeited without notice after seizure. However, if it is a schedule III through V controlled substance, CBP officials are to notify the addressee that the package was seized and give the addressee an opportunity to contest the forfeiture by providing evidence of the package's admissibility and trying to claim the package at a forfeiture hearing. Our report goes on to say that to address the seizure backlog and give CBP staff more flexibility in handling controlled substances, in September 2004, CBP implemented a national policy for processing controlled substances, schedule III through V, imported through the mail and carrier facilities. According to the policy, packages containing controlled substances should no longer be transferred to FDA for disposition, released to the addressee, or returned to the sender. CBP field personnel are to hold the packages containing controlled substances in schedules III through V as unclaimed or abandoned property as an alternative to a seizure. According to a CBP headquarters official, processing a controlled substance as abandoned property is a less arduous process because it requires that less information be entered into a database than if the same property were to be seized. Once CBP deems the controlled substance to be unclaimed property, the addressee is notified that he or she has the option to voluntarily abandon the package or have the package seized. If the addressee voluntarily abandons the package or does not respond to the notification letter within 30 days, the package will be eligible for immediate destruction. If the addressee chooses to have the package seized, there would be an opportunity to contest the forfeiture and claim the package, as described above. 
CBP also instituted an on-site data collection system at international mail and express carrier facilities to record schedule III through V controlled substances interdicted using this new process. CBP reported that from September 2004 to the end of June 2005, a total of approximately 61,700 packages of these substances were interdicted, about 61,500 at international mail facilities and 200 at express carrier facilities. We report that, generally, CBP officials we interviewed told us that the recent policy improved their ability to record information about and destroy schedule III through V controlled substances they detected. A CBP official at one facility said that the abandonment process is faster than the seizure process, as it requires much less paperwork. A CBP headquarters official told us that the abandonment process takes an inspector at a mail facility about 1 minute per package. He added that the new policy was intended to eliminate the backlog of schedule III through V controlled substances at the facilities. However, we also report that CBP officials in the field and in headquarters said that they do not know whether the new policy has had any impact on the volume of controlled substances illegally entering the country that reach the intended recipient. Generally, CBP officials do not know how many packages containing controlled substances go undetected and are released. For example, CBP officials at one facility told us that they used historical data to determine the countries that are likely sources for controlled substances and target the mail from those countries. They do not know the volume of controlled substances contained in the mail from the nontargeted countries. A CBP official at another facility said that he believed the volume of controlled substances imported through the facility had begun to decrease, but he had no data to support his claim. 
According to our report, packages containing prescription drugs can also bypass FDA inspection at carrier facilities because of inaccurate information about the contents of the package. Unlike packages at mail facilities, packages arriving at carrier facilities we visited are preceded by manifests, which provide information from the shipper, including a description of the packages’ contents. While the shipments are en route, CBP and FDA officials are to review this information electronically and select packages they would like to inspect when the shipment arrives. FDA officials at two carrier facilities we visited told us they review the information for packages described as prescription drugs or with a related term, such as pharmaceuticals or medicine. CBP and FDA officials told us that there are no assurances that the shipper’s description of the contents is accurate. The FDA officials at the carrier facilities we visited told us that if a package contains a prescription drug but is inaccurately described, it would not likely be inspected by FDA personnel. My statement will now focus on the three factors that our report identified as affecting federal agency efforts to enforce the prohibition on prescription drug importation for personal use through international mail and carrier facilities. In our report, we state that the current volume of prescription drug imports, coupled with competing agency priorities, has strained federal inspection and interdiction resources allocated to the mail facilities. CBP and FDA officials told us that the recent increase in American consumers ordering drugs over the Internet has significantly contributed to increased importation of these drugs through the international mail. CBP officials said that they are able to inspect only a fraction of the large number of mail and packages shipped internationally. 
FDA officials have said that the large volume of imports has overwhelmed the resources they have allocated to the mail facilities and that they have little assurance that the available field personnel are able to inspect all the packages containing prescription drugs illegally imported for personal use through the mail. In addition, agencies have multiple priorities, which can affect the resources they are able to allocate to the mail and carrier facilities. For example, FDA's multiple areas of responsibility include, among other things, regulating new drug product approvals, the labeling and manufacturing standards for existing drug products, and the safety of a majority of food commodities and cosmetics, all of which, according to FDA officials, serve FDA's mission of protecting the public health while facilitating the flow of legitimate trade. CBP's primary mission is preventing terrorists and terrorist weapons from entering the United States while also facilitating the flow of legitimate trade and travel. DEA's multiple priorities include interdicting illicit drugs such as heroin or cocaine, investigating doctors and prescription forgers, and pursuing hijackings of drug shipments. We also report on HHS and CBP assessments of resources needed to address the volume of illegally imported drugs coming into the country. In a 2004 report on the importation of prescription drugs, the Secretary of HHS stated that substantial resources are needed to prevent the increasing volume of packages containing small quantities of drugs from entering the country. The Secretary found that, despite agency efforts, including those with CBP, FDA currently does not have sufficient resources to ensure adequate inspection of the current volume of personal shipments of prescription drugs entering the United States. CBP is also in the early stages of assessing the resources it needs at the mail facilities to address the volume of controlled substance imports. 
However, CBP officials admit that an assessment of resource needs is difficult because they do not know the scope of the problem and the impact of the new procedures. A CBP official told us that CBP has a statistician working on developing estimates of the volume of drugs entering mail facilities; however, he was uncertain whether this effort would be successful or useful for allocating resources. Likewise, in March 2005, FDA officials told us that they had begun to gather information from the field on the imported packages FDA handles, such as the number of packages held, reviewed, and forwarded for further investigation. However, as of July 2005, they could not provide any data because, according to the officials, this effort was new and still being refined. According to our report, Internet pharmacies, particularly foreign-based sites, which operate outside the U.S. regulatory system, pose a challenge for regulators and law enforcement agencies. In an earlier 2004 report, we described how, traditionally, the practice of pharmacy in the United States is regulated by state boards of pharmacy, which license pharmacists and pharmacies and establish and enforce standards. To legally dispense a prescription drug, a licensed pharmacist working in a licensed pharmacy must be presented a valid prescription from a licensed health care professional. The Internet allows online pharmacies and physicians to anonymously reach across state and national borders to prescribe, sell, and dispense prescription drugs without complying with state requirements or federal regulations regarding imports. In addition, we report that the nature of the Internet has challenged U.S. law enforcement agencies investigating Internet pharmacies, particularly foreign-based sites. Internet sites can easily be installed, moved, or removed in a short period of time. 
This fluidity makes it difficult for law enforcement agencies to identify, track, monitor, or shut down those sites that operate illegally. Moreover, investigations can be more difficult when they involve foreign-based Internet sites, whose operators are outside of U.S. boundaries and may be in countries that have different drug approval and marketing approaches than the United States has. For example, according to DEA officials, drug laws and regulations regarding controlled substances vary widely by country. DEA officials told us their enforcement efforts with regard to imported controlled substances are hampered by the different drug laws in foreign countries. Internet pharmacy sites can be based in countries where the marketing and distribution of certain controlled substances are legal. Steroids, for example, sold over the Internet may be legal in the foreign country in which the online pharmacy is located. Federal agencies can also face challenges when working with foreign governments to share information or develop mechanisms for cooperative law enforcement. For example, FDA officials have testified that they possess limited investigatory jurisdiction over sellers in foreign countries and have had difficulty enforcing the law prohibiting prescription drug importation when foreign sellers are involved. A DEA official told us that it was difficult to convince some foreign governments that the illegal sale of prescription drugs over the Internet is a global problem and not restricted to the United States. In our report, we also note that FDA and DEA officials told us that they work with commercial firms, including express carriers, credit card organizations, Internet providers, and online businesses to obtain information to investigate foreign pharmacies, but these investigations are complicated by legal and practical considerations. 
FDA and DEA officials said that the companies have been willing to work with government agencies to stop transactions involving prescription drugs prohibited from import, and some have alerted federal officials when suspicious activity is detected. However, officials also identified legal and practical considerations that complicate obtaining information from organizations such as credit card organizations. For example, according to FDA, DEA, and ICE officials, credit card organizations and banks and other financial institutions that issue credit cards will not provide to the agencies information about the parties involved in a transaction without a subpoena. Representatives from the credit card companies we contacted explained that these issues generally are resolved if the agency issues a properly authorized subpoena for the desired information. We also report that FDA headquarters officials said that packages that contain prescription drugs for personal use that appear to be prohibited from import pose a challenge to their enforcement efforts because these packages cannot be automatically refused. Before any imported item is refused, the current law requires FDA to notify the owner or consignee that the item has been held because it appears to be prohibited and to give the product's owner or consignee an opportunity to submit evidence of admissibility. If the recipient does not respond or does not present enough evidence to overcome the appearance of inadmissibility, then the item can be returned to the sender or, in some cases, destroyed. FDA officials told us that this requirement applies to all drug imports that are held under section 801(a) of the Federal Food, Drug, and Cosmetic Act. Nonetheless, they said that they believe this notification process is time-consuming because each package must be itemized and entered into a database, a letter must be written to each addressee, and the product must be stored. 
The process can take up to 30 days per import—and can hinder their ability to quickly handle packages containing prescription drugs prohibited from import. According to FDA investigators, in most instances, the addressee does not present evidence to support the drugs' admissibility, and the drugs are ultimately provided to CBP or the U.S. Postal Service for return to sender. FDA headquarters officials told us that the standard operating procedures, introduced in August 2004 and discussed earlier in this report, were an attempt to help FDA address the burden associated with the notification process because the procedures were designed to focus resources on packages containing drugs considered to be among the highest risk. Our report further indicates that FDA and the Secretary of HHS have raised concerns about FDA's notification process in testimony before Congress, noting that it is time-consuming and resource-intensive, but did not propose any legislative changes to address the concerns identified. In May 2001, FDA's Acting Principal Deputy Commissioner wrote a memorandum to the Secretary of HHS expressing concern about the growing number of drugs imported for personal use and the dangers they posed to public health. The memorandum explained that because of the notice and opportunity to respond requirements, detaining and refusing entry of mail parcels was resource intensive. The Acting Principal Deputy Commissioner proposed, among other things, the removal of the requirement that FDA issue a notice before it could refuse and return personal use quantities of FDA-regulated products that appear to violate the Federal Food, Drug, and Cosmetic Act. He noted that removal of the notification requirement would likely require legislation, but without this change, FDA could not effectively prohibit mail importation for personal use. 
As of July 2005, according to FDA officials and an HHS official, the Secretary had not responded with a specific legislative proposal to change FDA's notification requirement. FDA officials said that there are some complicating issues associated with eliminating the notification requirement; for example, the importance of providing due process, which gives individuals the opportunity to present their case for why they should be entitled to receive the property (e.g., prescription drugs that they ordered from a foreign source), and the extent to which the law should be changed to cover all imported prescription drugs and other products. In addition, USPS indicated that any discussion of options to expedite the processing and disposition of prescription drugs must consider international postal obligations, specifically the requirements of the Universal Postal Union (UPU). FDA officials said that the notification requirement currently also applies to large commercial quantities of prescription drugs and other nonpharmaceutical products, for which the requirement is not a problem. They said it has become a burden only because FDA and CBP are overwhelmed with a large volume of small packages. Furthermore, we report that FDA officials said that they have considered other options for dealing with this issue, such as summarily returning each package to the sender without going through the notification process. However, they said that the law would likely need to be changed to allow this, and, as with the current process, packages that are returned to the sender could, in turn, be sent back by the original sender to go through the process again. They said that another option might be destruction, but they were uncertain whether they had the authority to destroy drugs FDA intercepts; they indicated that the authority might more likely lie with CBP. 
Regardless, FDA officials said that whatever approach was adopted, FDA might continue to encounter a resource issue because field personnel would still need to open and examine packages to ascertain whether they contained unapproved prescription drugs. My statement will now focus on efforts federal agencies have undertaken to coordinate the enforcement of the prohibitions on personal importation of prescription drugs. According to our report, since 1999, federal law enforcement and regulatory agencies have organized various task forces and working groups to address issues associated with purchasing prescription drugs over the Internet; however, recent efforts have begun to focus particular attention on imported prescription drugs. For example, according to an FDA official, many of FDA’s efforts, started in 1999, focused on Internet pharmaceutical sales by illicit domestic pharmacies and the risks associated with purchasing those drugs, rather than drugs that are being imported from foreign countries. As our report discusses, more recent efforts have focused on prescription drugs entering international mail and express carrier facilities. In January 2004, the CBP Commissioner initiated an interagency task force on pharmaceuticals, composed of representatives from CBP, FDA, DEA, ICE, and ONDCP as well as legal counsel from the Department of Justice. According to the Commissioner, the proposal to create the task force was prompted by “intense public debate and congressional scrutiny, which has resulted in increasing pressure being applied to regulatory and law enforcement agencies to develop consistent, fair policies” to address illegal pharmaceuticals entering the United States. The Commissioner proposed that the task force achieve five specific goals, and according to a CBP official, five working groups were established to achieve these goals. Figure 1 shows the task force goals, the five working groups, and the goals of each working group. 
CBP officials and other members of the task force provided examples of activities being carried out or planned by task force working groups. For example, the working group on mail and express consignment operator facilities procedures has carried out special operations at five international mail and three express carrier facilities to examine parcels suspected of containing prohibited prescription drugs over specific periods of time, such as 2 or 3 days. While similar operations have occurred since 2000, a CBP official told us that those conducted under the task force are multiagency efforts; they are expected to continue during the remainder of 2005 at all of the remaining mail facilities and some of the carrier facilities. Our report describes activities of the other working groups. In addition, we report that the task force members are working with ONDCP to address the importation of controlled substances through international mail and carrier facilities. In October 2004, ONDCP issued a plan for addressing demand and trafficking issues associated with certain man-made controlled substances—such as pain relievers, tranquilizers, and sedatives. Among other things, ONDCP recommended that DEA, CBP, ICE, State Department, National Drug Intelligence Center, and FDA work with USPS and private express mail delivery services to target illegal mail order sales of chemical precursors, synthetic drugs, and pharmaceuticals, both domestically and internationally. ONDCP officials said that a multiagency working group is meeting to discuss what can be done to confiscate these controlled substances before they enter the country. Finally, we report that USPS is exploring what additional steps it can take to further help the task force. USPS officials said that they proposed, during a July 2004 hearing, the possibility of cross-designating U.S. 
Postal Inspectors with Customs’ authority so that Postal Inspectors can conduct warrantless searches, at the border, of incoming parcels or letters suspected of containing illegal drugs. According to USPS officials, such authority would facilitate interagency investigations. They said that their proposal has yet to be finalized with CBP. In addition, internationally, USPS has drafted proposed changes to the U.S. listing in the Universal Postal Union List of Prohibited Articles. This action is still pending. In our report, we state that although the task force has taken positive steps toward addressing issues associated with enforcing the laws on personal imports, it has not fully developed a strategic framework that would allow it to address many of the challenges we identify in this report. Our review showed that the task force has already begun to establish some elements of a strategic framework, but not others. For example, the Commissioner's January 2004 memo laid out the purpose of the task force and why it was created. However, the task force has not defined the scope of the problem it is trying to address because, as discussed earlier, CBP and FDA have yet to develop a way to estimate the volume of imported prescription drugs entering specific international mail and carrier facilities. In addition, while the task force and individual working groups have goals that state what they are trying to achieve, the task force has not established milestones and performance measures to gauge results. Furthermore, the task force has not addressed the issue of what its efforts will cost so that it can target resources and investments, balancing risk reduction with costs and considering task force members' other law enforcement priorities. Instead, according to a CBP official, working group projects are done on an ad hoc basis wherein resources are designated for specific operations. 
Carrying out enforcement efforts that involve multiple agencies with varying jurisdictions is not an easy task, especially since agencies have limited resources and often conflicting priorities. According to our report, the challenges we identify could be more effectively addressed by using a strategic framework that more clearly defines the scope of the problem by estimating the volume of drugs entering international mail and carrier facilities, establishes milestones and performance measures, determines resources and investments needed to address the flow of imported drugs entering the facilities and where those resources and investments should be targeted, and evaluates progress. Advancing such a strategic framework could establish a mechanism for accountability and oversight. Our report acknowledges that such a strategic framework needs to be flexible to allow for changing conditions and could help agencies adjust to potential changes in the law governing the importation of prescription drugs for personal use. While acknowledging the complexities of enforcing the laws governing prescription drug imports for personal use, including the involvement of multiple agencies with various jurisdictions and differing priorities, our report concludes that current inspection and interdiction efforts at the international mail branches and express carrier facilities have not prevented the reported substantial and growing volume of prescription drugs from being illegally imported from foreign Internet pharmacies into the United States. CBP and other agencies have taken a step in the right direction by establishing a task force designed to address many of the challenges discussed in this report. 
However, a strategic framework that facilitates comprehensive enforcement of prescription drug importation laws and measures results would provide the task force with an opportunity to better focus agency efforts to stem the flow of prohibited prescription drugs entering the United States. In addition to the issues addressed by the task force, FDA has also expressed continuing concern to Congress that it encounters serious resource constraints enforcing the law at mail facilities because packages containing personal drug imports must be handled in accordance with FDA's time-consuming and resource-intensive notification process. FDA has stated that it cannot effectively enforce the law unless the requirement to notify recipients is changed. Accordingly, to help ensure that the government maximizes its ability to enforce laws governing the personal importation of prescription drugs, our report recommends that the CBP Commissioner, in concert with ICE, FDA, DEA, ONDCP, and USPS, develop and implement a strategic framework for the task force that would promote accountability and guide resource and policy decisions. At a minimum, this strategic framework should include establishment of an approach for estimating the scope of the problem, such as the volume of drugs entering the country through mail and carrier facilities; establishment of objectives, milestones, and performance measures and a methodology to gauge results; determination of the resources and investments needed to address the flow of prescription drugs illegally imported for personal use and where resources and investments should be targeted; and an evaluation component to assess progress, identify barriers to achieving goals, and suggest modifications. 
In view of FDA’s continuing concern about the statutory notification requirement and its impact on enforcement, our report also recommends that the Secretary of HHS assess the ramifications of removing or modifying the requirement, report on the results of this assessment, and, if appropriate, recommend changes to Congress. In commenting on our report, DEA and ONDCP generally agreed with our recommendation that the CBP task force develop a strategic framework. DEA agreed that such a framework needs to be flexible to allow for changing conditions and said DEA will, in concert with other task force agencies, support the CBP Commissioner’s strategic framework for the interagency task force. DHS generally agreed with the contents of our report and said that CBP is convening a task force meeting to discuss our recommendation. While generally concurring with our recommendation for a strategic framework, HHS questioned the need to include an approach for estimating the volume of unapproved drugs entering the country, because it believed its current estimates are valid. HHS also said our statement that the task force agencies could develop statistically valid volume estimates and realistic risk-based estimates of the number of staff needed to interdict parcels at mail facilities did not recognize FDA’s current level of effort at these facilities relative to its competing priorities. We believe that developing more systematic and reliable volume estimates might position agencies to better define the scope of the problem so that decision makers can make informed choices about resources, especially in light of competing priorities. Regarding our recommendation to assess the ramifications of removing or modifying FDA’s statutorily required notification process, HHS generally agreed and stated that it intended to pursue an updated assessment. 
USPS did not state whether it concurred with our recommendations, but it noted that discussions of options to expedite the processing and disposition of prescription drugs must consider international postal obligations. Mr. Chairman, this concludes my prepared testimony. I would be happy to respond to any questions you or other members of the committee may have at this time.

GAO Contacts and Staff Acknowledgments

For further information about this testimony, please contact me at (202) 512-8816. John F. Mortin, Leo M. Barbour, Frances A. Cook, Katherine M. Davis, Michele C. Fejfar, and Barbara A. Stolz made key contributions to this statement.

Prescription Drugs: Strategic Framework Would Promote Accountability and Enhance Efforts to Enforce the Prohibitions on Personal Importation. GAO-05-372. Washington, D.C.: September 8, 2005.

Prescription Drugs: Preliminary Observations on Efforts to Enforce the Prohibitions on Personal Importation. GAO-04-839T. Washington, D.C.: July 22, 2004.

Internet Pharmacies: Some Pose Safety Risks for Consumers. GAO-04-820. Washington, D.C.: June 17, 2004.

Internet Pharmacies: Some Pose Safety Risks for Consumers and Are Unreliable in Their Business Practices. GAO-04-888T. Washington, D.C.: June 17, 2004.

Combating Terrorism: Evaluation of Selected Characteristics in National Strategies Related to Terrorism. GAO-04-408T. Washington, D.C.: February 2004.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony summarizes a GAO report on federal efforts to address the importation of prohibited prescription drugs through international mail and carrier facilities for personal use. U.S. Customs and Border Protection (CBP), in the Department of Homeland Security (DHS), and the Food and Drug Administration (FDA), in the Department of Health and Human Services (HHS), work with other federal agencies at international mail and express carrier facilities to inspect for and interdict these drugs. This testimony addresses (1) available data about the volume and safety of these drugs, (2) the procedures and practices used to inspect and interdict them, (3) factors affecting federal efforts to enforce the laws governing these drugs, and (4) federal agencies' efforts to coordinate enforcement of the prohibitions on personal importation of these drugs. The information currently available on the safety of illegally imported prescription drugs is very limited, and neither CBP nor FDA systematically collects data on the volume of these imports. Nevertheless, on the basis of their own observations and limited information they collected at some mail and carrier facilities, both CBP and FDA officials said that the volume of prescription drugs imported into the United States is substantial and increasing. FDA officials said that they cannot assure the public of the safety of drugs purchased from foreign sources outside the U.S. regulatory system. FDA has issued new procedures to standardize practices for selecting packages for inspection and making admissibility determinations. While these procedures may encourage uniform practices across mail facilities, packages containing prescription drugs continue to be released to the addressees. CBP has also implemented new procedures to interdict and destroy certain imported controlled substances, such as Valium. 
CBP officials said the new process is designed to improve their ability to quickly handle packages containing these drugs, but they did not know if the policy had affected overall volume because packages may not always be detected. GAO identified three factors that have complicated federal enforcement of laws prohibiting the personal importation of prescription drugs. First, the volume of imports has strained limited federal resources at mail facilities. Second, Internet pharmacies can operate outside the U.S. regulatory system and evade federal law enforcement actions. Third, current law requires FDA to give addressees of packages containing unapproved imported drugs notice and the opportunity to provide evidence of admissibility regarding their imported items. FDA and HHS have testified before Congress that this process placed a burden on limited resources. In May 2001, FDA proposed to the HHS Secretary that this legal requirement be eliminated, but according to FDA and HHS officials, as of July 2005, the Secretary had not responded to the proposal. FDA officials stated that any legislative change might require consideration of such issues as whether to forgo an individual's opportunity to provide evidence of the admissibility of the drug ordered. Prior federal task forces and working groups had taken steps to deal with Internet sales of prescription drugs since 1999, but these efforts did not position federal agencies to successfully address the influx of these drugs imported from foreign sources. Recently, CBP has organized a task force to coordinate federal agencies' activities to enforce the laws prohibiting the personal importation of prescription drugs. The task force's efforts appear to be steps in the right direction, but they could be enhanced by establishing a strategic framework to define the scope of the problem at mail and carrier facilities, determine resource needs, establish performance measures, and evaluate progress. 
Absent this framework, it will be difficult to oversee task force efforts; hold agencies accountable; and ensure ongoing, focused attention to the enforcement of the relevant laws.
The decennial census is the nation's largest, most complex survey. In April 2009, address canvassing—a field operation for verifying and correcting addresses for all households and street features contained on decennial maps—will begin. One year later, the Bureau will mail census questionnaires to the majority of the population in anticipation of Census Day, April 1, 2010. Households that do not respond will be contacted by field staff through the NRFU operation to determine, among other information, the number of people living in the house on Census Day. In addition to address canvassing and NRFU, the Bureau conducts other operations, for example, to gather data from residents of group quarters, such as prisons or college dormitories. The Bureau also employs different enumeration methods in certain settings, such as remote Alaska enumeration, in which people living in inaccessible communities must be contacted in January 2010 in anticipation of the spring thaw, which makes travel difficult, or update/enumerate, a data collection method involving personal interviews, used in communities where many housing units may not have typical house number-street name mailing addresses. Further, the efforts of state and local governments are enlisted to obtain a more complete address file through the LUCA program. The census is also conducted against a backdrop of immutable deadlines, and the census's elaborate chain of interrelated pre- and post-Census Day activities is predicated upon those dates. The Secretary of Commerce is legally required to (1) conduct the census on April 1 of the decennial year, (2) report the state population counts to the President for purposes of congressional apportionment by December 31 of the decennial year, and (3) send population tabulations to the states for purposes of redistricting no later than 1 year after the April 1 census date. 
To meet these mandated reporting requirements, census activities must occur at specific times and in the proper sequence. The table below shows some dates for selected, key decennial activities. The Bureau estimates that the 2010 Census will cost $11.3 billion over its life cycle, making it the most expensive in the nation's history. While some cost growth is expected, partly because the number of housing units has increased, the estimated cost escalation has far exceeded the housing unit increase. The Bureau estimates that the number of housing units for the 2010 Census will increase by 10 percent over 2000 Census levels, but the average 2010 cost to enumerate a housing unit is expected to increase by about 29 percent from 2000 levels (from $56 to $72) (see fig. 1). As the Bureau plans for 2010, maintaining cost-effectiveness will be one of the greatest challenges confronting the agency. According to the Bureau, the increasing cost of the census is caused by various societal trends—such as increasing privacy concerns, more non-English speakers, and people residing in makeshift and other nontraditional living arrangements—that make it harder to find people and get them to participate in the census. The Bureau has reengineered the decennial census, including implementing new initiatives aimed at increasing the response rate. The Bureau also plans to begin implementing its outreach and communications campaign, an effort used in the 2000 Census that was designed to increase awareness and encourage individuals to respond to the census questionnaire. Increasing the decennial's response rate can result in significant savings because the Bureau can reduce the staffing and costs related to NRFU, as well as yield more complete and accurate data. According to the Bureau, for every 1-percentage-point increase in the response rate, the Bureau will be able to save $75 million. 
The Bureau plans to increase the response rate by several means, including conducting a short-form-only census. The Bureau is able to do this because in 1996 it began efforts to replace the decennial long form with the American Community Survey. Since 1970, the overall mail response rate to the decennial census has been declining steadily, in part because of the burden of responding to the long form, which was sent to a sample of respondents. In the 1980 Census, the overall mail response rate was 75 percent, 3 percentage points lower than it was in the 1970 Census. In the 1990 Census, the mail response rate dropped to 65 percent, but in 2000 it appeared to be leveling off at about 64 percent. In the 2000 Census, when comparing the short form to the long form, the Bureau found the short form response rate of 66.4 percent was 12.5 percentage points higher than the long form response rate of 53.9 percent. While the difference between the long and short form response rates is significant, the Bureau in its initial assumptions for the 2010 Census predicted that conducting a short-form-only census would yield only a 1-percent increase in the overall mail response rate. A targeted second mailing to households that fail to respond to the initial census questionnaire can increase the ultimate response rate. According to Bureau studies, sending a second questionnaire could yield a gain in overall response of 7 to 10 percentage points from nonresponding households, thus potentially saving the Bureau between $525 million and $750 million (given that every 1-percentage-point increase in response may save $75 million). In reports, we have highlighted that a targeted second mailing could boost the mail response rate, which in turn would result in considerable savings by reducing the number of costly personal visits enumerators would need to make to nonresponding households. 
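The savings range above follows directly from the Bureau's per-point estimate; the short calculation below is purely illustrative (the $75 million-per-point figure and the 7-to-10-point gain come from the text as reported, while the helper function and its name are our own):

```python
# Illustrative check of the savings figures cited above.
# Assumption (from the Bureau, as cited in the text): each
# 1-percentage-point gain in the mail response rate saves roughly
# $75 million in NRFU follow-up costs.
SAVINGS_PER_POINT_MILLIONS = 75

def nrfu_savings_millions(points_gained):
    """Estimated NRFU savings, in millions of dollars, for a given
    gain in the mail response rate (in percentage points)."""
    return points_gained * SAVINGS_PER_POINT_MILLIONS

# A targeted second mailing is projected to add 7 to 10 points:
low = nrfu_savings_millions(7)    # 525
high = nrfu_savings_millions(10)  # 750
print(f"Projected second-mailing savings: ${low} million to ${high} million")
```

This reproduces the $525 million lower bound (7 points) and the $750 million upper bound (10 points) implied by the $75 million-per-point assumption.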
The Bureau has never before included this operation as part of a decennial census and over the decade has been testing its feasibility. A targeted second mailing was part of the 2006 test and boosted the response rate by 8.8 percent at the Austin, Texas, test site. According to Bureau officials, a targeted second mailing will be part of the 2010 Census design. For the 2010 Census, the Bureau also intends to increase response rates by undertaking a public awareness campaign, as it did in the previous census. In the 2000 Census that effort comprised two major activities: conducting the first-ever paid advertising campaign aimed at increasing the mail response rate, including among the historically undercounted populations, and leveraging the value of local knowledge by building 140,000 partnerships at every level, including state, local, and tribal governments; community-based organizations; and the media and private-sector organizations, to elicit public participation in the census. In 2001 we reported that for the 2000 Census, it appeared that encouraging people to respond to the census questionnaire was successful, in part due to the Bureau's partnership efforts. For example, according to the Bureau, it achieved an initial mail response rate of about 64 percent, 3 percentage points higher than it had anticipated when planning for NRFU. This was a noteworthy accomplishment and, as a result, the Bureau had over 3 million fewer housing units to follow up with than it had initially planned. The Bureau will soon begin its outreach and communication effort for 2010. The Bureau plans to award the communications contract in August 2007 and will begin hiring partnership specialists at headquarters starting in fiscal year 2008. The MCD is a keystone of the reengineered census. 
It allows the Bureau to automate operations, eliminate the need to print millions of paper questionnaires and maps used by census workers to conduct address canvassing and NRFU, and help manage field staff payroll. The benefits of using the MCD were tested in the 2004 and 2006 tests. According to the Bureau, during the 2004 Census Test, the MCD allowed the Bureau to successfully remove over 7,000 late mail returns from enumerators' assignments, reducing the total NRFU workload by nearly 6 percent. The ability to remove late mail returns from the Bureau's NRFU workload reduces costs, because census field workers no longer need to make expensive follow-up visits to households that return their questionnaire after the mail-back deadline. If the Bureau had possessed this capability during the 2000 Census, it could have eliminated the need to visit nearly 773,000 late-responding households and saved an estimated $22 million (based on our estimate that a 1-percentage-point increase in workload could add at least $34 million in direct salary, benefits, and travel costs to the price tag of NRFU). However, the Bureau's ability to collect and transmit data using the MCD is not fully tested and, at this point, constitutes a risk to the cost-effective implementation of the 2010 Census. During the 2004 test of NRFU and the 2006 test of address canvassing, the MCDs experienced significant reliability problems. For example, during the 2004 Census Test, the MCDs experienced transmission problems, memory overloads, and difficulties with a mapping feature—all of which added inefficiencies to the NRFU operation. Moreover, during the 2006 test, the MCD's global positioning system (GPS) receiver, a satellite-based navigational system to help workers locate street addresses and collect coordinates for each structure in their assignment area, was also unreliable. 
Bureau officials believe the MCD’s performance problems will be addressed through a contract awarded on March 30, 2006, to develop a new MCD. A prototype of the MCD has been developed and delivered by the contractor for use in the 2008 Dress Rehearsal. However, operational testing of the MCD will not occur until May 2007, when address canvassing for the 2008 Dress Rehearsal occurs, and if problems do emerge, little time will be left to develop, test, and incorporate refinements. In our May 2006 report, we highlighted the tight time frames to develop the MCD and recommended that systems being developed or provided by contractors for the 2010 Census—including the MCD—be fully functional and ready to be assessed as part of the 2008 Dress Rehearsal. We are currently reviewing the cost, schedule and performance status of the contract for the MCDs. We plan to visit the dress rehearsal sites to determine the functionality of the devices to collect and transmit data. If after the 2008 Dress Rehearsal the MCD is found not to be reliable, the Bureau could be faced with the daunting possibility of having to revert to the costly, paper-based census used in 2000. Although the greater use of automation offers the prospect of greater efficiency and effectiveness, these actions also introduce new risks. The automation of key census processes involves an extensive reliance on contractors. Consequently, contract oversight and management becomes a key challenge to a successful census. As part of the Bureau’s plans to increase the use of automation and technology for the 2010 Census, the Bureau estimates that it will spend about $ 3 billion on information technology (IT) investments. 
The Bureau will be undertaking several major acquisitions, including the Decennial Response Integration System (DRIS)—a system for integrating paper and telephone responses; the Field Data Collection Automation (FDCA) program—the systems and support equipment for field office data collection activities including the MCDs to be used by enumerators; the Data Access and Dissemination System (DADS II)—a system for tabulating and disseminating data from the decennial census and other Bureau surveys to the public; and the modernization of the Master Address File/Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) system, which provides the address list, maps, and other geographic support services for the decennial and other Bureau surveys, known as the MAF/TIGER Accuracy Improvement Project (MTAIP). Together these and other systems are to support collection, processing, and dissemination of census data. In March 2006, we testified on the Bureau’s acquisition and management of two key information technology system acquisitions for the 2010 Census— FDCA and the DRIS. We reported on the Bureau’s progress in implementing acquisitions and management capabilities for these initiatives. To effectively manage major IT programs, organizations should use sound acquisition and management processes to minimize risk and thereby maximize chances for success. Such processes include project and acquisition planning, solicitation, requirement development and management, and risk management. We reported that while the project offices responsible for these two contracts have carried out initial acquisition management activities, neither office had the full set of capabilities they needed to effectively manage the acquisitions, including implementing a full risk management process. We also made recommendations for the Bureau to implement key activities needed to effectively manage acquisitions. 
For example, we recommended that the Bureau's project office for DRIS complete a project plan and obtain stakeholder concurrence before initiating additional development work and obtain validation, management, and customer approval of DRIS requirements. In response to our recommendation, the Bureau has finalized the project plan for DRIS and has obtained stakeholders' commitment. As a result, the DRIS project office will have the direction that it needs to successfully avoid unanticipated changes. Prior to Census Day, Bureau field staff perform the address canvassing operation, during which they verify the addresses of all housing units. The Bureau estimates spending $350 million to hire about 74,000 field workers for the address canvassing operation. About 1 year later, the Bureau mails out questionnaires to about 130 million households nationwide. However, the Bureau expects that about 40 million households will not return the questionnaire. To collect information from those households, the Bureau hires temporary field staff—based out of local census offices—to visit each nonresponding household in its NRFU operation. The Bureau expects to spend over $2 billion to employ about 525,000 temporary field staff for that activity. As shown in fig. 2, in total the Bureau will recruit and test 3.8 million applicants for address canvassing and NRFU, hiring some 600,000 people for the 2010 Census. For the 2010 Census, the Bureau plans to use a similar approach to recruit and hire workers as it used during Census 2000. These strategies made the Bureau a more attractive employer to prospective candidates and helped provide a steady stream of applicants during Census 2000. Despite a tight labor market, the Bureau attracted about 3.7 million qualified applicants and hired about half a million enumerators at peak. 
Some of the broad approaches from 2000 that the Bureau plans on implementing for the 2010 Census include recruiting five times more applicants than the needed number of field workers to ensure considerable depth in the applicant pool from which to hire; "frontloading," or hiring twice the number of people needed to do the work, in anticipation of high levels of turnover; exercising the flexibility to raise pay rates for local census offices that were encountering recruiting difficulties; and launching a recruitment advertising campaign, which totaled over $2.3 million for Census 2000. As in 2000, the Bureau faces the daunting task of meeting its recruiting and hiring goals. However, it also faces additional challenges, such as demographic shifts whereby the population is increasingly diverse and difficult to locate, and newer challenges, like the Bureau's use of handheld computers for data collection in the field. It does plan some improvements to how it recruits and hires its temporary workforce to carry out the 2010 Census. For example, the Bureau has conducted employee debriefings and collected information that could improve its recruiting and hiring processes. Bureau officials believe this feedback would be helpful in evaluating and refining its hiring and recruiting processes and intend to incorporate some of that information for the 2008 Dress Rehearsal. However, it can do more to target its recruitment of field staff. The Bureau casts a wide net to recruit its temporary workforce to ensure it has a large enough applicant pool from which to hire. In commenting on a draft of this work, Commerce noted that the Bureau's priority is to reach out as broadly as possible to the diverse communities in the country to attract several million applicants. We recognize that when recruiting and hiring for hundreds of thousands of positions, the Bureau faces a challenge in assessing applicants' potential success or willingness to work. 
However, opportunities exist for the Bureau to hone its recruiting efforts to identify individuals who would be more likely to be effective at census work and willing to continue working throughout an operation. Along those same lines, the Bureau could also evaluate the factors associated with an applicant's success, willingness to work in an operation, and likelihood of attrition to refine its hiring. Despite Commerce's reservations about refining its current recruiting and hiring strategies, we believe that the Bureau could do more to understand what makes for a successful recruit and, by hiring such applicants, reduce operating costs. Another recruiting and hiring issue we identified in our completed work is related to how crew leaders are selected. We found that the Bureau's tools for hiring crew leaders could better distinguish the skills needed for those positions. Crew leaders fill an important role in the Bureau's field activities because they supervise the work of crews of field workers, train field workers, and will be counted on to troubleshoot the MCDs. We found that despite the different skill requirements of crew leaders and other field staff—for example, while it was important for field staff working in the NRFU operation to have arithmetic and visual identification skills, crew leaders need those skills as well as additional skills, such as management, leadership, and creative thinking—the Bureau used the same set of hiring tools to hire individuals for crew leader and other field positions during the 2006 Census Test. In its review of the 2004 Census Test, the Department of Commerce Office of Inspector General (OIG) also reported that Bureau officials said that the applicants' multiple-choice test does not capture the technical or supervisory skills needed by crew leaders. 
The Bureau hired a contractor to assess whether the tools used during the 2006 Census Test selected individuals with the skills necessary to conduct field work using MCDs; however, the Bureau has no current plans to make changes to its hiring process that would include differentiated hiring tools for crew leaders and other positions. Without hiring tools that distinguish between the skills needed for the crew leader and other positions, the Bureau does not have assurance that it is selecting crew leaders who can best perform important duties like providing training, managing other field staff, and troubleshooting handheld computers. In commenting on our draft, Commerce indicated that the Bureau needs to evaluate its hiring tools. It is also working to identify and test what the appropriate skills are for the crew leader position. Finally, we found that the Bureau does not collect the performance data needed to rehire former workers from prior or ongoing operations to whom it may give hiring priority. Officials say they try to exclude those terminated for cause (such terminations can result when workers have performance or conduct problems, such as selling drugs or striking another worker). Bureau officials point to internal systems, which, they say, preclude the rehiring of employees who were terminated for cause. However, the OIG and field officials told us that poor performers may not always be terminated. Without better information on employee performance, the Bureau cannot ensure that the weakest performers are not rehired. Over the course of the 2006 Census Test, almost 15 percent of all field staff were rehired. If this percentage were to be rehired during the 2010 Census, the Bureau would not have performance data to meaningfully evaluate whether to rehire approximately 90,000 individuals. 
The Bureau believes that the pace of the decennial, particularly NRFU, is such that local census officials would not have enough time to consider past performance when making hiring decisions. However, we believe that the Bureau has enough time. For example, performance data could be collected during address canvassing to be used to assess workers for NRFU, nearly a year later. The Bureau has employed essentially the same approach to training since the 1970 Census. To conduct training, the Bureau solicits free or low-cost training spaces from local organizations, such as churches or libraries. Training classes typically include 15 to 20 students. Crew leaders usually train their crews, with the assistance of at least one crew leader assistant, using a verbatim training approach, whereby crew leaders read training scripts word-for-word over the course of several days. Similarly, the crew leaders are themselves trained by their supervisors in a "train-the-trainers" approach. The length of training varies by operation; for NRFU, training took almost 42 hours over the course of 6 days during the 2006 test. The Bureau and others, including us, have reported that the Bureau should consider alternate approaches to training delivery. Our review of the 2004 Census Test found that, as a result of the demographic and technological changes that have taken place since 1970, the Bureau might want to explore alternatives to its verbatim approach to training. Moreover, in 2004, the OIG suggested the Bureau explore the use of interactive training methods, as the Bureau does for other non-decennial surveys. For example, while many field staff we contacted during the 2006 test said their overall impression of training was generally positive, many added that videos or visuals would or might improve training. 
In addition, while the Bureau is providing some computer-based training on using the handheld computers in key operations, overall the Bureau has made limited changes to the approach it uses to deliver training and has not evaluated alternative approaches to providing training. It is notable that observations during the 2004 and 2006 tests showed that field staff may have missed important parts of training. Contractor employees saw students playing games on their MCDs during training for the 2006 test, and in 2004 the OIG saw students not paying attention and falling asleep in class, concluding that some may not have learned how to conduct census operations. The content of the Bureau’s training for field staff also has not changed substantially since Census 2000, despite the fact that, according to the Bureau itself, collecting data from the nation’s population has become increasingly difficult. Field workers we spoke to during the 2006 test noted two related issues on which they had not received sufficient training—dealing with reluctant respondents and handling location-specific challenges. According to the Department of Commerce OIG, in 2004 field staff complained that they felt unprepared to deal with reluctant respondents; the OIG report recommended the Bureau consider adding content to enhance training on this topic. Moreover, our review of the Bureau’s summaries of debriefings it conducted after the 2006 test indicated that field staff found respondent reluctance to be a challenge. Crew leaders noted that this was the most difficult task enumerators faced. In our field visits, we observed that without adequate preparation in dealing with reluctant respondents, field staff developed their own strategies when confronted with these situations, resulting in inconsistent and sometimes inappropriate data collection methods. 
For example, when unable to contact respondents, one Texas enumerator looked up respondent information online, tried to find a phone number for another respondent from a neighborhood cat’s collar, and illegally went through residents’ mail. Field staff may also need more training in overcoming location-specific challenges, such as rural conditions on the Cheyenne River Indian Reservation in South Dakota, and counting the transient student population in Austin, Texas. For example, in Austin, one crew leader explained that training spent a lot of time on mobile homes—which did not exist in his area—but very little time on apartment buildings, which are common there. Based on our observations of the 2004 test, we suggested that the Bureau supplement the existing training with modules geared toward addressing the particular enumeration challenges that field staff are likely to encounter in specific locales. During this review, the Bureau told us that it works with regional offices to develop 10-minute training modules for specific locations. For example, in 2000, Bureau officials said enumerators in Los Angeles were trained to look for small, hidden housing units, such as apartments in converted garages. Bureau officials said they provide guidance on the length of the modules and when they should be presented. However, they said they were not sure how often this kind of specialized training took place, nor had they allocated time during training to present specialized information. We believe the Bureau could do more to help local offices provide training that recognizes local conditions. 
Specifically, based on work we will be reporting shortly, we will recommend that the Bureau centrally develop training modules covering enumeration strategies in a variety of situations, such as mobile homes, large apartment buildings, and migrant worker dwellings, which local officials can selectively insert into their training if there is a need to train their field staff on that topic. Such modules would enhance the effectiveness of training by giving greater attention to the challenges field staff are likely to face. In commenting on this recommendation, Commerce noted that the Bureau works with managers in each regional census center to customize a location-specific training module for local census offices. Nonetheless, developing modules for different types of locations centrally would allow the Bureau to control the consistency and quality of training throughout the nation. As part of our evaluation of the Bureau’s LUCA dress rehearsal, we visited the localities along the Gulf Coast to assess the effect the devastation caused by Hurricanes Katrina and Rita might have on LUCA and possibly other operations. The effects of Hurricanes Katrina and Rita are still visible throughout the Gulf Coast region. Hurricane Katrina alone destroyed or made uninhabitable an estimated 300,000 homes; in New Orleans, local officials reported that Hurricane Katrina damaged an estimated 123,000 housing units. Such changes in housing unit stock continue to present challenges to the implementation of the 2010 LUCA Program in the Gulf Coast region and possibly other operations. Many officials of local governments we visited in hurricane-affected areas said they have identified numerous housing units that have been or will be demolished as a result of Hurricanes Katrina and Rita and subsequent deterioration. Conversely, many local governments estimate that there is new development of housing units in their respective jurisdictions. 
The localities we interviewed in the Gulf Coast region indicated that such changes in the housing stock of their jurisdictions are unlikely to subside before local governments begin reviewing and updating materials for the Bureau’s 2010 LUCA Program—in August 2007. Local government officials told us that changes in housing unit stock are often caused by difficulties families have in deciding whether to return to hurricane-affected areas. Local officials informed us that a family’s decision to return is affected by various factors, such as the availability of insurance; timing of funding from Louisiana’s “Road Home” program; lack of availability of contractors; school systems that are closed; and lack of amenities such as grocery stores. As a result of the still changing housing unit stock, local governments in hurricane-affected areas may be unable to fully capture reliable information about their address lists before the beginning of LUCA this year or address canvassing in April 2009. Furthermore, the operations of local governments themselves have been affected by the hurricanes (see fig. 3). These local governments are focused on reconstruction, and at least two localities we spoke to questioned their ability to participate in LUCA. The mixed condition of the housing stock in the Gulf Coast will increase the Bureau’s address canvassing workload. During our field work, we found that hurricane-affected areas have many neighborhoods with abandoned and vacant properties mixed in with occupied housing units. Bureau staff conducting address canvassing in these areas may have an increased workload due to the additional time necessary to distinguish among abandoned, vacant, and occupied housing units. Another potential issue is that due to continuing changes in the condition of the housing stock, housing units that are deemed vacant or abandoned during address canvassing may be occupied on Census Day (Apr. 1, 2010). 
Bureau officials said that they recognize there are issues with uninhabitable structures in hurricane-affected zones. They noted that addresses marked as vacant or uninhabitable during address canvassing in the Gulf Coast region will not be deleted from the MAF, and said that they may adjust training for Bureau staff in hurricane-affected areas. Workforce shortages may also pose significant problems for the Bureau’s hiring efforts for address canvassing. The effects of Hurricanes Katrina and Rita caused a major shift in population away from the hurricane-affected areas, especially in Louisiana. This migration displaced many low-wage workers. Should this continue, it could affect the availability of such workers for address canvassing and other decennial census operations. Bureau officials recognize the potential difficulty of attracting these workers, and have recommended that the Bureau be prepared to meet hourly wage rates for future decennial staff that are considerably higher than usual. The Bureau has noted that its Dallas regional office, which has jurisdiction over hurricane-affected areas in Texas, Louisiana, and Mississippi, will examine local unemployment rates to adjust pay rates in the region, and use “every single entity” available to advertise for workers in the New Orleans area. Early in 2006, we recommended that the Bureau develop plans (prior to the start of the 2010 LUCA Program in August 2007) to assess whether new procedures, additional resources, or local partnerships may be required to update the MAF/TIGER database along the Gulf Coast—in the areas affected by Hurricanes Katrina and Rita. The Bureau responded to our recommendations by chartering a team to assess the effect of the storm damage on the Bureau’s address list and maps for areas along the Gulf Coast and develop strategies with the potential to mitigate these effects. 
The chartered team recommended that the Bureau consult with state and regional officials (from the Gulf Coast) on how to make LUCA as successful as possible, and hold special LUCA workshops for geographic areas identified by the Bureau as needing additional assistance. While the Bureau (through its chartered team, headquarters staff, and Dallas regional office) has proposed several changes to the 2010 LUCA Program for the Gulf Coast region, there are no specific plans for implementing the proposed changes. In summary, Mr. Chairman, we recognize the Bureau faces formidable challenges in successfully implementing a redesigned decennial census. It must also overcome significant challenges of a demographic and socioeconomic nature due to the nation’s increasing diversity in language, ethnicity, households, and housing type, as well as an increase in the reluctance of the population to participate in the census. The need to enumerate in the areas devastated by Hurricanes Katrina and Rita is one more significant difficulty the Bureau faces. We applaud the moves the Bureau has undertaken to redesign the census; we have stated in the past, and still believe, that the reengineering, if successful, can help control costs and improve cost effectiveness and efficiency. Yet, there is more that the Bureau can do in examining and refining its recruiting, hiring, and training practices and in preparing to enumerate in the hurricane-affected areas. Also, the functionality and usability of the MCD—a key piece of hardware in the reengineered census—bears watching, as does the oversight and management of information technology investments. All told, these areas continue to call for risk mitigation plans by the Bureau and careful monitoring and oversight by the Commerce Department, the Office of Management and Budget, the Congress, GAO, and other key stakeholders. 
As in the past, we look forward to supporting this subcommittee’s oversight efforts to promote a timely, complete, accurate, and cost-effective census. For further information regarding this testimony, please contact Mathew J. Scire at (202) 512-6806, or by email at sciremj@gao.gov. Individuals making contributions to this testimony included Betty Clark, Carlos Hazera, Shirley Hwang, Andrea Levine, Lisa Pearson, Mark Ryan, Niti Tandon, and Timothy Wexler.

2010 Census: Redesigned Approach Holds Promise, but Census Bureau Needs to Annually Develop and Provide a Comprehensive Project Plan to Monitor Costs. GAO-06-1009T. Washington, D.C.: July 27, 2006.

2010 Census: Census Bureau Needs to Take Prompt Actions to Resolve Long-standing and Emerging Address and Mapping Challenges. GAO-06-272. Washington, D.C.: June 15, 2006.

2010 Census: Costs and Risks Must be Closely Monitored and Evaluated with Mitigation Plans in Place. GAO-06-822T. Washington, D.C.: June 6, 2006.

2010 Census: Census Bureau Generally Follows Selected Leading Acquisition Planning Practices, but Continued Management Attention Is Needed to Help Ensure Success. GAO-06-277. Washington, D.C.: May 18, 2006.

Census Bureau: Important Activities for Improving Management of Key 2010 Decennial Acquisitions Remain to be Done. GAO-06-444T. Washington, D.C.: March 1, 2006.

2010 Census: Planning and Testing Activities Are Making Progress. GAO-06-465T. Washington, D.C.: March 1, 2006.

Information Technology Management: Census Bureau Has Implemented Many Key Practices, but Additional Actions Are Needed. GAO-05-661. Washington, D.C.: June 16, 2005.

2010 Census: Basic Design Has Potential, but Remaining Challenges Need Prompt Resolution. GAO-05-09. Washington, D.C.: January 12, 2005.

Data Quality: Census Bureau Needs to Accelerate Efforts to Develop and Implement Data Quality Review Standards. GAO-05-86. Washington, D.C.: November 17, 2004.
Census 2000: Design Choices Contributed to Inaccuracies in Coverage Evaluation Estimates. GAO-05-71. Washington, D.C.: November 12, 2004.

American Community Survey: Key Unresolved Issues. GAO-05-82. Washington, D.C.: October 8, 2004.

2010 Census: Counting Americans Overseas as Part of the Decennial Census Would Not Be Cost-Effective. GAO-04-898. Washington, D.C.: August 19, 2004.

2010 Census: Overseas Enumeration Test Raises Need for Clear Policy Direction. GAO-04-470. Washington, D.C.: May 21, 2004.

2010 Census: Cost and Design Issues Need to Be Addressed Soon. GAO-04-37. Washington, D.C.: January 15, 2004.

Decennial Census: Lessons Learned for Locating and Counting Migrant and Seasonal Farm Workers. GAO-03-605. Washington, D.C.: July 3, 2003.

Decennial Census: Methods for Collecting and Reporting Hispanic Subgroup Data Need Refinement. GAO-03-228. Washington, D.C.: January 17, 2003.

Decennial Census: Methods for Collecting and Reporting Data on the Homeless and Others Without Conventional Housing Need Refinement. GAO-03-227. Washington, D.C.: January 17, 2003.

2000 Census: Lessons Learned for Planning a More Cost-Effective 2010 Census. GAO-03-40. Washington, D.C.: October 31, 2002.

The American Community Survey: Accuracy and Timeliness Issues. GAO-02-956R. Washington, D.C.: September 30, 2002.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The decennial census is a constitutionally mandated activity that produces data used to apportion congressional seats, redraw congressional districts, and allocate billions of dollars in federal assistance. The Census Bureau (Bureau) estimates the 2010 Census will cost $11.3 billion, making it the most expensive in the nation's history. This testimony discusses the Bureau's progress in preparing for the 2010 Census to (1) implement operations to increase the response rate and control costs; (2) use technology to increase productivity; (3) hire and train temporary staff; and (4) plan an accurate census in areas affected by Hurricanes Katrina and Rita. The testimony is based on previously issued GAO reports and work nearing completion in which GAO observed recruiting, hiring, and training practices in the 2006 test, and visited localities that participated in the Local Update of Census Addresses (LUCA) Dress Rehearsal, as well as localities in the Gulf Coast region. The Bureau has made progress toward implementing a reengineered census design that holds promise for increasing the response rate, thereby controlling the cost of the census while promoting accurate results. The reengineered design includes a short-form-only census designed to increase the response rate by about 1 percent and a targeted second mailing, which is expected to increase response by 7 to 10 percent. Both of these initiatives are new, have been tested, and will be a part of the 2010 Census design. According to Bureau officials, a 1 percent increase in the response rate can save $75 million, making these initiatives critical to the new design. Uncertainty surrounds a keystone of the reengineered census, the mobile computing device (MCD). The MCD allows the Bureau to automate operations, eliminates the need to print millions of paper questionnaires and maps used by census workers to conduct census operations, and assists in managing payroll. 
The MCD, tested in the 2004 and 2006 census tests, was found to be unreliable. While a contractor has developed a new version of the MCD, the device will not be field tested until next month, leaving little time to correct problems that might emerge during the 2008 Dress Rehearsal. The Bureau faces challenges in recruiting, hiring, and training an estimated 600,000 temporary employees. For example, opportunities exist for the Bureau to hone its recruiting efforts to identify individuals who would be more likely to be effective at census work and willing to work throughout an operation. Also, census workers indicated a need for additional training on reluctant respondents as well as location-specific challenges they encounter. The Bureau must also be prepared to accurately count the population affected by hurricanes Katrina and Rita. The Bureau has contacted local officials in the Gulf Area and is developing a plan that includes workshops and special staffing considerations.
The Automobile Industry

The automobile industry affects industries that manufacture steel, glass, plastics, and rubber. The sector also supports the refining and selling of gasoline and road construction, as well as maintaining, repairing, and selling motor vehicles. In 2008, the automobile sector employed 1.7 million people in the United States, according to the Center for Automotive Research. Employment in the automobile sector reaches beyond manufacturing, including 686,000 people employed by the automotive parts sector and 737,000 salespeople and service repair professionals at auto dealers. Further, the 1.7 million direct jobs contributed to an estimated 8 million total private sector jobs accounting for more than $500 billion in annual compensation and more than $70 billion in personal tax revenues, according to the Center for Automotive Research.

Manufacturing plays a key role in creating high-wage jobs, fueling exports, and driving innovation. In 2015, according to the Bureau of Labor Statistics (BLS), manufacturing employees earned on average $74,785 annually, including pay and benefits, while workers in all U.S. industries earned on average $63,045. Manufacturing companies have also traditionally hired more employees with lower levels of education than other parts of the economy, according to the Economic Policy Institute, making these companies an option for individuals whose job choices may be limited by higher education degree requirements. Also in 2015, manufacturing firms shipped $1.3 trillion of goods abroad, according to DOC. In addition, the sector supported the development of new technologies in the United States by performing 75 percent of private sector research and development (R&D) and issuing the vast majority of new patents, despite the fact that manufacturing made up 12 percent of U.S. Gross Domestic Product (GDP) in 2014, according to the Executive Office of the President. 
The influence of the manufacturing sector reaches many industries, as demonstrated by the automobile industry, which can be viewed as a barometer and beneficiary of American growth and economic achievement, according to BLS (see text box). The size of the workforce in U.S. manufacturing decreased from more than 17 million employees in 1997 to approximately 12 million in 2015, according to BLS data (see fig. 1). The recent decline in manufacturing employment included sectors such as apparel, computers and electronics, and furniture, according to CRS. Manufacturing’s share of U.S. economic output has decreased over the last decade as well, according to Bureau of Economic Analysis data. As a fraction of U.S. GDP, manufacturing declined from 16.1 percent in 1997 to 12.1 percent by 2015 (see fig. 2). There are a number of roles that the federal government plays and a variety of tools that it uses to influence the manufacturing sector and the broader economy, including:

Providing funds. The federal government funds programs that support manufacturing in different ways, such as by providing grants or awarding contracts.

Assuming risk. The federal government assumes risk (and potential costs associated with risk) by making direct loans, guaranteeing loans, and providing insurance.

Collecting or forgoing revenue from taxes. The federal government collects revenues through the tax system, and forgoes revenues through tax expenditures such as exemptions, deductions, credits, and deferrals, as well as preferential tax rates.

Directly procuring goods and providing services. The federal government procures manufactured products, such as weapons systems, and provides services, such as technical assistance, directly through government agencies.

Setting standards and requirements. Federal laws and regulations also set standards and requirements that can influence manufacturing activities. 
Standards, like energy, environmental, and workplace safety standards, can help to enhance the societal benefits of manufacturing, but they may also affect the costs related to manufacturing. In some cases, the tools the federal government uses to influence the manufacturing sector are applied directly, such as in the case of a grant provided to a specific company. In other cases, the federal government influences the manufacturing sector in a more indirect way. For example, the federal government delivers tools and technical assistance that manufacturers use to help invent, innovate, and create new products and services. Specifically, the mission of DOC’s National Institute of Standards and Technology (NIST) is to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve quality of life. As part of this mission, NIST administers measurement science research programs across five laboratories and two national user facilities, which provide industry with precision measurement technologies, tests, protocols, and scientific and engineering knowledge, according to NIST officials. The outputs of the NIST laboratories include scientific data and methods that are used in the processes, products, and services of nearly every U.S. manufacturing industry, as well as the national service sector, according to agency officials. The federal government established advisory groups and an office that promote coordination across federal agencies and programs to better support manufacturers. In recent years, these groups have made recommendations, formed partnerships, and developed a strategic plan for advanced manufacturing. For example: The Manufacturing Council, established in 2004, serves as the principal private sector advisory committee to the Secretary of Commerce on the U.S. manufacturing sector. 
It typically meets several times a year and develops recommendations for the Secretary of Commerce. The America COMPETES Reauthorization Act of 2010 (COMPETES Act) called for a committee to plan and coordinate federal programs and activities in advanced manufacturing R&D, and NSTC designated the Committee on Technology to take on this task. The COMPETES Act required this committee to (1) develop a strategic plan for advanced manufacturing, (2) update the plan every 5 years, and (3) specify and prioritize near-term and long-term R&D objectives, the anticipated time frame for achieving the objectives, and the metrics for use in assessing progress toward the objectives, among other things. In 2011, PCAST recommended that the federal government work to better coordinate efforts to support advanced manufacturing, among other things. In general, advanced manufacturing includes activities that depend on the use and coordination of information, automation, computation, software, sensing, and networking; it can also involve the use of cutting-edge materials and emerging capabilities enabled by the physical and biological sciences. In 2011, DOC established the Advanced Manufacturing National Program Office, which is designed to support public-private partnerships to increase advanced manufacturing. In 2012, OSTP published the National Strategic Plan for Advanced Manufacturing with five objectives: (1) accelerating investment in advanced manufacturing technology; (2) expanding the number of workers with the skills needed by a growing advanced manufacturing sector and making education and training systems more responsive; (3) creating and supporting national and regional public-private partnerships among government, academia, and the private sector; (4) taking a portfolio perspective and coordinating investments across agencies; and (5) increasing total U.S. public and private investment in advanced manufacturing R&D. 
In 2014, the Revitalize American Manufacturing and Innovation Act of 2014 (RAMI Act) was enacted as part of the Consolidated and Further Continuing Appropriations Act, 2015. The RAMI Act amended the requirements for the strategic plan for advanced manufacturing to say that the Committee on Technology shall develop, and update as required, a strategic plan to improve government coordination and provide long-term guidance for federal programs and activities in support of U.S. manufacturing competitiveness, including advanced manufacturing R&D. The RAMI Act also requires that the strategic plan describe the progress made in achieving the objectives from prior strategic plans, including a discussion of why specific objectives were not met. The next update of the strategic plan is due by May 1, 2018. In 2015, NSTC’s Committee on Technology reestablished a Subcommittee on Advanced Manufacturing (SAM) to identify gaps in the federal advanced manufacturing R&D portfolio and policies, identify and evaluate policies and programs that support technology commercialization, and identify and promote opportunities for public-private collaboration, among other things. The SAM’s scope includes support for implementation of recommendations from PCAST as well as support for implementation of and updates to the national strategic plan for advanced manufacturing. According to the SAM’s charter, it was previously known as the Advanced Manufacturing Subcommittee. The SAM’s charter expired on March 1, 2017, and has not been renewed by the Chair of the Committee on Technology. We identified 58 programs in 11 federal agencies that reported they support U.S. manufacturing. Based on our review of survey responses, these programs support manufacturing by fostering innovation, helping manufacturers compete in the global marketplace, helping workers enhance skills and obtain employment, and providing general financing or general business assistance. 
All of the categorizations of the programs in this report are based on our review of program officials’ responses to our survey. Most of the manufacturing-related programs we identified are administered by the Departments of Commerce, Defense, and Energy, or by the National Science Foundation (see table 1). The 58 programs we identified that support manufacturing obligate varying amounts of funds. (See app. II for a full list and description of these programs, including obligation amounts for 2014-16 and estimated proportions of these obligations used to support U.S. manufacturing.) In some cases, agency officials were able to estimate for our survey the amount of funds that supported manufacturing because they considered the entire program to be supporting manufacturing. In other cases, officials did not estimate the amount their programs obligated to support U.S. manufacturing, and instead they provided us with a range. Officials with 21 programs reported that 100 percent of their program’s obligations were used to support manufacturing. Reported obligations for these programs ranged from $750,000 to $203,568,000 in fiscal year 2015. For example, the National Science Foundation’s (NSF) Nanomanufacturing program provides grants that support fundamental research that may lead to the production of useful nano-scale materials, structures, devices, and systems. According to our survey, the program obligated $8,912,533 in fiscal year 2015, and agency officials estimated that all of the obligations supported U.S. manufacturing. Twenty-six other programs reported using funding to support manufacturing—in addition to other sectors—and provided ranges of estimates for the obligations directly supporting manufacturing. 
For example, agency officials told us the Department of Labor’s (DOL) Registered Apprenticeship Program prepares American workers to compete in a global 21st century economy by training millions of them through a network of 21,000 Registered Apprenticeship programs consisting of over 150,000 employers. Also according to agency officials, the program obligated $34,000,000 in fiscal year 2015, and because manufacturing programs make up approximately 20 percent of all Registered Apprenticeship programs managed by federal staff, they estimated that between 10 and 20 percent of federal staff resources are used to assist with the establishment of new manufacturing programs and to support existing manufacturing programs. The remaining 11 programs either did not provide an estimate of their support to manufacturing or reported zero program obligations in fiscal year 2015. Table 2 indicates the obligations, by agency, for the 58 programs reported by agency officials. Among the 58 programs we identified, 30 fostered innovation through their support for basic and applied R&D, based on our analysis of survey responses. (See table 3.) Advocates of targeted innovation policy argue that it is important because the manufacturing sector depends on continually creating new ideas for products and strategies.

Supporting basic R&D by providing grants to educational institutions and others. Ten of the 30 programs support basic R&D—that is, research that is conducted without a specific commercial application but which may spur private sector innovation—based on our analysis of survey responses. The public sector, through government scientific agencies, public universities, and other research institutions, may be well-suited to support basic R&D, as we reported in our July 2013 report on global manufacturing. Basic R&D supports innovation by creating opportunities for technological advances, according to the National Strategic Plan for Advanced Manufacturing. 
Eight of the 10 programs that seek to support basic R&D are directed by NSF. For example, the Design of Engineering Material Systems program seeks to support research to inform the accelerated design and development of materials that can be used in manufacturing processes. In fiscal year 2015, this program obligated $3,223,434, according to agency officials, and provided grants to approximately 12 educational institutions, which supported about 140 students and other individuals conducting research. Another NSF program, the Manufacturing Machines and Equipment program, aims to support basic research in engineering and science that enables the development of new manufacturing machines and equipment, among other things. These machines and equipment are used to manufacture mechanical and electromechanical products. According to our survey data, in fiscal year 2015, this program obligated $10,461,608 and supported approximately 45 educational institutions and 440 students and other individuals.

Supporting applied R&D. Twenty of the 30 programs fostered innovation by supporting applied R&D—that is, research designed to solve practical problems or develop and commercialize new products—based on our analysis of survey responses. Applied R&D helps to bridge the gap between new ideas and commercially viable products or processes. We visited locations where two of these programs—the Hollings Manufacturing Extension Partnership (MEP) program and the Manufacturing USA program—were operating to obtain more specific information on how they were supporting manufacturing: The MEP program, with reported obligations of $144,556,000 in fiscal year 2015, consists of a national network of centers located in all 50 states and Puerto Rico that seeks to help small- and medium-sized manufacturers adopt new technologies and commercialize their products. The centers are funded through a cost-sharing arrangement. 
In addition to federal funding from NIST, the centers receive funding from state and local governments, and from fees charged to manufacturers. Officials from the Illinois center reported they observed production processes at a local manufacturer and made suggestions to improve efficiency. For example, they identified an instance where equipment could be used to free up a worker for other purposes. Center officials also said they had plans to support local manufacturers by providing “market intelligence,” that is, estimating how much the market would value a particular product or feature. The Manufacturing USA program aims to support manufacturing through applied R&D by coordinating a network of institutes where public and private sector stakeholders work together to, among other things, resolve technical barriers to innovative manufacturing technologies or processes. As of 2016, the program supported nine institutes, and five or six additional institutes will be announced in fiscal year 2017, according to agency officials. The Manufacturing USA program is overseen by DOC, and as of 2016, each institute was funded by either the Department of Defense or the Department of Energy, with other agencies planning to fund additional institutes, according to Commerce officials. The Manufacturing USA institute we visited in Knoxville, Tennessee, conducts research related to advanced composites and utilizes a manufacturing demonstration facility that conducts research on additive manufacturing (also known as three-dimensional (3D) printing). Officials at the institute told us that private companies may find it difficult to purchase manufacturing machines, such as 3D printers, due to their size and cost. They said that the combination of resources available through the institute and the Manufacturing Demonstration Facility allows members to use these machines at a fraction of the normal purchase cost to test or demonstrate new manufacturing technologies. (See fig. 3.)
Similarly, the Manufacturing USA institute we visited in Chicago, Illinois, allows its members to use its advanced technology machines to demonstrate the benefits of digital manufacturing, which involves using an integrated computer-based system to improve manufacturing. An institute official noted that allowing its members to use machines at the institute may increase the pace of R&D by enabling manufacturers to quickly create prototypes to test designs. Eleven of the 58 programs assist manufacturers with trade in the global marketplace, based on our analysis of survey responses. These programs engage in activities such as promoting U.S. exports and open trade, providing financial support, providing technical assistance, and enforcing trade laws and supporting policy formulation (see table 4). According to our survey data, DOC administers all but one of these programs, and they devote a majority of their resources and activities to manufacturing. Promoting U.S. exports. Export promotion programs can assist U.S. companies with trade in the global marketplace by helping them overcome barriers to entry in foreign markets, as we have previously reported. Of the 11 programs we identified that help manufacturers with trade in the global marketplace, 6 reported promoting U.S. exports. For example, 1 of these 6 programs, DOC’s Domestic Field program, reported providing services in partnership with the Export-Import Bank and the Small Business Administration (SBA), through a network of U.S. Export Assistance Centers. These centers seek to provide customized assistance to local small- and medium-sized companies, including manufacturers, by helping to identify relevant partners and markets and by assisting with export mechanics and financing options, according to program officials. With 108 offices in 48 states and Puerto Rico, these centers served about 25,000 businesses in fiscal year 2015, according to agency officials.
We visited a center located in Knoxville, Tennessee, which, according to local officials, worked with a company to create an outreach seminar and develop an invitation list for it in Mexico. According to local officials, the seminar helped increase the company’s sales of manufactured medical products to Mexico. The Knoxville center also seeks to assist local companies with the export process by helping to locate distributors for their products, according to local officials. Providing financial support. Based on our survey, 2 of the 11 programs we identified provide various types of financial support to companies or their customers that are unable to obtain financing from the private sector or have been affected by import competition. For example, according to the Export-Import Bank, it supports companies by using three main types of financial products: fixed-rate loans directly to foreign buyers of U.S. goods and services; loan guarantees to commercial lenders to cover repayment risks on foreign buyers’ debt obligations incurred to purchase U.S. exports; and export credit insurance, which supports U.S. exporters selling goods overseas by protecting them against the risk of foreign buyer or other foreign debtor default. According to the Export-Import Bank’s fiscal year 2015 annual report, the Bank authorized 41 loans and 344 loan guarantees, in a range of sectors, including, but not limited to, manufacturing. In one instance, the Export-Import Bank guaranteed commercial loans to support the export of bridges manufactured by a U.S.-based company. According to the Export-Import Bank, these guarantees allowed the manufacturer to gain entry into foreign markets, and they also supported jobs in the United States. Another program we surveyed that seeks to provide financial support, among other things, is the Trade Adjustment Assistance for Firms program, which is administered by DOC.
According to our survey, the program supports a network of trade adjustment assistance centers that work with manufacturers that have been affected by competition from imported products. These centers aim to help manufacturers develop and implement business recovery plans, among other things, and provide matching funds for consultants to work with manufacturers to implement projects in those plans, according to agency officials. Enforcing trade laws and agreements and supporting policy formulation. Based on our analysis of survey responses, DOC administers three programs that enforce trade agreements and support policy formulation and negotiations. One of the three programs, the Industry Trade Policy and Analysis program, seeks to provide analysis and expertise to conduct policy formulation or represent industry members in trade negotiations. These activities aim to help expand exports and bolster foreign direct investments in the United States, which can assist many industries, including manufacturing. Another program, the Trade Enforcement and Compliance Policy and Negotiations program, oversees policies and programs related to negotiating, and pursuing foreign compliance with, trade and investment disciplines in international agreements; the administration of U.S. antidumping and countervailing duty laws; and the negotiation and administration of suspension agreements of U.S. antidumping and countervailing duty investigations. The Antidumping and Countervailing Duty Operations program investigates whether petitions provide sufficient evidence that dumping or unfair subsidization is occurring. Generally, if the results of an investigation indicate that goods are being dumped or unfairly subsidized, and the U.S. International Trade Commission determines that U.S. industry is being injured, DOC will issue an order requiring importers subject to the order to make cash deposits equal to the amount of dumping and/or subsidization found.
Later, Antidumping and Countervailing Duty Operations program officials might conduct an administrative review to determine the actual amount of dumping or subsidization and calculate a final duty rate. According to our analysis of the survey, these activities help manufacturers by promoting fair competition in the marketplace, and by ensuring that U.S. firms are not adversely affected by actions of foreign producers and governments. The Antidumping and Countervailing Duty Operations program reported that it assisted at least 440 U.S. companies and unions in fiscal year 2015. According to agency officials we surveyed, other U.S. manufacturing companies also may have benefited from actions taken by the program. For example, as we found in 2013, trade remedy duties, such as those determined by the Antidumping and Countervailing Duty Operations program, add to the price of foreign products imported into the United States, and they can benefit domestic producers of these products regardless of whether the producers file a petition. Of the 58 programs, we identified 8 in the training policy area that help job seekers enhance skills and obtain employment, based on our analysis of survey responses. (See table 5.) Some of these programs help job seekers bolster their skills in response to technological advances in the manufacturing sector, while others help job seekers find reemployment when laid off from their manufacturing jobs. Most training programs we identified are administered by the Departments of Labor and Education, and while manufacturing workers may be eligible for such training, the programs generally address workforce changes across many sectors of the economy. Supporting the enhancement of job seekers’ skills. Seven of the eight programs help job seekers enhance job skills, based on our analysis of survey responses.
For example, one program, the Trade Adjustment Assistance Community College and Career Training Grant (TAACCCT) program, provided multi-year grants to universities and community colleges to support education and career training programs that aim to help job seekers bolster their skills and obtain employment in higher skilled jobs. This program is administered by the Department of Labor (DOL) and implemented in partnership with the Department of Education (Education), and while it currently oversees existing grants, it has stopped providing new grant funding. In fiscal year 2014, the program obligated $463,994,493, according to agency officials. Community colleges use these grants to develop workforce training programs that are aligned with the needs of local industry. The grants are also used to develop workforce training programs that prepare job seekers for employment in a range of industries, and many community colleges identified manufacturing as a significant industry in their area, according to responses made by agency officials in our survey. For example, an official from a network of community colleges we visited in Illinois that received a TAACCCT grant said that the grant served as a catalyst for discussions between local manufacturers and community colleges. According to this official, these discussions helped community colleges identify topics to include in their curriculum and the skillsets required in the manufacturing sector, and as a result, the grant aimed to ensure that job seekers are adequately prepared for employment in the manufacturing sector. Providing support for job seekers who have been laid off from their job in the manufacturing sector. One of the eight programs, the Trade Adjustment Assistance (TAA) program, administered by DOL, provides benefits and employment services for job seekers who have lost their jobs due to global trade, based on our analysis of the survey. 
According to information provided by agency officials in our survey, job seekers are eligible to participate in the TAA program if they have been adversely affected by increased imports or a shift in production to other countries, among other factors. Job seekers in the TAA program can receive a skills assessment, training, and individual career counseling, among other things, to help train them for new jobs in fields that may require advanced skills. In fiscal year 2015, states obligated $507 million to serve TAA-eligible workers in all industry sectors, according to DOL. Although funding was not targeted specifically for the manufacturing sector, agency officials estimated that approximately 86 percent of certified workers in fiscal year 2015 were from the manufacturing sector. Although most job seekers obtain employment in other sectors after participating in the TAA program, DOL officials reported that about 4,500 job seekers were reemployed in the manufacturing sector after participating in the TAA program in fiscal year 2015. We identified nine programs that provide general financing or general business assistance that cuts across all three policy areas (innovation, trade, and training) or that support the manufacturing of public health products, based on our analysis of survey responses (see table 6). Some of these programs provide general financing through loans or loan guarantees to businesses in all sectors, including manufacturing, or provide direct payments to specific manufacturing industries, such as defense and bioenergy. A few programs also assist the health care sector by supporting the manufacturing of public health products. Providing general financing. Six of the nine programs we identified support manufacturing by providing various types of general financing, including loans, loan guarantees, or direct payments, based on our analysis of survey responses.
For example, according to the survey, the Department of Agriculture’s Business and Industry Guaranteed Loan Program, the SBA’s 7(a) and Certified Development Company (CDC)/504 Loan programs, and the Department of Energy’s Advanced Technology Vehicles Manufacturing Loan program provide loans or loan guarantees, but each agency targets different types of manufacturers and has different eligibility requirements. The Business and Industry Guaranteed Loan Program generally supports rural businesses by issuing loan guarantees, while both SBA programs generally provide loan guarantees to eligible small businesses to finance a wide range of needs, including working capital, revolving credit, asset acquisition, and re-financing. Further, the Advanced Technology Vehicles Manufacturing Loan program supports the manufacturing of advanced technology vehicles by providing direct loans to automotive and component manufacturers. In addition to providing loans or loan guarantees to assist manufacturers, the Defense Production Act Title III program, for example, provides direct payments to manufacturers by purchasing advanced materials and technologies, thereby developing manufacturers’ production capabilities. According to our survey, the program obligated $203,568,000 in fiscal year 2015, all of which supported U.S. manufacturing. Another program, the Bioenergy Program for Advanced Biofuels, supports the production of advanced biofuels by providing payments to advanced biofuel producers. In fiscal year 2015, the program provided payments to 225 manufacturers, according to our survey data. Providing general support to manufacturing across the areas of innovation, trade, and training. We identified one program—the Investing in Manufacturing Communities Partnership (IMCP) program—that supports manufacturing by cutting across all three policy areas.
Based on our survey, as of calendar year 2015, the IMCP program had designated 24 locations across the country as manufacturing communities through a competitive selection process. As part of the IMCP program, locations designated as manufacturing communities receive technical assistance from 12 federal agencies—in addition to DOC—and are eligible for preferential consideration for funding consistent with each agency’s program eligibility requirements and evaluation criteria, according to DOC. According to program officials we met with in Knoxville, an advantage of being designated a manufacturing community is the increased level of collaboration among federal, state, and regional stakeholders, which has helped create new business opportunities, attract businesses to the region, and provide manufacturing training to workers. The IMCP community we visited in Chicago focuses on metal manufacturing, and its members collaborate with other organizations to provide such training to workers. For instance, the Jane Addams Resource Corporation, a training provider, administers a training curriculum that addresses the specific skills required for employment in the manufacturing sector. Officials told us that they provide students with hands-on training and adjusted their training curriculum to match industry trends. For instance, they said that they acquired a robotic welder and used it to train their students on this new, automated technology (see fig. 4). Supporting the manufacturing of public health products. Two of the nine programs we identified support the development, acquisition, and testing of public health supplies, according to our analysis of survey responses. The Department of Health and Human Services (HHS) awarded contracts to establish the Centers for Innovation in Advanced Development and Manufacturing, which is composed of three manufacturing organizations. 
These organizations develop and manufacture medical countermeasures, such as influenza vaccines and protections against chemical, biological, radiological, and nuclear threats. Contractors for another program, the Fill Finish Manufacturing Network, provide packaging support for medical countermeasure products. Among other things, HHS has engaged with the Fill Finish Manufacturing Network to transfer sterile drug products that would be required in a public health emergency, according to our survey data. Nine tax expenditures provide benefits to manufacturers, according to a CRS report and Treasury officials. These tax expenditures provide incentives to manufacturers through tax deductions, deferral, credits, and other methods. These tax expenditures are available to manufacturers, as well as other corporations or individual taxpayers that meet the qualifying requirements. (See table 7.) Tax expenditures are reductions in an individual or corporate taxpayer’s tax liability that are the result of special exemptions and exclusions from taxation, deductions, credits, deferrals of tax liability, or preferential tax rates. They often aim to achieve policy goals similar to those of federal spending programs. There are three general trends in the manufacturing sector: movement toward advanced manufacturing, need for workers with higher skills, and more globalization and competition for U.S. manufacturers, according to our analysis of selected reports and experts we interviewed. The U.S. manufacturing sector is changing from a traditional manufacturing sector (i.e., one based on assembly lines and large numbers of employees) to an advanced manufacturing sector, according to PCAST. PCAST also reported that a highly skilled workforce will be critical to the deployment of an advanced manufacturing sector in the United States. At the same time, the manufacturing sector is becoming more globalized.
PCAST reported that supporting advanced manufacturing innovation in the United States is critical to U.S. global competitiveness. U.S. manufacturers are increasingly competing with manufacturers in other countries as supply chains are becoming global and other countries are providing support for their manufacturing sectors to make them more competitive, according to an expert we spoke to. Fifty-one of the 58 federal programs selected for our review are addressing one or more of the manufacturing trends in different ways, according to our survey of agency officials. Our analysis of survey responses shows that more than two-thirds of programs are addressing the shift toward advanced manufacturing, approximately half of the programs are taking steps to address increased globalization and competition, and fewer than half are addressing the need for a higher-skilled workforce. Table 8 shows how many programs reported addressing each of these trends. Programs reported addressing trends in several ways, including providing funding and resources, sharing information, and promoting coordination. Programs have provided funding and resources to address trends in the manufacturing sector, according to agency officials surveyed. Funding and resources include providing grants for R&D or training programs, targeting research funding through public-private partnerships, or supporting the development and testing of training tools related to new manufacturing technologies. Providing funding and resources was used to address all three of our identified manufacturing trends, though it was most commonly used to address advanced manufacturing, based on our survey results, as shown in table 9. Examples of programs that reported providing funding and resources, according to officials: The Manufacturing USA program was identified by multiple experts we spoke to as a prominent federal effort addressing the advanced manufacturing trend.
Federal agencies funded nine innovation institutes through a public-private partnership model to support R&D projects and workforce development in advanced technologies such as additive manufacturing and digital manufacturing, among others. The Department of Defense’s (DOD) Mentor2 program seeks to ensure that training remains relevant to workforce needs by funding training programs in digital manufacturing that can also be accessed by a wider cross-section of the DOD workforce. The Department of Energy’s (DOE) Concentrating Solar Power and Photovoltaics programs fund research, development, and demonstration of innovative technologies for which manufacturing has largely moved overseas. According to DOE officials, applicants to these programs must demonstrate a commitment to promoting domestic manufacturing to receive grant funding. More specifically, funding applicants must develop a U.S. manufacturing plan that commits to, among other things, investing in new or existing U.S. manufacturing facilities, keeping certain activities such as final assembly in the United States, and supporting a specific number of manufacturing jobs in the United States, according to DOE officials. Programs reported sharing information to address trends in the manufacturing sector, including developing training materials and preparing industry sector reports, among other things. Programs are sharing information to address all three trends, with the largest number addressing advanced manufacturing, based on our survey results, as shown in table 10. Examples of programs that reported sharing information, according to agency officials: Officials with the MEP program told us that they developed training materials on new technologies and created a community of practice to promote information sharing across the national network of MEP Centers.
MEP also collaborated with the NIST Engineering Lab to hold a regional workshop for clients to share information about advances in emerging advanced manufacturing technologies such as collaborative robotics. As part of the IMCP program, officials told us that the Department of Commerce (DOC) has worked with the Departments of Education and Labor to build working groups to prepare panels and content for annual IMCP summits. These groups focused on addressing the workforce skills gap and identifying successful models for skill development, providing technical assistance, and sharing federal funding opportunities to help communities build a workforce with the skills that their employers need. Officials from the International Trade Administration’s Manufacturing program told us that they developed the Top Markets Report series—a collection of sector-specific reports designed to help U.S. exporters compare markets across borders. The reports highlight future export opportunities for advanced manufacturing technologies such as additive manufacturing and smart grid products. Some agency officials reported that their programs promote coordination among stakeholders in the manufacturing community by convening representatives from industry and academia to address manufacturing issues, and reaching out to communities to stimulate manufacturing in a specific region, among other activities. Coordination was used to address all three manufacturing trends, based on our survey results, as shown in table 11. Examples of programs that reported promoting coordination, according to agency officials: DOE’s Clean Energy Manufacturing Initiative has convened stakeholders from industry, academia, and leadership from DOE and its national laboratories to discuss how public and private entities can partner to boost manufacturing competitiveness, train the advanced manufacturing workforce, and promote innovative energy technologies.
It has also partnered with DOE’s Clean Energy Manufacturing Analysis Center, which works with industry and academia to provide research and analyses of factors driving manufacturing strategy in the United States. DOL’s H-1B Technical Skills Training Grant Program has implemented several cross-sector initiatives to promote a higher-skilled workforce. For example, DOL has facilitated a partnership among its Employment and Training Administration, DOE, and Oklahoma State University, among others, to provide training in advanced manufacturing and design tools related to working in the oil and gas industry. The agency has also partnered with the Peralta Community College District’s Laney College to create the Advanced Manufacturing Medical/Biosciences Pipeline for Economic Development, which promotes technology transfer, economic development, and workforce development in medical device and bioscience manufacturing. DOC’s IMCP program has established collaboration among different public and private entities within identified communities to stimulate manufacturing and attract investment from global manufacturers. In particular, the program focuses on regions working across the public, private, and academic sectors to address issues related to workforce development, trade and international investment, and access to capital, among other things. According to our survey, 28 of the 51 programs that reported addressing manufacturing trends also reported facing challenges in addressing the changing manufacturing sector. Most of the challenges reported by these programs related to the trends we identified—advanced manufacturing, higher-skilled workforce, and globalization and competition—with some reporting multiple challenges related to a single trend. As shown in table 12, the most frequently mentioned challenge for programs that are addressing the advanced manufacturing sector was a lack of resources and funding.
For example, most of NSF’s programs, which are primarily basic R&D programs, receive many more competitive proposals than can be funded by the available budget, according to NSF program officials. Challenges mentioned by other programs included a lack of information about advanced technologies, technical challenges, and difficulty in coordinating across agencies. We have identified technical challenges related to advanced manufacturing in our previous work on additive manufacturing. For example, in 2015 we found there were limited materials available with which the technology could be used, and technical limitations on the speed of production and the ability to build items of varying sizes. Experts in additive manufacturing told us that these challenges could be addressed through additional R&D. As the U.S. manufacturing sector becomes more oriented toward advanced manufacturing, the federal government faces challenges in helping workers develop the skills for the advanced manufacturing sector, according to some experts we interviewed. Program officials who told us that they are addressing the need for a higher skilled workforce said that they are primarily facing challenges in keeping training relevant to the current workforce (see table 13). For example, DOD’s Mentor2 program officials reported encountering challenges ensuring not only that training for jobs in new manufacturing technologies is continually being updated, but also that it remains relevant as requirements change over time. Program officials told us their programs face a variety of challenges in an increasingly globalized and competitive manufacturing sector. As shown in table 14, programs addressing the increased globalization trend most frequently mentioned challenges related to promoting domestic production, while also pointing to limited resources and information, as well as issues with supply chain management. 
Among the programs, DOE’s Photovoltaics and Tech-to-Market programs reported encountering challenges because manufacturing of key components has largely moved overseas. Program officials said that the globalized nature of photovoltaic manufacturing makes it challenging to incentivize companies to produce panels in the United States when the much larger scale of overseas production results in lower costs. However, as manufacturing costs further decline and shipping costs become a larger fraction of the total panel cost, there will be opportunities for onshoring of manufacturing and implementation of the U.S. manufacturing plan that promotes domestic production of renewable technologies, according to program officials. Most of the 58 programs reported having performance goals or measures related to the support of manufacturing. In addition, 4 programs had performance evaluations that specifically examined their effects on manufacturing. Apart from agency efforts to assess the performance of individual programs, a federal interagency initiative coordinates activities and assesses progress in the area of advanced manufacturing. However, the agencies in an interagency group have not identified the information needed to determine progress in meeting strategic plan objectives. Forty-four of the 58 programs reported having at least one performance goal or measure related to the support of manufacturing. These goals and measures cover a range of areas, which reflects differences in each program’s mission. Half of the 44 programs reported goals or measures related to advancing scientific knowledge or improving technologies. Fewer programs reported goals or measures in other areas, such as providing technical assistance to manufacturers, enhancing national security or medical countermeasures preparedness, promoting U.S. exports or open trade, or workforce development. Advancing scientific knowledge or improving technologies.
Twenty-two of the 44 programs reported at least one performance goal or measure related to supporting fundamental scientific research in areas such as nanomanufacturing and robotics. Such research has the potential to support manufacturing to the extent that it is subsequently applied to products or manufacturing processes. In addition, several programs reported performance goals or measures related to improving technologies, such as developing an energy-saving technology or alternative fuel to the point that it becomes cost-competitive with existing technologies. Table 15 provides examples of reported performance goals and measures in this area. Providing technical assistance to manufacturers. Four of the 44 programs reported at least one performance goal or measure related to providing technical assistance to manufacturers. These include goals and measures that quantify how assistance provided to manufacturing firms helped them. Table 16 provides examples of reported performance goals and measures in this area. Maintaining national security or medical countermeasures preparedness. Six of the 44 programs reported performance goals or measures pertaining to the maintenance of production and manufacturing capabilities for national defense, or medical countermeasure preparedness for emerging infectious diseases and other threats. Table 17 provides examples of reported performance goals and measures in this area. Promoting U.S. exports or open trade. Six of the 44 programs reported performance goals or measures related to promoting U.S. exports or open trade. Such goals and measures include removing, reducing, and preventing trade barriers. Table 18 provides examples of reported performance goals and measures in this area. Developing the workforce. Seven of the 44 programs reported performance goals or measures related to workforce development, such as developing occupational profiles and obtaining and retaining employment.
Table 19 provides examples of reported performance goals and measures in this area.

Other areas. Three of the 44 programs reported performance goals or measures in other areas, such as holding successful manufacturing-related events and tracking the amount of loans provided for manufacturing.

Of the 58 programs in our survey, 4 specifically estimated their effects on manufacturing with an independent performance evaluation that met our definition of a program evaluation.

Trade Adjustment Assistance for Firms (TAAF, DOC)—In 2012, we examined, among other things, the program's data and performance measures and what they indicated about the program's effectiveness. The review found that although the program provides limited data about outcomes, manufacturing firms that have participated in the program have experienced small, positive, and statistically significant increases in sales.

Hollings Manufacturing Extension Partnership (MEP, DOC)—There have been three evaluations of the program, all conducted by independent groups. Two of the evaluations, conducted in 2012 and 2015 by the Center for Economic Studies (CES) at the U.S. Census Bureau, examined how manufacturing establishments have been affected by MEP's assistance. In 2013, the National Research Council conducted a meta-analysis of previous program evaluations of MEP going back to the early 1990s. Each of the three evaluations found that MEP had some positive effects, such as increased establishment productivity and productivity per worker.

Bioenergy Technologies Office (BETO, DOE)—This program, which supports the manufacturing of bioproducts, has received biennial peer reviews that assess BETO's individual projects and the overall management, performance, and strategic direction of the Office. In 2013 and 2015 peer reviews, external reviewers delivered a positive overall assessment of BETO and validated much of the office's current approach and technical strategy.
The reviews noted that BETO is funding high-impact projects that have the potential to significantly advance the state of technology for the industry and made recommendations for further improvement.

Manufacturing Machines and Equipment (MME, NSF)—The program was evaluated in 2013 by the independent Science and Technology Research Institute. The study concluded that, among other things, the NSF, through the MME program, has positively "contributed to the emergence of additive manufacturing over the last 25 years."

Apart from these efforts to assess the performance of individual programs, there is an interagency subcommittee—the Subcommittee on Advanced Manufacturing (SAM)—which was tasked by the NSTC's Committee on Technology with, among other things, coordinating federal agencies' activities and reporting on the federal government's progress in a particular area—advanced manufacturing. SAM is co-chaired by OSTP. Ten of the 11 agencies that administer programs in this review are represented on the SAM and, as we discussed earlier regarding agencies' actions to address manufacturing trends, 42 of the 58 programs in this review reported addressing advanced manufacturing.

SAM provides support for implementation of the National Strategic Plan for Advanced Manufacturing, a plan developed in 2012 in response to the America COMPETES Reauthorization Act of 2010. The strategic plan included five objectives (see sidebar). The strategic plan also included suggested indicators or metrics for tracking progress over the short term and long term and identified the federal agencies that should implement actions to achieve them. Federal agencies identified in the strategic plan are to implement actions to achieve one or more of the plan's objectives. For example, the fourth objective of the plan is to optimize the federal government's advanced manufacturing investment by taking a portfolio perspective across agencies and adjusting accordingly.
To achieve this objective, the plan lists actions that agencies can take, including (1) coordinating federal agency investments in the knowledge and capabilities shared across the manufacturing sector and (2) targeting and balancing investments in advanced materials, broad production technology platforms, advanced manufacturing processes, and design and data infrastructure. The strategic plan then identifies the Departments of Commerce, Defense, and Energy and the National Science Foundation as the agencies responsible for implementing these actions. The plan notes that the federal government has current investments in advanced manufacturing R&D, as well as in plants and equipment, that help to position promising technologies for broad adoption and commercialization or to meet certain essential national security needs. The plan envisions that coordinating the federal government's portfolio of these investments will increase the global competitiveness of U.S. manufacturing and help to create a fertile domestic environment for innovation. The strategic plan identifies potential measures that could be used to gauge progress toward these objectives but does not specify what information agencies should submit. For example, the plan identifies short-term measures for achieving objective four: (1) development and implementation of a framework for managing the whole-of-government portfolio and (2) the number and scale of multi-agency advanced manufacturing funding solicitations. Long-term measures include (1) the balance of federal advanced manufacturing R&D investment across portfolio dimensions, including basic research, applied research, demonstration facilities, and others, and (2) accelerated time-to-market of new advanced manufacturing processes and products. While the plan identifies these short- and long-term metrics, it does not include reporting requirements for agencies on these metrics to measure their progress toward the objective.
The RAMI Act requires NSTC's Committee on Technology to periodically update the strategic plan for advanced manufacturing and to describe the progress made in achieving the objectives from prior strategic plans, including a discussion of why specific objectives were not met. Under its charter, one of the SAM's purposes is to provide support for implementation of and updates to the strategic plan for advanced manufacturing. As required by the RAMI Act, the SAM plans to update the strategic plan by May 1, 2018, including reporting on the progress made in achieving the objectives from prior strategic plans. However, the SAM has not identified the information it will collect from federal agencies to determine the extent to which the strategic plan objectives are being achieved.

One of the key practices for enhancing and sustaining interagency collaborative efforts is developing mechanisms to monitor, evaluate, and report results. As we previously reported, federal agencies engaged in collaborative efforts need to create the means to monitor and evaluate their efforts to enable them to identify areas for improvement. As part of this effort, agencies should consider whether there is a way to track and monitor progress toward short- and long-term outcomes. According to SAM officials, the information to be collected to evaluate progress in achieving the objectives of the strategic plan has not yet been determined.

The SAM's role, according to its charter, is to serve as a forum for information sharing, collaboration, and consensus-building among agencies regarding federal policy, programs, and budget guidance for advanced manufacturing. The SAM generally holds two to three meetings per year with between 30 and 50 officials across 13 different agencies, where officials discuss their programs and investments, and the SAM helps to put these efforts into a broader context, according to SAM officials.
While these discussions include leading practices for measuring program effectiveness, according to SAM officials, the SAM's role is not to provide top-down direction to agencies regarding how to measure effectiveness. While the SAM's role under its charter includes collaboration and consensus-building among agencies, OSTP, as a co-chair of the subcommittee, has not worked with SAM member agencies to specify the information needed to report progress in meeting strategic plan objectives, which is inconsistent with a key practice for interagency collaboration. Without specifying the information it will collect from federal agencies, the SAM may lack the consistent, comprehensive information that would help it fully report on progress in achieving the objectives of the National Strategic Plan for Advanced Manufacturing.

The health of the U.S. manufacturing sector has long been a concern, and the vast majority of the 58 programs we identified that support manufacturing reported taking steps to address trends in the sector, most prominently the movement toward advanced manufacturing. Federal law requires a government-wide strategic plan for advanced manufacturing to improve government coordination and provide long-term guidance in support of manufacturing competitiveness. The next update of the strategic plan, which is required by 2018, must describe the progress made in achieving the objectives of the plan and include a discussion of why specific objectives were not met. The SAM has worked to facilitate federal agency collaboration on advanced manufacturing and plans to report in 2018 on progress in achieving the strategic objectives. However, OSTP has not worked with SAM member agencies to identify the information it will collect from federal agencies to determine the extent to which the strategic objectives are being achieved.
Consistent with a key practice for interagency collaboration, identifying the information needed from federal agencies would better position the federal government to report consistent, comprehensive information on the progress in achieving the objectives of the National Strategic Plan for Advanced Manufacturing.

To enhance the ability of the Executive Office of the President to implement RAMI Act requirements related to reporting on advanced manufacturing, we recommend that the Director of the Office of Science and Technology Policy, working through the National Science and Technology Council and agency leadership, as appropriate, identify the information they will collect from federal agencies to determine the extent to which the objectives outlined in the National Strategic Plan for Advanced Manufacturing are being achieved.

We provided a draft of this report to USDA, DOC, DOD, DOE, DOL, Education, EPA, the Export-Import Bank, HHS, NSF, OSTP, SBA, and the Treasury for review and comment. We received the following comments:

USDA did not provide any comments.
DOC provided technical comments, which we incorporated as appropriate.
DOD's GAO Liaison stated via e-mail that DOD concurred and had no comments on the report.
DOE, DOL, and Education provided technical comments, which we incorporated as appropriate.
EPA stated that it had no comments on the report.
The Export-Import Bank, HHS, and NSF provided technical comments, which we incorporated as appropriate.
OSTP's General Counsel provided comments via e-mail, which we discuss below.
SBA provided technical comments, which we incorporated as appropriate.
The Treasury stated that it had no comments on the report.

OSTP did not state whether it agreed or disagreed with the recommendation in our draft report but raised an issue related to the recommendation and suggested several revisions to its wording.
Our draft report had recommended that OSTP, working through relevant agencies, develop a mechanism to collect information from federal agencies needed to determine the extent to which the objectives in the National Strategic Plan for Advanced Manufacturing are being achieved. OSTP stated that there is such a mechanism. According to OSTP, NSTC’s subcommittees coordinate and agree on how to measure progress toward strategic plan goals and establish mechanisms to monitor, evaluate, and report results. OSTP cited the SAM’s charter, which specifies that a function of the SAM is to provide periodic updates on the implementation of the strategic plan, among other things, to the Committee on Technology and the Assistant to the President for Science and Technology. While we agree that the SAM charter provides for periodic updates on the implementation of the strategic plan, the focus of the draft recommendation was on the need to identify the specific information to be collected from federal agencies to report on progress made in achieving the objectives of the 2012 strategic plan. Without identifying such information, the federal government may not be prepared to report consistent and comprehensive information on progress in meeting strategic plan goals. In response to OSTP’s comments, we modified the report and the wording of our recommendation to be more precise by deleting the reference to developing a “mechanism” for collecting information and focusing on the need to identify the information to be collected. OSTP also suggested several revisions to the wording of the recommendation, which we are not making for the following reasons: OSTP suggested directing the recommendation to the Assistant to the President for Science and Technology rather than the Director of the Office of Science and Technology Policy. 
We directed the recommendation to the Director of the Office of Science and Technology Policy because that is the office responsible under federal law for establishing the Committee on Technology, which is required to update the strategic plan. OSTP also suggested revising the recommendation to specifically mention the SAM. We did not specifically mention the SAM because its charter expired March 1, 2017, and the legal requirement to update the strategic plan is the responsibility of the Committee on Technology, established by the Director of OSTP. Also, while OSTP’s General Counsel indicated that, as of March 2017, an extension to the charter was being considered, it was not clear whether any extension would include the period of time in which the update to the strategic plan is required to be completed under the RAMI Act. OSTP also suggested revising the recommendation to focus on the extent to which the objectives of the Advanced Manufacturing Partnership (AMP) recommendations are being achieved in periodic updates to the implementation of the National Strategic Plan for Advanced Manufacturing. The AMP recommendations are sets of recommendations proposed in a series of reports by PCAST. These recommendations were not all covered in the scope of our report. Instead, the focus of our recommendation was on reporting on the progress in achieving the objectives of the strategic plan, as required by the RAMI Act. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the appropriate congressional committees; the Director of the Office of Science and Technology Policy; the Secretaries of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Labor, and the Treasury; the Director of the National Science Foundation; the Administrators of the Environmental Protection Agency and the Small Business Administration; the Acting Chairman of the U.S. Export-Import Bank; and other interested parties. In addition, the report will be available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Andrew Sherrill at (202) 512-7215 or sherrilla@gao.gov or John Neumann at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to examine (1) how selected federal programs provide support to U.S. manufacturing; (2) how selected federal tax expenditures provide support to U.S. manufacturing; (3) how, if at all, selected federal programs address manufacturing trends and what, if any, challenges they face; and (4) the extent to which federal agencies measure performance and assess effectiveness in supporting manufacturing generally, and advanced manufacturing specifically. To address these objectives, we first identified federal agencies (i.e., any federal organization) that administered programs that support the manufacturing sector. We defined "support the manufacturing sector" broadly, including support for U.S. and foreign manufacturers that manufacture in the United States; programs that support U.S.
manufacturers who manufacture or export their goods overseas; and programs that train workers who lose their jobs in manufacturing, whether they are being trained for other jobs in manufacturing or in another sector of the economy. To identify these agencies, we reviewed prior GAO and Congressional Research Service (CRS) reports, the President's 2016 budget, and the Catalog of Federal Domestic Assistance (CFDA). We identified 11 agencies that administered programs that support the manufacturing sector. We initially met with officials from the Departments of Commerce, Defense, and Energy. In our meetings with these three agencies, and in our interviews with experts at CRS and the Information Technology and Innovation Foundation, we asked officials if these 11 agencies were the main agencies that administer programs to support U.S. manufacturing and if there were any other agencies that did so. After considering their input, we added 4 more agencies, bringing the total to 15 agencies that we contacted: the Departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Housing and Urban Development, Labor, Transportation, and the Treasury; the National Aeronautics and Space Administration; the National Science Foundation; the Environmental Protection Agency; the Export-Import Bank; and the Small Business Administration. After contacting Department of Housing and Urban Development officials, we eliminated the agency from consideration because the officials informed us that the sole manufacturing program we had identified for the agency no longer existed. We then developed a list of programs administered by the 14 remaining agencies that appeared to directly target or indirectly support U.S. manufacturing. To identify programs, we searched prior GAO and CRS reports, program inventories developed by agencies, CFDA, www.manufacturing.gov, and agency web sites.
We conducted keyword searches using words such as "manufacturing" and "manufacturers" to compile the initial list of programs. We then contacted each of the 14 agencies with an initial list of potential programs administered by that agency. We asked the officials if any programs administered by their agency should be added to or removed from the list. We reviewed the agencies' input and determined whether the programs should be included in this review. To be selected as a program that supports manufacturing, the program had to meet the following criteria: (1) have an identifiable focus on manufacturing, (2) be operational in fiscal year 2014 and at the time of selection in 2015, and (3) not be part of a larger program that was selected. We then confirmed with each of the agencies the programs identified for their agency. Based on the agencies' input and our application of the criteria, we identified 58 programs administered by 11 agencies that met the criteria. To identify tax expenditures, we reviewed a 2013 CRS report titled "Federal Tax Benefits for Manufacturing: Current Law, Legislation in the 113th Congress, and Arguments For and Against Federal Assistance" and spoke with experts from CRS and officials from the Department of the Treasury (Treasury). We asked Treasury officials if any tax expenditures should be added to or removed from the list. We then reviewed the agency's input and determined whether each tax expenditure should be included in this review. To be selected as a tax expenditure that supports manufacturing, it had to meet the following criteria: (1) have a benefit for manufacturing and (2) be operational in fiscal year 2014 and at the time of selection in 2015. Based on the agency's input and internal discussions, we selected nine tax expenditures that met the criteria. We then reviewed reports from the Joint Committee on Taxation (JCT) to obtain estimates of federal revenue forgone for each of the tax expenditures.
We reviewed, but did not verify, the procedures reported by JCT to estimate the magnitude of revenues forgone through tax expenditures. We administered a web-based survey to agency officials for these programs to collect, among other data, descriptive information, budget and participation data, and information on efforts to address trends in manufacturing. To minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same, we conducted pretests with six programs at six agencies, and we revised the survey based on pretest feedback. We conducted an additional pretest with two of the programs to ensure that revised questions used in the survey were understandable. In February 2016, we notified agency officials that the survey was available online. We also made telephone calls to officials and sent them reminder messages, as necessary, to ensure their survey response. We analyzed and grouped the data we collected to describe the 58 programs and provide information on their size and scope. We did not conduct a legal analysis to confirm the various descriptions of the programs in this report, including information on their budgetary obligations, program goals, or intent. Further, we did not review or analyze agencies’ financial data or materials prepared by the agencies in connection with the annual budget and appropriations process in developing this report. We used standard descriptive statistics to analyze responses to the survey. Because this was not a sample survey, there are no sampling errors. To minimize other types of errors, commonly referred to as nonsampling errors, and to enhance data quality, we employed recognized survey design practices in the development of the survey and in the collection, processing, and analysis of the survey data. 
On the basis of our application of recognized survey design practices and follow-up procedures, we determined that the data were of sufficient quality for our purposes. For the obligations data we used in this report, our survey asked program officials to provide their program's total obligations (federal amount only) for fiscal years 2014, 2015, and 2016. The survey also asked programs to estimate the proportion of obligations the program used to support U.S. manufacturing and to explain how the proportions were determined. To assess the reliability of these data, we took the following steps. First, the survey question on the proportion of obligations that support U.S. manufacturing allowed programs to provide ranges or other approximate percentages if they did not know the precise numbers. Second, because we asked the programs to explain how they determined the proportion of obligations that supported U.S. manufacturing, we were able to understand, in a limited way, the reliability and validity of the proportions provided. In general, the programs that provided numbers and proportions also provided explanations that suggested the numbers were broadly reliable and accurate. Third, we checked the obligations data for a number of programs against publicly available budget data; the funding amounts provided by programs in our survey generally corresponded well with these data. Fourth, we performed common data testing steps to assess the reliability of the data, including identifying outliers and missing data. Fifth, our questions requesting obligations data were framed in terms of the programs' overall funding and budgets, rather than asking specifically for the amounts related to manufacturing, making it easier for programs to provide accurate numbers based on existing data sources. We used these data to provide a broad understanding of funding levels across the population of 58 programs in this report.
More specifically, we report these numbers in two primary ways. First, in appendix II, we provide total obligations and the proportion going to manufacturing for all programs reporting these numbers. Second, in tables 2 through 6, we provide counts of the numbers of programs and aggregated obligation totals for each agency in categories of programs. We aggregated by summing the totals reported for these programs and categories by agency. We reported non-rounded obligations for programs mentioned in the text of the report, and we rounded numbers in the tables and in appendix II. Based on the data reliability steps described above, we determined that these data were sufficiently reliable for the purposes of this report.

To observe and obtain information on how selected programs support manufacturing, we conducted site visits to two locations, Chicago, Illinois, and Knoxville, Tennessee, in October and November 2015. We selected these locations based on the following criteria: each was a city where (1) a Manufacturing USA institute was currently operating, (2) an Investing in Manufacturing Communities Partnership (IMCP) community was designated, and (3) a Hollings Manufacturing Extension Partnership (MEP) Center was currently operating. For variety in administering agencies, we determined that one city we selected had to have a Manufacturing USA institute that was overseen by the Department of Defense (DOD), and the other had to have an institute that was overseen by the Department of Energy (DOE). To obtain information across multiple policy areas, the IMCP community had to have workforce training and international trade components in its program to meet our criteria. At the time of our selection, there were six currently operating manufacturing institutes, each overseen by either DOD or DOE. During each site visit, we visited and toured the institute, interviewed IMCP and MEP officials, visited two manufacturers, and toured their facilities.
For additional information on programs that train workers in manufacturing, we interviewed officials with two other organizations in Chicago, and we toured training facilities for one of them, the Jane Addams Resource Corporation (see table 20). The information we obtained on our site visits is not generalizable to all locations where these programs operated.

We identified manufacturing trends by reviewing reports addressing manufacturing policy in the United States produced by the White House's Office of Science and Technology Policy (OSTP) and interviewing experts in the manufacturing sector. The reports were produced by the President's Council of Advisors on Science and Technology (PCAST) and the National Science and Technology Council (NSTC) and written by experts in industry, academia, and government agencies tasked with helping the Executive Branch develop manufacturing policy in the United States. Specifically, PCAST and NSTC address the future of federal science and technology investments and make recommendations to improve policy in science, technology, and innovation. As table 21 shows, all of the reports we reviewed identified the advanced manufacturing capabilities and higher skilled jobs trends, while four of the five reports mentioned the increasingly complex and globalized nature of manufacturing.

We interviewed seven experts in the manufacturing sector, and they agreed that these were the three key manufacturing trends. To identify these experts, we selected an initial group after attending the Manufacturing and Innovation: Making Value for America webinar on January 19, 2016. The webinar enabled us to identify three experts from the National Academy of Engineering (NAE) with specific knowledge of manufacturing trends. After speaking to the NAE experts, we asked if there were other experts in the sector that we should speak to.
This afforded us the opportunity to identify additional experts from the private sector, non-profit organizations, and agency officials from the Department of Commerce and the National Science Foundation. The views of these experts cannot be generalized, but they provided additional perspectives. To determine the extent to which programs are addressing manufacturing trends, we included survey questions asking which trends programs are addressing, steps they have taken to address them within the past 3 years, and challenges they face in doing so. To analyze programs’ survey responses, we categorized program responses based on what trends they were addressing, which allowed us to identify how many programs are taking steps to address each trend. We further categorized program responses by analyzing steps programs are taking to address each trend and organized all program responses into three broad strategies: (1) providing funding and resources, (2) sharing information, and (3) promoting coordination. To determine challenges programs face in addressing manufacturing trends, we analyzed program responses to our survey questions related to challenges and determined how many programs reported challenges related to each of the trends. To further categorize the types of challenges programs face, we organized all program responses into three broad challenge categories: (1) lack of funding and resources, (2) lack of information, and (3) coordination challenges. To ensure the consistency and accuracy of this analysis, one analyst conducted the primary categorization and a second analyst reviewed that categorization and raised questions about particular results. The two analysts then discussed and resolved the questions. Additionally, a GAO social science methodologist with expertise in qualitative data analysis reviewed the underlying documentation for the analysis as a broad check on its accuracy and consistency. 
To examine the extent to which federal agencies measure performance and assess effectiveness in supporting manufacturing generally and advanced manufacturing specifically, we asked agencies in our survey what manufacturing-related performance goals and metrics they used and what program evaluations had been conducted or planned in the past 5 years to assess any impact that the program had on the U.S. manufacturing sector. To analyze the evaluations identified by program officials, we obtained copies of the evaluations and reviewed them to determine whether they met GAO's definition of a program evaluation and whether they specifically evaluated the program's effect on manufacturing. To analyze federal efforts to assess the effectiveness of support for advanced manufacturing specifically, we reviewed relevant federal legislation and prior GAO reports on interagency collaboration. We also interviewed officials with the Subcommittee on Advanced Manufacturing, which OSTP co-chairs, about their efforts to coordinate advanced manufacturing efforts across the federal government and evaluate progress in implementing the National Strategic Plan for Advanced Manufacturing.

This appendix provides information about the 58 programs that support U.S. manufacturing selected for our review. All of the information about budgetary obligations, descriptions, and activities (including any statements about the program goals or intent) is based on information provided by agency officials. In some cases, agency officials may have estimated or rounded their program's obligations for our survey. See appendix I for more information about our survey. We did not conduct an independent analysis of this information. Further, we did not review agencies' financial data or materials prepared by the agencies in connection with the annual budget and appropriations process in developing this report. The programs are organized by the federal agency that administers them, listed alphabetically.
Program Information Reported in Survey Year program created: 2008 Program category: General financing Program description: The Bioenergy Program for Advanced Biofuels supports expanded production of advanced biofuels by awarding payments to eligible advanced biofuel producers, thereby promoting sustainable economic development in rural America. Awards are based on producers’ requests and the amount of biofuel they produce. Examples of awardees include producers of biodiesel from canola oil, greases, and soybean oil; ethanol from milo or sorghum; electricity from an on-farm anaerobic digester that uses animal waste as the feedstock; and manufacturing facilities that produce wood pellets. Program activities that support manufacturing: Payments: The program awards payments to manufacturing facilities that produce advanced biofuels. Manufacturing Trends Addressed by the Program The Bioenergy Program for Advanced Biofuels did not report it addressed GAO’s identified manufacturing trends (advanced manufacturing, move to a higher skilled workforce, and increased globalization and competition). Program Information Reported in Survey Year program created: 1972 Program category: General financing Program description: The Business and Industry Guaranteed Loan Program seeks to improve the economic health of rural communities by bolstering the existing private credit structure by guaranteeing loans for rural businesses, which enables private lenders to provide more affordable financing for businesses in eligible rural areas. Examples include loans for purchasing and developing land; purchasing equipment, machinery, or other supplies; and business repair, modernization, or development. Program activities that support manufacturing: Loans: The program issues Loan Note Guarantees to private lenders enabling rural businesses, including manufacturers, to obtain loans. The loans provide better rates and terms to the businesses that receive them. 
Manufacturing Trends Addressed by the Program The Business and Industry Guaranteed Loan Program did not report it addressed GAO's identified manufacturing trends (advanced manufacturing, move to a higher skilled workforce, and increased globalization and competition). The Business and Industry Guaranteed Loan Program provides loans, but this role does not directly relate to manufacturing trends, and the program's mission is much broader than just manufacturing. Program Information Reported in Survey Year program created: 1993 Program category: Trade, Export promotion Program description: The Advocacy Center serves as the primary interagency coordinator across 14 different agencies to execute a "whole of government" approach to help U.S. exporters win business overseas. According to program officials, the center coordinates federal resources to assist U.S. businesses as they compete against foreign firms for specific foreign government contracts. The Center helps support and retain U.S. jobs by promoting U.S. exports and is essential to the success of initiatives from the International Trade Administration's (ITA) Global Markets unit. Global Markets overseas staff counsel companies on advocacy, perform and coordinate advocacy efforts overseas, and provide key market intelligence that helps determine national interest and advocacy campaigns. Global Markets domestic staff reach out to clients and counsel companies on advocacy services. Program activities that support manufacturing: Advocates: According to program officials, the center offers government-to-government support for qualified U.S. business interests and acts as a counterweight to foreign governments that advocate for their national businesses. The intent of the advocacy, according to these officials, is to promote fairness in foreign markets. Engagement by U.S. 
government officials with overseas governments may take the form of official correspondence, focused or in-person meetings, talking points in bilateral meetings or dialogues, and/or press releases or meetings with the foreign press. Conducts market intelligence: The company seeking advocacy fills out a questionnaire to provide the Center with details of the project description, type of assistance requested, foreign government decision makers, and timeline. The Advocacy Center then verifies the information with the assistance and concurrence of the U.S. mission abroad. Verifies that companies adhere to the Foreign Corrupt Practices Act: According to program officials, companies seeking advocacy must sign an anti-bribery agreement attesting that they adhere to the Foreign Corrupt Practices Act. Conducts due diligence: The Center conducts due diligence on the company seeking assistance to confirm that the company can conduct the service or provide the products needed to successfully compete for and complete the foreign project. Makes a national interest determination: When the U.S. business contribution is less than 50 percent of the total value of the project, the following is considered: U.S. materials and equipment content, U.S. labor content, contributions to the U.S. technology base, and potential for follow-on business benefiting the U.S. economy. Participates in interagency task force: The Secretary of Commerce chairs the interagency task force on commercial advocacy, which comprises 15 other federal agencies. The purpose of the task force is to provide increased support for U.S. exporters beyond traditional commercial advocacy and take a "whole of government" approach. Program Information Reported in Survey Year program created: Not reported Program category: Trade, Enforce trade laws and agreements and support policy formulation Program description: The Antidumping and Countervailing Duty Operations program enforces the U.S. 
trade laws by conducting investigations, administrative reviews, new shipper reviews, sunset reviews, changed circumstances reviews, and scope and anti-circumvention inquiries. The program also assists in the defense of determinations made by the Enforcement and Compliance office in U.S. courts, the World Trade Organization, and North American Free Trade Agreement dispute settlement panels, according to program officials. The program conducts investigations in response to U.S. industry petitions alleging that imports are being dumped or unfairly subsidized and that those imports are materially injuring, or threatening material injury to, competing U.S. industry. Program activities that support manufacturing: Administration and enforcement of the antidumping and countervailing duty laws: The Enforcement and Compliance Office investigates U.S. firms' claims that they are being injured by dumped or unfairly subsidized imports. If the final result of an investigation is affirmative and the International Trade Commission makes a final finding of injury, the Enforcement and Compliance Office will impose an order requiring importers of the merchandise to make cash deposits equal to the estimated amount of dumping and/or subsidization. Program Information Reported in Survey Year program created: 1980 Program category: Trade, Export promotion Program description: The Domestic Field program includes a network of 108 U.S. Export Assistance Centers across the United States that focus primarily on the exporting needs of small and medium-sized businesses. At Assistance Centers, Domestic Field trade specialists help identify opportunities for U.S. exporters, clarify foreign regulations and standards, provide support to clients who have business disputes abroad or encounter foreign market barriers, and counsel U.S. companies on the best strategies to succeed in overseas markets. The Domestic Field program also plays a primary role in educating U.S. 
firms about their rights, obligations, and opportunities in foreign markets and about the assistance the International Trade Administration can provide in resolving their trade problems. Working with other International Trade Administration programs, the Domestic Field program organizes educational outreach programs for U.S. businesses and industry associations across the country. Program activities that support manufacturing: Providing export counseling to U.S. exporters or companies interested in exporting: The network of U.S. Export Assistance Centers includes 108 offices in 48 states and Puerto Rico that work with U.S. companies, including manufacturers, on a one-on-one basis to focus on their exporting needs and plans. This often involves customized assistance, including market identification, export mechanics, financing options, and partner identification. Fee-based services: These services include matchmaking and vetting services, as well as single-company promotions, to help connect U.S. manufacturers with opportunities overseas. Trade show assistance: This includes on-site counseling and facilitation of business-to-business meetings. Export training: The program organizes trade promotion conferences as well as numerous webinars and seminars on all aspects of exporting for U.S. manufacturers. The Domestic Field program also partners with the Manufacturing Extension Partnership program to deliver the ExporTech export training program in locations across the country. This intensive export education program is delivered to five to seven manufacturers at a time, with the end goal being the development of an actionable exporting plan for each manufacturer. Program Information Reported in Survey Year program created: 1934 Program category: Trade, Export promotion Program description: According to program officials, the Foreign-Trade Zones program helps encourage commercial activity and value added at U.S. 
manufacturing and distribution facilities that compete with foreign alternatives by allowing delayed or reduced duty payments on foreign merchandise transferred from the zones, as well as other savings. Officials also stated that the Foreign-Trade Zones program can reduce costs through delayed or reduced duties, allow special entry procedures, and encourage activity closer to market. Reducing costs through the program can lead to more competitive U.S. operations, thereby helping to maintain U.S.-based activities and jobs. Program staff serve as the operational arm of the Foreign-Trade Zone Board, an interagency body chaired by the Secretary of Commerce. The Board was established to license and regulate foreign trade zones, and licenses primarily public or non-profit corporations to administer zones on a local level. Program activities that support manufacturing: Processing requests for manufacturing (production) authority: According to program officials, under the Board’s regulations, companies may conduct manufacturing operations within foreign trade zones if they have obtained approval in advance. Officials stated that the Board approves requests for manufacturing authority under procedures and criteria delineated in the Board’s regulations. Pre-application counseling for manufacturing applicants: Companies may conduct manufacturing operations within foreign-trade zones if they have obtained approval in advance from the Board. Specifically, the Board staff responds to questions presented via telephone or e-mail and reviews and provides feedback on draft requests submitted by potential applicants. Education and outreach on potential for Foreign-Trade Zone manufacturing: Through information available on the Board’s website and shared at industry events, the Board staff conducts education and outreach activities regarding potential benefits to manufacturers under the Foreign Trade Zone program. 
Manufacturing Trends Addressed by the Program The mission of the Foreign-Trade Zones program is broader than just manufacturing. Additionally, program officials explained that although not specifically targeting the manufacturing trends GAO identified, in 2012 the Foreign-Trade Zone Board completed a total overhaul of its regulations. They further stated that although the revised regulations dramatically streamline procedures for potential program users—including potential users that could fall under one or more of the manufacturing trends—that change was effective as of April 2012 and, therefore, fell outside GAO's 3-year scope of review. While the program may have addressed the three manufacturing trends in recent years, there is nothing explicit in the program design that is intended to address or affect these developments. Program Information Reported in Survey Year program created: 1988 Program category: Innovation, Applied research and development Program description: The Hollings Manufacturing Extension Partnership is a federal-state-industry partnership that provides U.S. manufacturers with access to technologies, resources, and industry experts. The program consists of Manufacturing Extension Partnership Centers located across the country that work directly with local manufacturing communities to strengthen their competitiveness. Funding for the Centers is a cost-sharing arrangement consisting of support from federal, state, and local governments and fees charged to the manufacturing clients for services provided by the Centers. Program activities that support manufacturing: Training: Training may be conducted in conjunction with local community colleges and technical schools. Support for Manufacturing Day: The program supports the annual Manufacturing Day to raise awareness for manufacturing and attract younger workers to manufacturing jobs. Export assistance: The program partners with the U.S. 
Export Assistance Centers of the Department of Commerce (DOC) to deliver the ExporTech export training and consultation program in locations across the country. This intensive export education program includes peer counseling and is delivered to five to seven manufacturers at a time to develop an exporting plan for each manufacturer, according to agency officials. Program Information Reported in Survey Year program created: 2013 Program category: Trade, Enforce trade laws and agreements and support policy formulation Program description: The Industry Trade Policy and Analysis program supports U.S. government trade policy formulation and negotiations by providing the trade and economic analysis and issue expertise needed to expand exports and foreign direct investment in the United States. The objectives are to benefit U.S. businesses and provide new opportunities to expand U.S. exports of goods and services. The program also serves as the primary source of trade data within ITA and is responsible for undertaking cross-sectoral economic analysis, such as estimating the number of jobs supported annually by exports. Program activities that support manufacturing: Represent U.S. industry in trade negotiations: The program maximizes U.S. gains in trade negotiations by evaluating industry positions and foreign market access offers and recommending policy actions that best support the interests of U.S. industry. Protect U.S. intellectual property: The program advances U.S. commercial interests on international intellectual property laws, policies, and practices and assists U.S. companies in overcoming intellectual property-related trade barriers. Standards policy: The program addresses standards-related market access barriers and leads ITA's involvement in standards policy issues, which may include those important to U.S. manufacturers. 
Economic analysis: The program evaluates potential economic effects of statutory and regulatory programs on trade-dependent industries, including those engaged in manufacturing. Data programs: The program provides publicly accessible online trade and tariff information to help companies (including manufacturers) assess the market opportunities available for their products. Manufacturing Trends Addressed by the Program Many of the issues that the Industry Trade Policy and Analysis program covers touch on the three manufacturing trends GAO identified, but little of what the program produces is specific to U.S. manufacturing. The program is targeted toward exporters, although these exporters are neither explicitly "U.S." companies nor explicitly manufacturing firms. The program does not isolate the impact that it has on these trends, as its efforts are more broadly focused. Program Information Reported in Survey Year program created: 1980 Program category: Trade, Export promotion Program description: The International Field program includes foreign service officers, locally engaged staff, and headquarters-based experts who advance U.S. commercial interests, identify opportunities for U.S. exports, clarify local regulations and standards, engage foreign government officials in commercial diplomacy to help resolve market access and/or trade compliance problems affecting U.S. exporters or investors, and counsel companies on the best strategies to succeed in overseas markets. The program assists companies of all sizes to identify target markets for entry or expansion and develop effective strategies to succeed in those markets. This includes bringing foreign buyers and U.S. companies together through business matchmaking services, promotional support and representation at trade shows and fairs, trade events, product launches, and technical seminars. 
Program activities that support manufacturing: Technical assistance: The program provides pre-export logistics information and assistance, matchmaking with foreign companies, and on-the-ground advocacy on market access and/or compliance issues. Department of Commerce, Economic Development Administration Program Funding: The program is overseen by DOC, but DOC does not obligate additional funds specifically for the program. Instead, the 12 federal agencies participating in the program and DOC provide funds to selected manufacturing communities through existing funding sources. Program Information Reported in Survey Year program created: 2013 Program category: General support across the areas of innovation, trade, and training Program description: According to program officials, the Investing in Manufacturing Communities Partnership (IMCP) is designed to strengthen communities' ability to attract inbound investment by fostering regional collaboration and designating manufacturing communities to receive preferential consideration for federal funding, among other things. DOC has not allocated ongoing grant funding for this program because it was designed to enhance coordination and strategic investment of existing funding and technical assistance to manufacturing communities, according to agency officials. Benefits of the program include increased capacity for U.S. innovation and manufacturing, higher skills for the American workforce, attraction and retention of small businesses that serve as suppliers, and expanded opportunities for U.S. exports. Program activities that support manufacturing: Providing grants: In September 2013, IMCP awarded $7 million in planning grants to 44 communities nationwide to support the development of their strategies. Under IMCP, 12 federal agencies and DOC, with more than $1 billion in grant funding, can use the awardees' plans to make targeted investments to strengthen regional manufacturing. Technical assistance: To advance U.S. 
manufacturing and provide all communities with tools for success, DOC and the interagency team create strategic programming and technical assistance opportunities. Community Mentorship Program: The mentorship program aims to cultivate relationships among communities, create mutual vested interests, and encourage ownership in each other's success. IMCP national summit: DOC hosts an annual manufacturing summit for manufacturing communities. Federal liaison: The Economic Development Administration assigns each IMCP community a federal liaison in one of the 13 partnering federal agencies. Department of Commerce, International Trade Administration, Industry and Analysis Program Funding: Not reported Program Information Reported in Survey Year program created: 2004 Program category: Trade, Export promotion Program description: The Manufacturing program ensures appropriate industry and other stakeholder input into trade and investment policy development, as well as trade negotiations and implementation. Among other efforts, it supports exports and foreign direct investment in the United States by leveraging industry expertise and an understanding of the dynamics of global competition to develop and implement policies that improve U.S. business competitiveness globally in high-growth export sectors and markets and expand opportunities for foreign direct investment. The program develops industry-specific negotiating priorities for the U.S. government and develops and recommends strategies that further open foreign markets. The Manufacturing program also works closely with the Office of the United States Trade Representative in negotiating trade agreements and policy outcomes affecting these industries, providing key technical support. In addition, the Manufacturing unit analyzes and reports on potential benefits to U.S. producers and consumers, devises programs to capitalize on opportunities, and supports compliance with trade agreement provisions. 
Program activities that support manufacturing: Export assistance (trade policy): The Manufacturing Unit works to develop trade policy positions that support the U.S. manufacturing sector and promote exports. Export assistance (trade promotion): The Manufacturing Unit works to develop trade promotion activities for the U.S. manufacturing sector to promote exports. Program Funding: Most of the institutes are sponsored by the Departments of Defense and Energy, and the obligations for them are in those agencies' budgets and not included here. Program Information Reported in Survey Year program created: 2014 Program category: Innovation, Applied research and development Program description: Manufacturing USA is a network of institutes where researchers, companies, and entrepreneurs can collaborate to develop new manufacturing technologies with broad applications. Each institute has a unique technology focus and helps support manufacturing activity in local areas. The Manufacturing Innovation Institutes allow an industry to minimize the cost and risk of developing new manufacturing processes and technologies that take the nation's basic research to implementation in manufacturing, according to agency officials. Program activities that support manufacturing: Agency coordination: The network coordinates the activities of the program with programs and activities of other federal agencies whose missions contribute to or are affected by advanced manufacturing. Network support: The network supports the institutes within the network with services to increase administrative efficiency and impact. Open topic institutes: The Department of Commerce holds competitions for institutes where the topic is identified by industry via their proposals. The department manages the institutes afterward. Department of Commerce, International Trade Administration, Industry and Analysis Program Funding: Not reported. 
Program Information Reported in Survey Year program created: 2013 Program category: Trade, Export promotion Program description: The Textiles, Consumer Goods, and Materials program includes the Office of Textiles and Apparel and the Office of Consumer Goods and Materials. The Office of Textiles and Apparel administers and enforces agreements and preference programs concerning the textile, apparel, footwear, and travel goods industries and works to ensure fair trade and a level playing field for these industries to enhance their competitiveness in international markets. The office's export promotion program assists small and medium-sized U.S. textile and apparel firms in developing and expanding their export markets, helping to retain and create jobs in this and related sectors. The Office of Consumer Goods and Materials provides industry expertise, trade policy guidance, and market access advocacy for a wide variety of consumer goods and materials industry sectors. Industry experts in the Office of Consumer Goods and Materials identify issues of strategic and commercial interest to those industry sectors and work with their stakeholders to enhance their international competitiveness. Program activities that support manufacturing: Policy advocacy and development: Program offices work with trade associations, companies, advisory committees, and individual companies to identify trade issues that need resolution so that U.S. industry is globally competitive. Activities include supporting the negotiation of trade agreements in concert with the Office of the U.S. Trade Representative, representing U.S. policy and interests in bilateral and plurilateral trade discussions, and contributing to the development of U.S. trade policy by objectively representing U.S. industry in internal U.S. government discussions. 
Policy implementation: Program offices work with trade associations, companies, advisory committees, individual companies, and federal agencies to implement trade agreements, contribute to the enforcement of trade agreements through work with industries, and provide technical expertise on industry-related issues that is essential to developing policy and strategy for consultations and dispute settlement. Trade promotion: Program offices work with trade associations, companies, and state and local governments to promote exports of U.S. textile, apparel, footwear, and consumer goods products, as well as materials (e.g., chemicals, building products, cosmetics, aluminum, and forest products). The program promotes products through trade missions, trade shows, International Buyer Programs, certified trade fairs, and Market Development Cooperator Program awards. Technical assistance: The program provides U.S. industries with data and market analysis so companies can make better strategic decisions about exports and trade in general and counsels companies on foreign market conditions and trends based on trade data and qualitative analysis. Administer cooperative agreements: Program offices currently administer seven Market Development Cooperator Program awards, most of which are aimed at entry into or expansion in key and growing markets (e.g., China). Program Information Reported in Survey Year program created: 1965 Program category: Trade, Financial support Program description: According to agency officials, the Trade Adjustment Assistance for Firms program is a trade remedy mechanism used as an alternative to tariffs, quotas, or duties. The assistance targets U.S. firms experiencing a decline in sales and employment resulting directly from increased imports of like or directly competitive articles. The program works in partnership with a national network of Trade Adjustment Assistance Centers and provides technical assistance to U.S. 
manufacturing, production, and service firms affected by import competition to develop and implement projects to regain global competitiveness, increase profitability, and create jobs. Program activities that support manufacturing: Petitioning for certification: The program assists firms with submitting a petition to be certified as a trade-impacted firm. Generally, certification specialists in the Trade Adjustment Assistance Centers work with the firm, at no cost to the firm, to complete and submit a petition. Program Evaluations That Assessed Any Impact on the U.S. Manufacturing Sector Trade Adjustment Assistance: Commerce Program Has Helped Manufacturing and Services Firms, but Measures, Data, and Funding Formula Could Improve. GAO-12-930. Washington, D.C.: September 13, 2012. Recovery planning: Program-certified firms work with Trade Adjustment Assistance Center staff to develop a customized business recovery plan for approval. Recovery plan implementation: The firm works with consultants to implement projects in an approved business recovery plan. Providing grants: The program provides grants to independent, non-profit or university-affiliated Trade Adjustment Assistance Centers that help U.S. manufacturing, production, and manufacturing service firms, in a public-private collaborative framework, apply for certification of eligibility for program assistance and prepare and implement strategies to guide firms' economic recovery. Technical assistance: The program provides direct technical assistance to import-impacted U.S. manufacturing, production, and service firms by providing matching funds to Trade Adjustment Assistance Centers. The centers use the funds to match the costs of third-party consultants who help firms expand markets, strengthen operations, and increase competitiveness. 
Program Information Reported in Survey Year program created: 2013 Program category: Trade, Enforce trade laws and agreements and support policy formulation Program description: The Policy and Negotiations program oversees a variety of activities and policies related to the negotiation of trade and investment disciplines in international agreements, the administration of U.S. antidumping and countervailing duty laws, the negotiation and administration of suspension agreements in U.S. antidumping and countervailing duty investigations, and the improvement of access to export markets for U.S. companies. Program activities that support manufacturing: Trade agreements negotiation and compliance: The program office negotiates international trade and investment agreements, conducts outreach and assistance to U.S. companies or industries confronting foreign government trade actions or barriers that block or impede U.S. exports or investment, and administers the Trade Agreements Compliance Program, which involves all business units in ITA and the Office of the General Counsel. Antidumping and Countervailing Duties Petition Counseling and Analysis Unit: This program office reaches out to and assists U.S. industries and workers (especially small and medium-sized enterprises) seeking to use U.S. antidumping and countervailing duty law to remedy injury from unfairly traded imports. Support antidumping and countervailing duty cases: A program office applies policies and procedures in antidumping/countervailing duty proceedings while ensuring that broader policy objectives and statutory and international obligations are respected. The office assists U.S. businesses by reviewing case determinations, developing new policies for major or emerging issues, and ensuring consistent application of the trade remedy laws. Trade remedy compliance: Staff monitor and conduct outreach and advocacy to address potentially unfair application of foreign trade remedies. 
Staff members provide a wide range of services and tools to assist U.S. companies that find themselves subject to trade remedy actions. Subsidies Enforcement Office: This office assists U.S. businesses by providing a range of services to confront foreign subsidies that impede U.S. companies’ and workers’ ability to compete and expand into domestic as well as overseas markets. Steel import monitoring and analysis: The office administers the Steel Import Licensing program and provides steel import statistics and analyses to the U.S. government and industry stakeholders. Interagency Trade Enforcement Center: The office provides expert support to trade enforcement undertakings by the U.S. government, including research on foreign laws and measures. Department of Defense, Office of the Secretary Program Funding: Not reported Program Information Reported in Survey Year program created: Not reported Program category: Innovation, Basic research and development Program description: The Basic, Applied, and Advanced Research in Science and Engineering program (1) supports basic, applied, or advanced research and technology development in mathematical, physical, engineering, environmental, and life sciences, in addition to other fields with good, long-term potential for contributing to technology for Department of Defense missions; (2) facilitates transition of research results to practical application for defense needs; (3) improves linkages between defense research and the civilian technology and industrial bases to promote commercial application of the results of defense research and commercial availability of technology for defense needs; (4) fosters education of future scientists and engineers in disciplines critical to defense; and (5) strengthens the infrastructure for research and related science and engineering education in those disciplines. 
Program activities that support manufacturing:
Technology maturity: The program invests in emerging manufacturing processes for enabling defense technologies required for national defense.
Industrial base: The program actively supports a connected U.S. defense industrial base.
Infrastructure: The program actively supports a healthy defense infrastructure.
Workforce: The program actively supports an educated workforce to support national defense.
Department of Defense, Office of the Assistant Secretary of Defense for Logistics Maintenance and Readiness
Program Funding: Not reported
Program Information Reported in Survey
Year program created: 1998
Program category: Innovation, Applied research and development
Program description: The Commercial Technologies for Maintenance Activities program is a joint Department of Defense (DOD)/National Centers for Manufacturing Science effort that promotes collaborative technology development, demonstration, and transition within DOD. Its objective is to ensure American troops and their equipment are ready to face any situation, with the most up-to-date and best-maintained platforms and tools available. The program is based on a collaborative model for manufacturers, academia, and DOD; it creates relationships and opportunities, drives cutting-edge research and development, and gathers industry intelligence from a unique perspective. Through partnerships, training, software, and business operations, the program can help achieve industry objectives while satisfying DOD needs through demonstration of new technologies prior to full deployment.
Program activities that support manufacturing:
Facilitates industry and DOD collaboration regarding maintenance technology: On occasion, the program advances maintenance capabilities that have directly benefited the manufacturing industrial base due, in part, to a large overlap between manufacturing technologies used by original equipment manufacturers and the tools and procedures employed by DOD maintenance depots. Examples of these capabilities include advanced machine controls, additive manufacturing and repair, complex electronics testing and troubleshooting, product lifecycle management applications, and advanced welding techniques, among other things.
Program Information Reported in Survey
Year program created: 1950
Program category: General financing
Program description: According to agency officials, when essential to the national defense, Title III authority enables the U.S. government to apply financial incentives to encourage private industry to create new domestic sources of supply for key advanced materials and technology items and to accelerate deployment of new product and manufacturing process technology. According to agency officials, Title III authorities may be employed when domestic industrial capabilities that impact essential government requirements do not exist, are at risk of being lost, or are insufficient to meet essential governmental needs. According to agency officials, Title III actions stimulate private investment in production resources by reducing the risks associated with the capitalization and investments required to establish the needed production capacity. Projects range from process improvements and emerging technologies to construction of complete industrial production facilities.
Program activities that support manufacturing:
Purchase and develop production capabilities: The program purchases for government use or resale to create, maintain, protect, expand, or restore domestic industrial base capabilities essential for the national defense.
Installation of equipment in industrial facilities: The program purchases, installs, and transfers title of production equipment.
Purchase commitments: The program guarantees a market to incentivize companies to establish production capability.
Loans/loan guarantees: According to agency officials, the President may authorize a guaranteeing agency to provide guarantees of loans by private institutions to finance any contractor, subcontractor, provider of critical infrastructure, or other person in support of production capabilities or supplies that are necessary to the national defense to reduce current or projected shortfalls of industrial resources, critical technology items, or essential materials needed for national defense purposes.
Development of substitutes: The program strengthens the production and technological capabilities of key industrial sectors and ensures affordable and assured access to critical materials and technologies.
Program Information Reported in Survey
Year program created: 2014
Program category: Innovation, Applied research and development
Program description: The Industrial Base Analysis and Sustainment program provides the Department of Defense with a comprehensive ability to monitor and assess the industrial base, to address critical issues relating to urgent operational needs and industrial base vulnerabilities, and to support industrial base expansion. This program maintains or improves the health of critical and fragile industry capabilities that are at risk of being lost but are needed to support the National Defense Strategy.
The goal of the program is to avoid the loss of critical capabilities and the resultant reconstitution costs wherever affordable, innovative mechanisms are available to the producers in the interim.
Program activities that support manufacturing:
Contracts: The program funds contracts with constituents of the National Technology and Industrial Base for specific at-risk goods and services to address critical issues in the industrial base, expand the industrial base, and address defense supply chain vulnerabilities.
Minimize risks to industrial base: The program supports the warfighter by minimizing risks from industrial base capability issues.
Program Information Reported in Survey
Year program created: Fiscal Year 2014
Program category: Innovation, Applied research and development
Program description: The Manufacturing Applied Research program supports innovation-based efforts that will provide technology options for future Navy and Marine Corps capabilities. Efforts focus on advanced Naval materials, biocentric technologies, environmental quality, human factors and organizational design, medical technologies, and Naval training technologies.
Program activities that support manufacturing:
Providing contracts: The program contracts for technical work in manufacturing applied research.
Program Information Reported in Survey
Year program created: 2014
Program category: Training, Enhancing job seekers' skills
Program description: The Manufacturing Experimentation and Outreach Two (MENTOR2) program seeks to enhance defense readiness by improving both the training and the tools available to those who will be called on to utilize, maintain, and adapt high-technology systems in low-technology environments. The program pursues this goal by developing and demonstrating new training tools, new materials, and new manufacturing technologies in the fields of electromechanical design and manufacturing.
It is envisioned that project-based curricula employing MENTOR2 design and prototyping tools will teach a deeper understanding of high-technology systems and better enable future competence in maintaining and adapting such systems through the manufacture of as-designed components or the design and manufacture of new components.
Program activities that support manufacturing:
Training: The program develops instructor-led and independent training (along with supporting materials/equipment) for Department of Defense personnel to support understanding and hands-on experience with prototyping equipment, computer-aided design, and simulation systems.
Program Information Reported in Survey
Year program created: more than 50 years ago
Program category: Innovation, Applied research and development
Program description: The Manufacturing Technology Program focuses on the needs of the warfighter and weapons systems by helping to implement affordable, low-risk manufacturing solutions. The program provides a crucial link between technology and industrial base applications; matures and validates emerging manufacturing technologies to support affordable, timely, and low-risk implementation in industry; and addresses production issues from system development through transition to production sustainment. In addition, this program funds the Department of Defense-led Manufacturing USA institutes.
Program activities that support manufacturing:
Technology maturity: The program invests in emerging manufacturing processes for enabling defense technologies required for national defense.
Industrial base: The program actively supports a connected U.S. defense industrial base.
Infrastructure: The program actively supports a healthy defense infrastructure.
Workforce: The program actively supports an educated workforce to support national defense.
Program Information Reported in Survey
Year program created: Fiscal Year 2009
Program category: Innovation, Basic research and development
Program description: The Navy Manufacturing Science program addresses basic research efforts, including scientific study and experimentation directed toward increasing knowledge and understanding in national security-related aspects of the physical, engineering, environmental, and life sciences.
Program activities that support manufacturing:
Providing grants: The program awards grants to research institutions to carry out research in novel manufacturing and process control technologies.
Department of Defense, Army, Research, Development, and Engineering Command
Program Funding: Not reported
Program Information Reported in Survey
Year program created: 2012
Program category: Innovation, Applied research and development
Program description: Prototype Integration Facilities are buildings where engineers develop and test various manufacturing products and generate related data to help meet warfighter needs. The facilities assist in the transition of technologies from the laboratory to the field. The specific core mission and related competencies of each prototype integration facility are unique, depending on the engineering support required by its specific customers.
Program activities that support manufacturing:
Execute Army Manufacturing Technology Program: The Army Manufacturing Technology program addresses manufacturing technology gaps. When developing Army weapon systems, the facilities identify manufacturing technology gaps and develop manufacturing processes to promote affordability. Prototype integration facilities execute this work using in-house facilities and in collaboration with industry. The facilities are able to transfer manufacturing technologies to both the commercial and organic industrial base using a variety of contracting instruments and agreements.
Training: The program provides training to organic industrial base personnel in newly developed manufacturing processes (e.g., welding of titanium) and provides specialized training in manufacturing technologies for Army personnel.
Manufacturing technology transfer: The program validates engineering data through manufacturing prototypes and captures manufacturing process data. The program also makes these data available to the organic and commercial industrial base.
Cooperative research and development agreements: The program works directly with industry to develop and transition new manufacturing technologies.
Program Information Reported in Survey
Year program created: 1984
Program category: Training, Enhancing job seekers' skills
Program description: The Career and Technical Education—Basic Grants to States program helps develop the academic, career, and technical skills of secondary and postsecondary students who elect to enroll in career and technical education programs.
Program activities that support manufacturing:
Providing grants: Grant recipients and subrecipients may use grant funds to improve programs that prepare individuals for careers in manufacturing. The decision to use grant funds for this purpose is made by the recipient and subrecipient. About 5 percent of students concentrated in manufacturing in program year 2010-2011, according to officials from the Department of Education.
Program Information Reported in Survey
Year program created: 1984
Program category: Training, Enhancing job seekers' skills
Program description: The Career and Technical Education—National Programs provide support directly—or through grants, contracts, or cooperative agreements—for research, development, demonstration, dissemination, evaluation, assessment, capacity-building, and technical assistance activities aimed at improving the quality and effectiveness of career and technical education programs.
Program activities that support manufacturing:
Technical assistance: In 2015, program funds were used to support a series of webinars ("Skills on Purpose") to provide technical assistance to those seeking to build the education and skills of the manufacturing workforce through partnerships between educational institutions and industry.
Program Information Reported in Survey
Year program created: 2011
Program category: Innovation, Applied research and development
Program description: The mission of the Advanced Manufacturing Office is to reduce the energy intensity and life-cycle energy consumption of manufactured products by researching, developing, and demonstrating energy-efficient manufacturing processes and materials and to promote continuous improvement in energy efficiency among existing facilities and manufacturers. Its goal is to reduce energy consumption of manufactured goods across targeted product life-cycles by 50 percent over 10 years.
Program activities that support manufacturing:
Research and development facilities: The program works with National Laboratories to competitively select research, development, and demonstration activity investments in foundational energy-related advanced manufacturing technologies through large-scale public-private consortia. As noted previously, the program also oversees and funds Department of Energy institutes under the Manufacturing USA program.
Research and development projects: These projects support innovative, clean-energy manufacturing projects cost-shared with companies and research organizations that focus on specific high-impact manufacturing technology materials and process challenges. These activities fund the development of next-generation manufacturing materials, information, and process technologies that facilitate the transition of emerging clean energy technologies to domestic production and improve energy efficiency in energy-intensive and energy-dependent manufacturing processes.
Technical assistance: The program provides critical technical assistance for the deployment of advanced energy efficiency technologies and practices. Technical assistance activities help individual manufacturers reduce their energy intensity by 25 percent over 10 years; demonstrate the viability of improved energy management approaches; and provide targeted energy efficiency, productivity, and waste/water use reduction technical assistance to small and medium-sized manufacturers.
Program Information Reported in Survey
Year program created: 2007
Program category: General financing
Program description: The Advanced Technology Vehicles Manufacturing loan program was established to support the production of fuel-efficient, advanced technology vehicles and qualifying components in the United States. The purpose is to originate, underwrite, and service loans to eligible automotive manufacturers and component manufacturers to finance the cost of (1) re-equipping, expanding, or establishing manufacturing facilities in the United States to produce advanced technology vehicles and qualifying components and (2) engineering integration performed in the United States of advanced technology vehicles and qualifying components.
Program activities that support manufacturing:
Provides direct loans to automotive or component manufacturers: The program provides direct loans to automotive manufacturers and component suppliers to support domestic manufacturing of fuel-efficient, advanced technology vehicles and qualifying components.
Program Information Reported in Survey
Year program created: 1992
Program category: Innovation, Applied research and development
Program description: The Department of Energy's Bioenergy Technologies Office forms cost-share partnerships with key stakeholders to develop, demonstrate, and deploy technologies for advanced biofuels production.
The program works with industrial, academic, national laboratory, agricultural, and nonprofit partners to develop and deploy commercially viable, high-performance, and sustainable biofuels, bioproducts, and biopower from renewable biomass resources in the United States to reduce dependence on imported oil, enhance energy security, create domestic jobs, improve ecosystem health, and reduce carbon emissions.
Program Evaluations That Assessed Any Impact on the U.S. Manufacturing Sector
U.S. Department of Energy, Energy Efficiency & Renewable Energy, Bioenergy Technologies Office, 2013 Review Panel Summary Report and Program Results, DOE/EE-1014 (2014).
U.S. Department of Energy, Energy Efficiency & Renewable Energy, Bioenergy Technologies Office, 2015 Review Panel Summary Report and Program Results, DOE/EE-1386 (2016).
Program activities that support manufacturing:
Competitive research and development awards: The program regularly issues funding opportunity announcements that target research and development needs identified through workshops with industry and academia, which identify key technical barriers to commercialization of biofuel and enabling technologies.
Competitive pilot and demonstration awards: The program regularly issues funding opportunity announcements designed to provide financial assistance to industry in construction of pilot and demonstration facilities.
National laboratory research and development: The program directly funds research and development on applied and enabling technology at several national laboratories with core and key capabilities to address cross-cutting technical barriers.
Enhancing sustainability of bio-based systems: The program's sustainability activities include analysis and research and development focused on understanding and promoting the positive environmental, economic, and social effects and reducing the potential negative impacts of bioenergy production activities.
Efforts include developing scientific methods and models for measuring bioenergy sustainability across the full supply chain, demonstrating improved environmental performance and social benefits relative to conventional or business-as-usual energy systems, and disseminating practical tools for analyses and technology development that enhance sustainable bioenergy outcomes.
Resource assessment: The program's resource assessment work uses a comprehensive, spatially explicit modeling framework to estimate county-level supply curves for all major traditional crop and biomass feedstock resources, including energy crops. In fiscal year 2015, the focus was on the analysis of the current and future economic availability of biomass feedstocks.
Workforce development: The program is developing and will implement an education and workforce development program to improve public accessibility to information on bioenergy production and the bioenergy industry, support formal and informal education, including STEM and vocational programs, in exploring issues relevant to sustainable production of biofuels and bioproducts, and develop and enhance pathways to bioenergy-related training and careers.
Department of Energy, Office of Energy Efficiency and Renewable Energy
Program Funding: This initiative is a crosscutting activity leveraging other programs' funded activities, according to Department of Energy (DOE) officials.
Program Information Reported in Survey
Year program created: 2013
Program category: Innovation, Applied research and development
Program description: The Clean Energy Manufacturing Initiative is an effort across DOE to strengthen U.S. clean energy manufacturing competitiveness. The objectives are to increase U.S. competitiveness in manufacturing clean energy technologies and increase U.S. manufacturing competitiveness across the board by boosting energy productivity and leveraging low-cost domestic energy resources and feedstocks.
Program activities that support manufacturing:
Analysis: The program provides objective analysis and up-to-date data on global clean energy manufacturing.
Public-private partnership pilots: As a part of its mission, the program builds partnerships to increase U.S. manufacturing competitiveness. DOE currently supports partnership efforts across the country through a range of pilots, initiatives, institutes, and facilities.
Engagement and communications: The program engages leaders from industry, universities, national laboratories, and the broader innovation and economic community to identify ways in which the public and private sectors can partner to enhance U.S. clean energy competitiveness. For advancing clean energy manufacturing, the program engages stakeholders through regional and national summits and through new partnerships. Further, at Clean Energy Manufacturing Initiative Days, leaders from the Department of Energy and the participating manufacturing companies discuss manufacturing technology research and development priorities and strategies for increasing U.S. manufacturing competitiveness.
Crosscutting coordination: The program coordinates the Clean Energy Manufacturing Tech Team. The team formulates and develops a strategy to leverage existing budget authorities to strengthen U.S. clean energy manufacturing competitiveness and advance progress toward the nation's energy goals.
Program Information Reported in Survey
Year program created: 1977
Program category: Innovation, Applied research and development
Program description: The Concentrating Solar Power program provides competitive awards to industry, national laboratories, and universities with the shared goal of making large-scale dispatchable solar energy systems cost competitive without subsidies by the end of the decade.
As part of this effort, the program supports research and development of concentrated solar power technologies to achieve the cost targets of the SunShot Initiative, which seeks to make solar energy more affordable by using systems that can supply solar power on demand through the use of thermal storage.
Program activities that support manufacturing:
Funding opportunity announcements: The program provides financial assistance for research, development, and demonstration to assist in getting technology to market.
Workshops/conferences: The program holds and participates in workshops and conferences to stimulate discussion of the market, trends, and priorities and to identify opportunities, among other things.
Technical assistance: The program provides technical assistance to awardees during the course of projects.
Interagency collaboration: The program collaborates with other agencies to perform studies of potential environmental impacts of the technologies.
Program Information Reported in Survey
Year program created: 2012
Program category: Innovation, Applied research and development
Program description: The Photovoltaics program specifically supports the research and development of photovoltaics technologies to improve efficiency and reliability and to lower manufacturing costs to make solar electricity cost-competitive with other sources of energy.
Program activities that support manufacturing:
Providing cooperative assistance: The program funds research and development activities that, if successful, are intended to transition to domestic manufacturing.
Program Information Reported in Survey
Year program created: 2005
Program category: Innovation, Applied research and development
Program description: The Solid State Lighting Program focuses on research and development breakthroughs in efficiency and performance of solid-state lighting technology, and it equips buyers to successfully apply solid-state lighting.
The program includes the following elements: (1) core technology research projects focused on applied research for technology development, with particular emphasis on meeting efficiency, performance, and cost targets; (2) product development projects that use the knowledge gained from basic or applied research to develop or improve commercially viable materials, devices, or systems; (3) manufacturing research and development projects to reduce costs and enhance quality in solid-state lighting products and to address the technical challenges that must be overcome to enable solid-state lighting to compete with existing lighting on a first-cost basis; and (4) technology application research and development projects to monitor solid-state lighting technology advances and provide field and laboratory evaluations of emerging products, particularly LED lighting systems that involve advanced controls. Technology application research and development projects address broad issues related to technology performance with a view that spans the entire industry.
Program activities that support manufacturing:
Financial assistance agreements for research and development: The program provides financial assistance for competitive research and development to maximize the energy efficiency of solid-state lighting products in the marketplace; remove market barriers by improving lifetime, color quality, and lighting system performance; reduce costs of solid-state lighting sources and luminaires; improve product consistency while maintaining high-quality products; and encourage the growth, leadership, and sustainability of domestic U.S. manufacturing within the solid-state lighting industry. Applicants seeking assistance must submit a manufacturing plan that includes substantial domestic manufacturing.
Email postings: The program disseminates information that focuses on solid-state lighting companies manufacturing in the United States, in a series called "SSL in America." This is not intended to endorse or promote any of the companies, but rather to describe advances in energy-efficient solid-state lighting.
Program Information Reported in Survey
Year program created: 2012
Program category: Innovation, Applied research and development
Program description: The Tech-to-Market program within the Solar Energy Technology Office aims to make solar energy more cost-competitive. The program helps move technologies to the market by targeting two known funding gaps: (1) those that occur at the prototype commercialization stage and (2) those at the commercial scale-up stage. The program funds recipients so that they are able to achieve technical milestones and commercialize the funded technology, while also helping them to find follow-on funding and form strategic partnerships.
Program activities that support manufacturing:
Provide funding opportunities: The program provides funding opportunities to selected and awarded applicants who are working on cutting-edge technology within the United States. The recipients must be working toward the SunShot goal and contributing to the latest and greatest technologies in the solar industry.
Program Information Reported in Survey
Year program created: 1974
Program category: Innovation, Applied research and development
Program description: The Windows and Building Envelope program's research and development efforts focus on ways to reduce energy consumption in buildings by supporting projects that develop energy-efficient windows and envelope products.
Program activities address technologies like highly insulating materials and systems; methodologies and analysis tools to measure and validate building envelope performance; and market-enabling efforts, such as creating an organization to rate, certify, and label related products to better inform consumers.
Program activities that support manufacturing:
Competitively awarded research and development projects: The program provides funds to support research and development projects that include advanced manufacturing processes for energy-efficient window and building envelope components. The sub-program also provides earlier-stage applied research and development funding for technologies that might indirectly impact the U.S. manufacturing sector, such as advanced window coatings or advanced insulation materials that could be adopted by U.S. manufacturers in the future.
Annual operating plan for Department of Energy national laboratories: The program provides direct funding to some national laboratories to support the development of physics-based software models of building envelope components, including windows, as well as facilities used by manufacturers to test the physical properties of building envelope components. The sub-program also supports research and development projects at the national laboratories that have an indirect impact on U.S. manufacturers, as they have the potential to be incorporated in future manufacturing processes.
Program Information Reported in Survey
Year program created: 2012
Program category: Manufacturing public health products
Program description: The Centers for Innovation in Advanced Development in Manufacturing provide a core service: advanced development and manufacturing capabilities for developing medical countermeasures for emerging infectious diseases and chemical, biological, radiological, and nuclear threats, as well as the manufacturing of pandemic influenza vaccine doses, augmenting the current national capacity.
The Centers will increase the nation's preparedness for bioterrorism and influenza pandemic by using modern technologies for accelerating production, improving quality, and expanding vaccine manufacturing capacity. The Centers comprise three companies that provide advanced development and manufacturing capabilities.
Program activities that support manufacturing:
Funding for manufacturing sites: The program is currently funding the establishment and operation of domestic manufacturing sites located in Baltimore, MD; Holly Springs, NC; and College Station, TX.
Workforce training and development: Aside from providing core services for the advanced development and manufacture of biological medical countermeasures, this program supports the creation or enhancement of specialized workforce training and development approaches to reestablish the U.S.-based expertise necessary for developing and producing chemical, biological, radiological, and nuclear medical countermeasures. These approaches are intended to develop a highly skilled biotechnology and pharmaceutical workforce proficient in bioprocess engineering, production, quality systems, and regulatory affairs.
in the future to non-US manufacturing contractors, dependent on task orders awarded in the next 5 years, though all 100 percent could support U.S. manufacturing.
Program Information Reported in Survey
Year program created: 2013
Program category: Manufacturing public health products
Program description: The Fill Finish Manufacturing Network provides packaging support for medical countermeasure distribution. The program comprises four companies that are industry experts in the area of filling and finishing bulk products into sterile vials, syringes, and cartridges. They also complete the kitting, labeling, and packaging services as needed. These four companies maintain commercial clients and perform these services on a routine basis. As members of the network, they can respond to U.S. government-funded project needs where a product developer does not have these capabilities in-house.
Program activities that support manufacturing:
Domestic manufacturing capacity: The four providers of the biological fill and finish manufacturing services maintain a significant domestic capacity. Facilities exist in Alachua, FL; Bloomington, IN; Rochester, MI; and Greenville, NC.
Addressing critical sterile drug shortage concerns: In collaboration with the Food and Drug Administration, the Department of Health and Human Services has engaged contractors in the Fill Finish Manufacturing Network to manufacture and transfer specific sterile drug products found on the drug shortage list. The intent is to train the network to perform an effective and efficient manufacturing technical transfer for any sterile drug product, along with all the quality and regulatory administration tasks that would be required in a public health emergency. This pilot program has the added benefit of potentially alleviating sterile drug shortage concerns by increasing domestic capacity.
Program Information Reported in Survey
Year program created: 2006
Program category: Innovation, Applied research and development
Program description: The mission of the program is to eliminate occupational injuries, illnesses, hazardous exposures, and fatalities among individuals working in manufacturing through a focused program of research, intervention, and prevention. Program officials also co-chair the Manufacturing Sector Council, which has representatives from academia, trade/professional organizations, industry, insurers, unions, and government. This Council is charged with maximizing the impact of occupational safety and health research through partnerships and with promoting widespread adoption of improved workplace safety and health practices based on research findings.
Program activities that support manufacturing: Research: The program helps generate new knowledge on occupational safety and health through an intramural research program; develop innovative solutions for difficult-to-solve problems in high-risk industrial sectors; and track work-related hazards, exposures, illnesses, and injuries for prevention. Providing grants: The program provides grants to extramural investigators to conduct research on occupational safety and health. These investigators generate new knowledge and test the efficacy of innovative solutions. Training: The program builds capacity to address traditional and emerging hazards through training. Information dissemination: The program delivers occupational safety and health communication to inform decisions about safe work practices and to improve workplace safety and health. Program Information Reported in Survey Year program created: 1998 Program category: Training, Enhancing job seekers’ skills Program description: The H-1B Job Training Grant Program funds projects that provide training and related activities to assist workers in gaining the skills and competencies needed to obtain or upgrade employment in high-growth industries or economic sectors. Over time, these education and training programs will help businesses reduce their use of skilled foreign professionals permitted to work in the U.S. on a temporary basis under the H-1B visa program. Program activities that support manufacturing: Funding training through grants: The program competitively awards grants to public and private partnerships to provide training and related services that support employment in high-growth industries and economic sectors that currently use H-1B visas to employ foreign workers, including many manufacturing occupations. 
Program Information Reported in Survey Year program created: 1937 Program category: Training, Enhancing job seekers’ skills Program description: The Registered Apprenticeship program prepares American workers to compete in a global 21st-century economy. Registered Apprenticeship has already trained millions of U.S. workers through a network of 21,000 Registered Apprenticeship programs across the nation, consisting of over 150,000 employers. Program activities that support manufacturing: Training/technical assistance: The Department of Labor’s (DOL) Office of Apprenticeship works in conjunction with independent State Apprenticeship Agencies to administer the program nationally. These state agencies are responsible for registering apprenticeship programs that meet federal and state standards, protecting the safety and welfare of apprentices, issuing nationally recognized and portable Certificates of Completion to apprentices, promoting the development of new programs through marketing and technical assistance, and assuring that all programs provide high-quality training and produce skilled and competent workers. Industry grants: The program awards grants as part of a broader commitment to create more opportunities by advancing job-driven training initiatives that help workers acquire the skills to succeed in currently available jobs. Under the American Apprenticeship Initiative Grant, 46 grantees have committed to expanding apprenticeship programs in new and growing industries, aligning apprenticeships with further education and career advancement, and expanding the use of proven apprenticeship models. 
Program Information Reported in Survey Year program created: 1974 Program category: Training, Supporting workers who have been laid off from their job in manufacturing Program description: DOL’s Trade Adjustment Assistance program funds employment and training services to manufacturing and other eligible workers who lose their jobs as a result of the impact of global trade. Program activities that support manufacturing: Employment and case management services: Program participants receive employment and case management services, which include: (1) comprehensive assessments of skill levels and service needs; (2) development of an individual employment plan to identify employment goals and objectives; (3) information on available training and counseling, and how to apply for financial aid; (4) short-term prevocational services, such as development of learning skills, communication skills, and interviewing skills; (5) individual career counseling; (6) provision of labor market information; (7) job referral and placement; and (8) information relating to the availability of supportive services. Training that includes tuition-based courses or work-based learning: The program provides training, such as classroom training; on-the-job training; customized training designed to meet the needs of a specific employer or group of employers; apprenticeship programs; postsecondary education; prerequisite education or coursework and remedial education, which may include General Equivalency Diploma preparation; literacy training; basic math; or English as a Second Language. Relocation allowances and job search allowances: The program provides participants with job search and relocation allowance reimbursements when they seek a job outside their commuting area or move for a job that earns family-sustaining wages. 
Trade readjustment allowances: Upon exhaustion of Unemployment Insurance benefits, program participants may be eligible to receive trade readjustment allowances, which provide income support while they participate in full-time training. Reemployment trade adjustment assistance: Reemployment trade adjustment assistance provides wage supplements to reemployed program participants age 50 or older who do not earn more than $50,000 annually in their new employment. A qualified participant receives a wage supplement consisting of a portion of the difference between the worker’s new wage and their old wage when they accept new employment at a lower wage than their previous employment. Trade adjustment assistance program-related state administration funds: The program provides funds to cover related administration costs the state would incur in the provision of the program’s benefits and services to trade-affected workers. Program Information Reported in Survey Year program created: 2009 Program category: Training, Enhancing job seekers’ skills Program description: The Trade Adjustment Assistance Community College and Career Training Grant program provides community colleges and other eligible institutions of higher education with funds to expand and improve their ability to deliver education and career training programs that can be completed in 2 years or less; are suited for workers who are eligible for training under the Trade Adjustment Assistance for Workers program; and prepare program participants for employment in high-wage, high-skill occupations. These multi-year grants help ensure that institutions of higher education are helping adults succeed in acquiring the skills, degrees, and credentials needed for high-wage, high-skill employment while also meeting the needs of employers for skilled workers. DOL implements the program in partnership with the Department of Education. 
Program activities that support manufacturing: Workforce training related to manufacturing jobs: Grantees’ capacity-building activities may include developing or enhancing programs of study that lead to industry-recognized credentials, purchasing approved training equipment, or renovating classroom or lab space to support training programs. Information-sharing: The program makes all curriculum developed with program grant funds available as open educational resources under a Creative Commons license, and the curriculum is uploaded to the program’s repository. This allows nonfunded training providers to further adapt and reuse grant-funded curriculum. Providing training to trade-eligible workers and other unemployed or under-employed adults: According to program officials, program grantees are required to provide training to participants during their grant period of performance and to track certain performance metrics for those participants, and the training must lead to stackable credentials that are also industry-recognized, such as a certificate or associate’s degree. Strengthening relationships between manufacturing employers and community colleges: Grants support strengthening relationships between community colleges and manufacturing employers. Grantees engage employers in the manufacturing sector to create or strengthen programs of study through the design of curriculum and credentials; delivery of training; provision of internships and other work-based learning opportunities; contributions of equipment, facilities, faculty, and mentors; and hiring of graduates of the training programs. 
Program Information Reported in Survey Year program created: 2009 Program category: Innovation, Applied research and development Program description: The E3 - Economy, Energy and Environment program is a federal technical assistance framework comprising six federal agencies, including the Environmental Protection Agency (EPA), to provide support to small and medium-sized manufacturers. The program’s mission is to help communities, manufacturers, and manufacturing supply chains adapt and thrive in today’s green economy. In providing technical assistance, the program connects agencies and organizations in local communities and small and medium-sized manufacturers with experts from federal agencies, states, and regions. Program activities that support manufacturing: Technical assistance: EPA, in concert with five other federal agencies, provides technical reviews of manufacturing processes at small and medium-sized manufacturers and provides customized assessments detailing how participating manufacturers can incorporate practical sustainability approaches. The program’s assessments aim to reduce energy consumption, minimize carbon footprints, prevent pollution, increase productivity, and drive innovation. Manufacturing Trends Addressed by the Program E3 has not directly engaged with advanced manufacturing work because the program is primarily targeted toward smaller manufacturers for whom a lack of environmental and lean manufacturing knowledge is an impediment to improving their operations. Some of these companies may supply other companies in the advanced manufacturing sphere. Agency officials at DOL, who also support E3, may be engaged in enhancing workforce skills, but EPA is primarily concerned with improvements to manufacturing processes themselves. 
Program Information Reported in Survey Year program created: 1945 Program category: Trade, Financial support Program description: The Export-Import Bank of the United States is the official export credit agency of the United States. The Bank is an independent, self-sustaining (for budgetary purposes) federal agency that exists to support the export of U.S. goods and services, and thereby American jobs. The Bank’s charter states that it should not compete with the private sector. Rather, the Export-Import Bank’s role is to assume the credit and country risks that the private sector is unable or unwilling to accept, while still maintaining a reasonable assurance of repayment. In fiscal year 2015, the Export-Import Bank authorized 2,630 transactions supporting an estimated $17 billion in U.S. exports. Program activities that support manufacturing: Loans: Under this program, the Export-Import Bank provides fixed-rate loans directly to foreign buyers of goods and services. Loan guarantees (including Working Capital Guarantees): These programs provide guarantees to commercial lenders to cover repayment risks on foreign buyers’ debt obligations incurred to purchase U.S. exports. As described by program officials, under Working Capital Guarantees, the Export-Import Bank provides repayment guarantees to lenders on secured, short-term working capital loans made to qualified exporters. Export credit insurance: The Bank explained that export credit insurance supports U.S. exporters selling goods overseas by protecting the businesses against the risk of foreign buyer or other foreign debtor default for political or commercial reasons. This risk protection permits exporters to extend credit to their international customers where otherwise not possible, according to program officials. Manufacturing Trends Addressed by the Program The Export-Import Bank does not directly address manufacturing trends. 
Program Information Reported in Survey Year program created: 1993 Program category: Training, Enhancing job seekers’ skills Program description: With an emphasis on 2-year colleges, the Advanced Technological Education program focuses on the education of technicians for the high-technology fields that drive our nation’s economy. The program involves partnerships between academic institutions and industry to promote improvement in the education of science and engineering technicians at the undergraduate and secondary school levels. The program supports curriculum development, professional development of college faculty and secondary school teachers, career pathways to 2-year colleges from secondary schools and from 2-year colleges to 4-year institutions, and other activities. Another goal is articulation between 2-year and 4-year programs for prospective K-12 science, technology, engineering, and mathematics teachers that focus on technological education. The program invites research proposals that advance the knowledge base related to technician education. Program Information Reported in Survey Year program created: mid-1980s Program category: Innovation, Basic research and development Program description: The Biotechnology and Biochemical Engineering program supports fundamental engineering research that advances the understanding of cellular and biomolecular processes. This research eventually leads to the development of enabling technology for advanced manufacturing and/or applications in support of the biopharmaceutical, biotechnology, and bioenergy industries, or with applications in health or for the environment. A quantitative treatment of biological and engineering problems of biological processes is considered vital to successful research projects in the program. 
The program encourages highly innovative and potentially transformative engineering research that may lead to novel bioprocessing and manufacturing approaches, as well as proposals that address emerging research areas and technologies, effectively integrate knowledge and practices from different disciplines, and incorporate ongoing research into educational activities. Program activities that support manufacturing: Providing grants: The grants enable fundamental research toward developing new manufacturing technologies based on engineering biology. Program Information Reported in Survey Year program created: 2012 Program category: Innovation, Basic research and development Program description: The Design of Engineering Material Systems program supports fundamental research intended to lead to new paradigms of design, development, and insertion of advanced engineering material systems. For the purposes of this program, fundamental research includes research that develops and creatively integrates theory, processing/manufacturing, data/informatics, experimental, and/or computational approaches with rigorous engineering design principles, approaches, and tools to inform the accelerated design and development of materials. The program seeks research proposals that strive to develop systematic scientific methodologies to tailor the behavior of material systems in ways that are driven by performance metrics and incorporate processing/manufacturing. Ultimately, it is expected that research outcomes will be methodologies that enable the discovery of materials systems with new properties and behavior and their rapid insertion into engineering systems. Program activities that support manufacturing: Providing grants: The grants support fundamental research in engineering and science. Awards may include research that leads to advanced engineering materials systems for manufacturing. 
Program Information Reported in Survey Year program created: 1982 Program category: Innovation, Basic research and development Program description: The Engineering and Systems Design program supports fundamental research ultimately leading to new engineering and systems design methods and practices for specific global contexts. In particular, the program seeks intellectual advances in which the theoretical foundations underlying design and systems engineering are operationalized into rigorous and pragmatic methods for a specific context. In addition, the program funds the rigorous theoretical and empirical characterization of new or existing methods for design and systems engineering, identifying the global contexts in which, and the assumptions under which, these methods are effective and efficient. Research in engineering and systems design should advance the state of knowledge of design methodology by adapting existing methods to a new context or by carefully characterizing existing or new design methods in a new context. Program activities that support manufacturing: Providing grants: The grants support fundamental research in engineering and science. Awards may include research that ultimately leads to new engineering and systems design methods and practices for manufacturing. Program Information Reported in Survey Year program created: 1973 Program category: Innovation, Applied research and development Program description: The Industry/University Cooperative Research Centers program develops long-term partnerships among industry, academe, and government. The centers are catalyzed by a small investment from the National Science Foundation (NSF) and are primarily supported by industry center members, with NSF taking a supporting role in the development and evolution of the center. Each center is established to conduct research that is of interest to both the industry members and the center faculty. 
The program contributes to the nation’s research infrastructure base and enhances the intellectual capacity of the engineering and science workforce through the integration of research and education. Program activities that support manufacturing: Providing grants: The program enables partnerships between academia and industries to carry out pre-competitive research benefiting multiple industrial sectors, including manufacturing. Training: The program helps train students and other researchers in industrially relevant research and prepares them as the workforce for U.S. industries, including manufacturing. Program description: The Manufacturing Machines and Equipment program supports fundamental research that informs the development of new and/or improved manufacturing machines and equipment—and optimization of their use—with a particular focus on equipment appropriate for the manufacture of mechanical and electromechanical devices, products, and systems featuring scales from microns to meters. The program promotes proposals that relate to the manufacturing of equipment and facilities that enable the production of energy products. Other areas of research interest include a wide range of manufacturing operations, including both subtractive and additive processes, forming, bonding/joining, and laser processing. Program activities that support manufacturing: Providing grants: The grants support fundamental research in engineering and science. Awards may include research that leads to new manufacturing machines and equipment. Program Information Reported in Survey Year program created: 2013 Program category: Innovation, Basic research and development Program description: The Materials Engineering and Processing program supports fundamental research addressing the processing and mechanical performance of engineering materials by investigating the interrelationship of materials processing, structure, properties, and/or life-cycle performance for targeted applications. 
As part of its mission, the program focuses on manufacturing processes that convert material into useful forms as either intermediate or final compositions. These include processes such as extrusion, molding, casting, deposition, sintering, and printing. Program activities that support manufacturing: Providing grants: The grants support fundamental research in engineering and science. Awards may include research that ultimately leads to advanced engineering materials processing and performance for manufacturing. Program Information Reported in Survey Year program created: 2001 Program category: Innovation, Basic research and development Program description: The Nanomanufacturing program seeks to explore transformative approaches to nanomanufacturing. Nanomanufacturing is the production of useful nano-scale materials, structures, devices, and systems in an economically viable manner. The approaches supported by this program include, but are not limited to: micro-reactor and micro-fluidics enabled nanosynthesis, bio-inspired nanomanufacturing, manufacturing by nanomachines, additive nanomanufacturing, hierarchical nanostructure assembly, continuous high-rate nanofabrication, and modular manufacturing platforms for nanosystems. The program encourages the fabrication of nanomaterials by design, three-dimensional nanostructures, multi-layer nanodevices, and multi-material and multi-functional nanosystems. Also of interest is the manufacture of dynamic nanosystems, such as nanomotors, nanorobots, and nanomachines, and enabling advances in transport and diffusion mechanisms at the nano-scale. Program activities that support manufacturing: Providing grants: The grants support fundamental research in engineering and science. Awards may include research that leads to the manufacture of useful nano-scale materials, structures, devices, and systems. 
Program Information Reported in Survey Year program created: 2011 Program category: Innovation, Basic research and development Program description: The goal of the National Robotics Initiative is to accelerate the development and use of robots in the United States that work beside or cooperatively with people. Innovative robotics research and applications emphasizing the realization of such co-robots working in symbiotic relationships with human partners are supported by multiple agencies of the federal government, including NSF. The purpose of this program is to develop this next generation of robotics, to advance the capability and usability of such systems and artifacts, and to encourage existing and new communities to focus on innovative application areas. It will address the entire life cycle from fundamental research and development to manufacturing and deployment. Program activities that support manufacturing: Providing grants: The program supports basic research in co-robots (robots that work with, or help, people). Much of the research is applicable to manufacturing; some of it is specifically aimed at improving the ability of robots to aid in manufacturing processes. Program Information Reported in Survey Year program created: 2014 Program category: Innovation, Basic research and development Program description: The Service, Manufacturing, and Operations Research program supports fundamental research leading to the creation of innovative mathematical models, analysis, and algorithms for decision-making related to design, planning, and operation of service, manufacturing, and other complex systems. Specifically, the program supports two main types of research: (1) innovations in general-purpose methodology related to optimization, stochastic modeling, and decision and game theory; and (2) research grounded in relevant applications that require the development of novel and customized analytical and computational methodologies. 
Application areas of interest include supply chains and logistics; risk management; healthcare; environment; energy production and distribution; mechanism design and incentives; production planning, maintenance, process monitoring, and quality control; and national security. Of particular interest are methods that incorporate increasingly rich and diverse sources of data to support decision-making. Program activities that support manufacturing: Providing grants: The grants support fundamental research in engineering and science. Awards may include research that leads to advances in modeling and optimization for manufacturing. Small Business Administration, Office of Capital Access Program Funding: Not reported. Program Information Reported in Survey Year program created: 1953 Program category: General financing Program description: The 7(a) loan program is the largest of the Small Business Administration’s (SBA) business loan programs. Its mission is to assist small businesses that do not qualify for conventional credit in obtaining financing by providing the credit enhancement of a federal guaranty. Loan guarantees can help underserved businesses that traditionally have trouble accessing capital through conventional credit markets. SBA loan guarantees are flexible, enabling small businesses to obtain financing of up to $5 million for various business uses, with loan maturities up to 25 years depending on the type of assets being financed. SBA guarantees a portion of 7(a) loans made and administered by private sector commercial lending institutions. Loans can be guaranteed for most legitimate general business purposes to businesses classified as small. Program activities that support manufacturing: Loans: The 7(a) program provides loan guarantees to small manufacturers. Loans can be made to start-up or existing manufacturers for many legitimate costs related to the opening, operations, and expansion of independent small manufacturing companies. 
Small Business Administration, Office of Capital Access Program Funding: Not reported. Program Information Reported in Survey Year program created: 1958 Program category: General financing Program description: The 504 loan program is SBA’s premier economic development program, providing “brick and mortar” and/or major equipment financing. The program has particular features, such as a statutorily mandated job creation component, a community development goal, or a public policy goal achievement component, that help the agency facilitate job creation and enable the establishment and viability of small businesses. Program activities that support manufacturing: Loans: Fixed-rate, long-term financing for land, buildings, and equipment. Manufacturing Trends Addressed by the Program The Certified Development Company (CDC)/504 Loan Program provides financing for major fixed assets, such as equipment or real estate, for the purpose of job creation/retention without regard to manufacturing trends in any one industry group. Loan proceeds used for the acquisition of equipment could result in addressing manufacturing trends. However, SBA does not track the types of equipment purchased. Program Information Reported in Survey Year program created: 1982 Program category: Innovation, Applied research and development Program description: SBA establishes the policy guidance for the Small Business Innovation Research (SBIR) program. The federal agencies that participate in the program must obligate a minimum percentage of extramural research and development funds for awards to small businesses. Funding from the participating agencies helps drive small research and development companies to innovate, strengthen U.S. competitiveness, and create jobs. The program helps small businesses develop innovations to meet the research and development needs of the federal government and then commercialize those innovations in the marketplace. 
Program activities that support manufacturing: Providing grants and contracts: Eleven federal agencies participate in the program by funding research and development in the manufacturing area. Implementing Executive Order 13329: According to agency officials, Executive Order 13329 requires SBIR/Small Business Technology Transfer (STTR) agencies to give high priority to manufacturing-related research and development, and it further states that the federal government has an important role in advancing innovation, including innovation in the manufacturing sector, through small businesses. According to agency officials, the program Policy Directive states that participating agencies must, to the extent permitted by law, and in a manner consistent with the mission of that agency and the purpose of the SBIR program, give priority in the SBIR program to manufacturing-related research and development in accordance with Executive Order 13329. SBA collects information, as part of the annual reports submitted by participating agencies, regarding agency efforts to advance manufacturing through the programs. Program Information Reported in Survey Year program created: 1992 Program category: Innovation, Applied research and development Program description: SBA establishes the policy guidance for the STTR program. The federal agencies that participate in the program must obligate a minimum percentage of extramural research and development funds for awards to small businesses. The purpose of the program is to stimulate a partnership of ideas and technologies between innovative small business concerns and research institutions through federally funded research and development. By providing awards to small business concerns for cooperative research and development efforts with research institutions, the program assists the small business and research communities by commercializing innovative technology. 
Central to the program is expansion of the public/private sector partnership to include joint venture opportunities for small businesses and nonprofit research institutions. The unique feature of the program, according to agency officials, is the requirement for the small business to formally collaborate with a research institution in early phases of the research and development cycle. Program activities that support manufacturing: Providing grants and contracts: Five federal agencies participate in the program by funding research and development in the manufacturing area, as well as other areas. Implementing Executive Order 13329: According to agency officials, Executive Order 13329 requires SBIR/STTR agencies to give high priority to manufacturing-related research and development, and it further states that the federal government has an important role in advancing innovation, including innovation in the manufacturing sector, through small businesses. The program gives priority in the STTR program to manufacturing-related research and development. SBA collects information, as part of the annual reports submitted by participating agencies, regarding agency efforts to advance manufacturing through the programs. In addition to the contacts named above, Kimberly Gianopoulos (Director), Blake Ainsworth (Assistant Director), Kim Frankena (Assistant Director), Laura Heald (Assistant Director), Christopher Murray (Assistant Director), Pierre Toureille (Assistant Director), Paul Schearf (Analyst-in-Charge), Jeffrey Arkin, Jeffrey Barron, Anthony Costulas, David Dornisch, Holly Dye, Alexander Galuten, Tobias Gillett, Rich Hung, Stephen Komadina, Zina Merritt, Mimi Nguyen, Nhi Nguyen, Oliver Richard, Timothy Persons, Rachel Pittenger, William Shear, Almeta Spencer, Amy Suntoke, Daren Sweeney, Brian Tremblay, and Marilyn Wasleski made key contributions to this report.
The U.S. manufacturing sector—representing about 12 percent of the economy and employing 12 million workers in 2015—has undergone changes over the last several decades. Even as productivity and technological innovation have increased, the sector has experienced declines in both its number of jobs and its share of the economy. GAO was asked to examine how the federal government supports manufacturing. This report examines (1) how selected federal programs and tax expenditures provide support to U.S. manufacturing; (2) how programs are addressing manufacturing trends; and (3) the extent to which agencies measure performance and assess effectiveness in support of manufacturing generally, and advanced manufacturing specifically. GAO reviewed selected programs based on their focus on manufacturing, among other criteria, and conducted a survey of these selected programs to collect data on their budget, activities, and effects. GAO also reviewed reports and interviewed agency officials and experts. GAO identified 58 programs in 11 federal agencies that reported providing support to U.S. manufacturing by fostering innovation through research and development, assisting with trade in the global marketplace, helping job seekers enhance skills and obtain employment, and providing general financing or business assistance. Twenty-one of these programs reported using all of their obligations in fiscal year 2015 to support U.S. manufacturing. For these 21 programs, obligations ranged from $750,000 to $204 million per program in fiscal year 2015, the most recent full year of data. Twenty-six other programs reported using funding to support manufacturing—in addition to other sectors—and provided ranges of estimates for the obligations directly supporting manufacturing. The remaining 11 programs either did not provide an estimate of their support to manufacturing or reported no program obligations in fiscal year 2015. 
GAO also identified nine tax expenditures that can provide benefits to manufacturers, amounting to billions of dollars in incentives for both the manufacturing sector and other sectors of the economy. Most (51) of the 58 programs reported addressing trends toward an increase in advanced manufacturing (e.g., activities using automation, software, or cutting-edge materials), the need for a higher-skilled workforce, and more global trade competition for U.S. manufacturers by providing funds and resources, sharing information, and promoting coordination. Survey responses from the 58 programs indicated that more than two-thirds of them are addressing the shift toward advanced manufacturing, approximately half are taking steps to address increased globalization and competition, and fewer than half are addressing the need for a higher-skilled workforce. Forty-four of the 58 programs reported having performance goals or measures related to the support of manufacturing, but agencies that comprise an interagency group have not identified the information they will collect from agencies and use to report progress in supporting advanced manufacturing. Ten of the 11 agencies that administer programs GAO reviewed participate in a federal interagency initiative to coordinate activities and report on progress in the area of advanced manufacturing. The Subcommittee on Advanced Manufacturing—which is co-chaired by the Office of Science and Technology Policy (OSTP) and coordinates advanced manufacturing efforts—supports updating and reporting on a National Strategic Plan for Advanced Manufacturing. The plan, which was published in 2012, identifies objectives and potential measures that could be used to assess progress. The subcommittee plans to report in 2018 on progress in achieving the strategic plan's objectives, as required by the Revitalize American Manufacturing and Innovation Act of 2014. 
However, OSTP has not worked with the subcommittee member agencies to identify the information needed to report progress in achieving the strategic objectives, such as what measures will be used. While subcommittee officials said the subcommittee does not provide top-down direction to federal agencies on how to measure effectiveness, specifying the information it will collect from federal agencies would better position it to report consistent and comprehensive information on the progress in achieving the plan's objectives. OSTP should identify the information it will collect from agencies to determine their progress in achieving the objectives of the National Strategic Plan for Advanced Manufacturing. In commenting on a draft of this report, OSTP neither agreed nor disagreed with the recommendation and suggested alternative language. In response, GAO revised the recommendation to focus on the identification of information, as discussed in the report.
Cutting off terrorists’ funding is an important means of disrupting their operations. As initial U.S. and foreign government deterrence efforts focused on terrorists’ use of the formal banking or mainstream financial systems, terrorists may have been forced to increase their use of various alternative financing mechanisms. Alternative financing mechanisms enable terrorists to earn, move, and store their assets and may include the use of commodities, bulk cash, charities, and informal banking systems, sometimes referred to as hawala. In its fight against terrorism, the United States has focused on individuals and entities supporting or belonging to terrorist organizations including al Qaeda, Hizballah, HAMAS (Harakat al-Muqawama al-Islamiya—Islamic Resistance Movement), and others. These terrorist organizations are known to have used alternative financing mechanisms to further their terrorist activities. Government officials and researchers believe that terrorists do not always need large amounts of assets to support an operation, pointing out that the estimated cost of the September 11 attack was between $300,000 and $500,000. However, government officials also caution that funding for such an operation uses a small portion of the assets that terrorist organizations require for their support infrastructure such as indoctrination, recruitment, training, logistical support, the dissemination of propaganda, and other material support. In response to the terrorist attacks of September 11, the Departments of the Treasury and Justice both established multiagency task forces dedicated to combating terrorist financing. Treasury established Operation Green Quest, led by the Customs Service—now ICE in the Department of Homeland Security—to augment existing counterterrorist efforts by targeting current terrorist funding sources and identifying possible future sources. 
On September 13, 2001, the FBI formed a multiagency task force—which is now known as the Terrorist Financing Operations Section (TFOS)—to combat terrorist financing. The mission of TFOS has evolved into a broad role to identify, investigate, prosecute, disrupt, and dismantle all terrorist-related financial and fundraising activities. The FBI also took action to expand the antiterrorist financing focus of its Joint Terrorism Task Forces (JTTFs)—teams of local and state law enforcement officials, FBI agents, and other federal agents and personnel whose mission is to investigate and prevent acts of terrorism. In 2002, the FBI created a national JTTF in Washington, D.C., to collect terrorism information and intelligence and funnel it to the field JTTFs, various terrorism units within the FBI, and partner agencies. Following September 11, representatives of the FBI and Operation Green Quest met on several occasions to attempt to delineate antiterrorist financing roles and responsibilities. However, such efforts were largely unsuccessful. The resulting lack of clearly defined roles and coordination procedures contributed to duplication of efforts and disagreements over which agency should lead investigations. In May 2003, to resolve jurisdictional issues and enhance interagency coordination, the Attorney General and the Secretary of Homeland Security signed a Memorandum of Agreement concerning terrorist financing investigations. The Agreement and its related procedures specified that the FBI was to have the lead role in investigating terrorist financing and that ICE was to pursue terrorist financing solely through participation in FBI-led task forces, except as expressly approved by the FBI. 
Regarding strategic efforts, the Money Laundering and Financial Crimes Strategy Act of 1998 (Strategy Act) required the President—acting through the Secretary of the Treasury and in consultation with the Attorney General and other relevant federal, state, and local law enforcement and regulatory officials—to develop and submit an annual NMLS to the Congress by February 1 of each year from 1999 through 2003. Unless reauthorized by the Congress, this requirement ended with the 2003 strategy, which was issued on November 18, 2003. The goal of the Strategy Act was to increase coordination and cooperation among the various regulatory and enforcement agencies and to effectively distribute resources to combat money laundering and related financial crimes. The Strategy Act required the NMLS to define comprehensive, research-based goals, objectives, and priorities for reducing these crimes in the United States. The NMLS has generally included multiple priorities to guide federal agencies’ activities in combating money laundering and related financial crimes. In 2002, the NMLS was adjusted to reflect new federal priorities in the aftermath of September 11 including a goal to combat terrorist financing. The U.S. government faces myriad challenges in determining and monitoring the nature and extent of terrorists’ use of alternative financing mechanisms. Terrorists use a variety of alternative financing mechanisms to earn, move, and store their assets based on common factors that make these mechanisms attractive to terrorist and criminal groups alike. For all three purposes—earning, moving, and storing—terrorists aim to operate in relative obscurity, using mechanisms involving close-knit networks and industries lacking transparency. More specifically, first, terrorists earn funds through highly profitable crimes involving commodities such as contraband cigarettes, counterfeit goods, and illicit drugs. For example, according to U.S. 
law enforcement officials, Hizballah earned an estimated profit of $1.5 million in the United States between 1996 and 2000 by purchasing cigarettes in a low-tax state for a lower price and selling them in a high-tax state at a higher price. Terrorists also earned funds using systems such as charitable organizations that collect large sums in donations from both witting and unwitting donors. Second, to move assets, terrorists seek out mechanisms that enable them to conceal or launder their assets through nontransparent trade or financial transactions such as the use of charities, informal banking systems, bulk cash, and commodities that may serve as forms of currency, such as precious stones and metals. Third, to store assets, terrorists may use similar commodities because they are likely to maintain value over a longer period of time and are easy to buy and sell outside the formal banking system. The true extent of terrorists’ use of alternative financing mechanisms is unknown, owing to the criminal nature of the activity and the lack of systematic data collection and analysis. The limited and sometimes conflicting information available on alternative financing mechanisms adversely affects the ability of U.S. government agencies to assess risk and prioritize efforts. U.S. law enforcement agencies, and specifically the FBI, which leads terrorist financing investigations and maintains case data, do not systematically collect and analyze data on terrorists’ use of alternative financing mechanisms. The lack of such data collection prevents the FBI from conducting systematic analyses of trends and patterns in alternative financing mechanisms. Without such an assessment, the FBI would not have analyses that could aid in assessing risk and prioritizing efforts. Moreover, although some U.S. government officials and researchers have acknowledged the need for further analysis of the extent of terrorists’ use of alternative financing mechanisms, U.S. 
government reporting on these issues has not always been timely or comprehensive, which could affect planning and coordination efforts. For example, the Departments of the Treasury and Justice did not produce a report on the links between terrorist financing and precious stone and commodity trading, as was required by March 2003 under the 2002 NMLS. Moreover, we found widely conflicting views in numerous interviews and available reports and documentation concerning terrorists’ use of precious stones and metals. In monitoring terrorists’ use of alternative financing mechanisms, the U.S. government faces a number of significant challenges including limited access to terrorist networks, the adaptability of terrorists, and competing demands or priorities within the U.S. government. First, according to law enforcement agencies and researchers, it is difficult to access or infiltrate ethnically or criminally based networks that operate in a nontransparent manner, such as informal banking systems or the precious stones and other commodities industries. Second, the ability of terrorists to adapt their methods hinders efforts to target high-risk industries and implement effective mechanisms for monitoring high-risk industry trade and financial flows. According to the FBI, once terrorists know that an industry they use to earn or move assets is being watched, they may switch to an alternative commodity or industry. Finally, competing priorities create challenges to federal and state officials’ efforts to use and enforce applicable U.S. laws and regulations in monitoring terrorists’ use of alternative financing mechanisms. For example, we reported to you in November 2003 the following: Although the Internal Revenue Service (IRS) agreed with us in 2002 to begin developing a system, as allowed by law, to share with states data that would improve oversight and could be used to deter terrorist financing in charities, the IRS had not made this initiative a priority. 
The IRS had not developed and implemented the system, citing competing priorities. Officials of the Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) stated that the workload created under the 2001 Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act (USA PATRIOT Act) was initially heavy and may have slowed efforts to take full advantage of the act’s provisions concerning the establishment of anti-money laundering programs. FinCEN anti-money laundering program rules for dealers in precious metals, stones, or jewels were proposed on February 21, 2003, and had not been finalized when we recently contacted FinCEN on February 24, 2004. FBI officials told us that the 2002 NMLS contained more priorities than could be realistically accomplished, and Treasury officials said that resource constraints and competing priorities were the primary reasons why strategy initiatives, including those related to alternative financing mechanisms, were not met or were completed later than expected. As a result of our earlier findings, we recommended that the Director of the FBI, in consultation with relevant U.S. government agencies, systematically collect and analyze information involving terrorists’ use of alternative financing mechanisms. Justice agreed with our finding that the FBI does not systematically collect and analyze such information, but Justice did not specifically agree or disagree with our recommendation. However, both ICE and IRS senior officials have informed us that they agree that law enforcement agencies should have a better approach to assessing the use of alternative financing mechanisms. We also recommended that the Secretary of the Treasury and the Attorney General produce the report on the links between terrorism and the use of precious stones and commodities that was required by March 2003 under the 2002 NMLS based on up-to-date law enforcement investigations. 
The Treasury responded that the report would be included as an appendix in the 2003 NMLS. Precious stones and commodities received limited attention in an appendix on trade-based money laundering within the 2003 NMLS that was released in November 2003. It remains unclear how this will serve as the basis for an informed strategy. We recommended that the Commissioner of the IRS, in consultation with state charity officials, establish interim IRS procedures and state charity official guidelines, as well as set milestones and assign resources for developing and implementing both, to regularly share data on charities as allowed by federal law. The IRS agreed with our recommendation, and we are pleased to report that the IRS expedited efforts and issued IRS procedures and state guidance on December 31, 2003, as stated in its agency comments in response to our report. As noted earlier, in May 2003, to resolve jurisdictional issues and enhance interagency coordination, the Attorney General and the Secretary of Homeland Security signed a Memorandum of Agreement specifying that the FBI was to have the lead role in investigating terrorist financing and that ICE was to pursue terrorist financing solely through participation in FBI-led task forces, except as expressly approved by the FBI. Also, the Agreement contained several provisions designed to increase information sharing and coordination of terrorist financing investigations. For example, the Agreement required the FBI and ICE to (1) detail appropriate personnel to each other’s agency and (2) develop specific collaborative procedures to determine whether applicable ICE investigations or financial crimes leads may be related to terrorism or terrorist financing. 
Another provision required that the FBI and ICE jointly report to the Attorney General, the Secretary of Homeland Security, and the Assistant to the President for Homeland Security on the status of the implementation of the Agreement 4 months from its effective date. In February 2004, we reported to the Senate Appropriations Subcommittee on Homeland Security that the FBI and ICE had implemented or taken concrete steps to implement most of the key Memorandum of Agreement provisions. For example, the agencies had developed collaborative procedures to determine whether applicable ICE investigations or financial crimes leads may be related to terrorism or terrorist financing—and, if so, determine whether these investigations or leads should thereafter be pursued under the auspices of the FBI. However, we noted that the FBI and ICE had not yet issued a joint report on the status of the implementation, which was required 4 months from the effective date of the Agreement. By granting the FBI the lead role in investigating terrorist financing, the Memorandum of Agreement has altered ICE’s role in investigating terrorism-related financial crimes. However, while the Agreement specifies that the FBI has primary investigative jurisdiction over confirmed terrorism-related financial crimes, the Agreement does not preclude ICE from investigating suspicious financial activities that have a potential (unconfirmed) nexus to terrorism—which was the primary role of the former Operation Green Quest. Moreover, the Agreement generally has not affected ICE’s mission or role in investigating other financial crimes. Specifically, the Agreement did not affect ICE’s statutory authorities to conduct investigations of money laundering and other traditional financial crimes. 
ICE investigations can still cover the wide range of financial systems—including banking systems, money services businesses, bulk cash smuggling, trade-based money laundering systems, illicit insurance schemes, and illicit charity schemes—that could be exploited by money launderers and other criminals. According to ICE headquarters officials, ICE is investigating the same types of financial systems as before the Memorandum of Agreement. Further, our February 2004 report noted that—while the Memorandum of Agreement represents a partnering commitment by the FBI and ICE— continued progress in implementing the Agreement will depend largely on the ability of these law enforcement agencies to meet various operational and organizational challenges. For instance, the FBI and ICE face challenges in ensuring that the implementation of the Agreement does not create a disincentive for ICE agents to initiate or support terrorist financing investigations. That is, ICE agents may perceive the Agreement as minimizing their role in terrorist financing investigations. Additional challenges involve ensuring that the financial crimes expertise and other investigative competencies of the FBI and ICE are effectively utilized and that the full range of the agencies’ collective authorities—intelligence gathering and analysis as well as law enforcement actions, such as executing search warrants and seizing cash and other assets—are effectively coordinated. Inherently, efforts to meet these challenges will be an ongoing process. Our interviews with FBI and ICE officials at headquarters and three field locations indicated that long-standing jurisdictional and operational disputes regarding terrorist financing investigations may have strained interagency relationships to some degree and could pose an obstacle in fully integrating investigative efforts. 
On a broader scale, as discussed below, we also have reported that opportunities exist to improve the national strategy for combating money laundering and other financial crimes, including terrorist financing. The 1998 Strategy Act required the President—acting through the Secretary of the Treasury and in consultation with the Attorney General and other relevant federal, state, and local law enforcement and regulatory officials—to develop and submit an annual NMLS to the Congress by February 1 of each year from 1999 through 2003. Also, in 2002, the NMLS was adjusted to reflect new federal priorities in the aftermath of September 11 including a goal to combat terrorist financing. Unless reauthorized by the Congress, the requirement for an annual NMLS ended with the issuance of the 2003 strategy. To assist in congressional deliberations on whether there is a continuing need for an annual NMLS, we reviewed the development and implementation of the 1999 through 2002 strategies. In September 2003, we reported to this Caucus that, as a mechanism for guiding the coordination of federal law enforcement agencies’ efforts to combat money laundering and related financial crimes, the annual NMLS has had mixed results but generally has not been as useful as envisioned by the Strategy Act. For example, we noted that although Treasury and Justice had made progress on some NMLS initiatives designed to enhance interagency coordination of investigations, most had not achieved the expectations called for in the annual strategies, including plans to (1) use a centralized system to coordinate investigations and (2) develop uniform guidelines for undercover investigations. Headquarters officials cited differences in the various agencies’ anti-money laundering priorities as a primary reason why initiatives had not achieved their expectations. 
Most financial regulators we interviewed said that the NMLS had some influence on their anti-money laundering efforts because it provided a forum for enhanced coordination, particularly with law enforcement agencies. Law enforcement agency officials said the level of coordination between their agencies and the financial regulators was good. However, the financial regulators also said that other factors had more influence on them than the strategy. For example, the financial regulators cited their ongoing oversight responsibilities in ensuring compliance with the Bank Secrecy Act as a primary influence on them. Another influence has been anti-money laundering working groups, some of which were initiated by the financial regulators or law enforcement agencies prior to enactment of the 1998 Strategy Act. The officials said that the U.S. government’s reaction to September 11, which included a change in government perspective and new regulatory requirements placed on financial institutions by the USA PATRIOT Act, has driven their recent anti-money laundering and antiterrorist financing efforts. Although the financial regulators said that the NMLS had less influence on their anti-money laundering activities than other factors, they have completed the tasks for which the NMLS designated them as lead agencies over the years, as well as most of the tasks for which they were to provide support to the Treasury. In our September 2003 report, we noted that our work in reviewing national strategies for various crosscutting issues has identified several critical components needed for their development and implementation, including effective leadership, clear priorities, and accountability mechanisms. For a variety of reasons, these critical components generally have not been fully reflected in the development and implementation of the annual NMLS. 
For example, the joint Treasury-Justice leadership structure that was established to oversee NMLS-related activities generally has not resulted in (1) reaching agreement on the appropriate scope of the strategy; (2) ensuring that target dates for completing strategy initiatives were met; and (3) issuing the annual NMLS by February 1 of each year, as required by the Strategy Act. Also, although the Treasury generally took the lead role in strategy-related activities, the department had no incentives or authority to get other departments and agencies to provide necessary resources and compel their participation. And, the annual strategies have not identified and prioritized issues that required the most immediate attention. Each strategy contained more priorities than could be realistically achieved, the priorities have not been ranked in order of importance, and no priority has been explicitly linked to a threat and risk assessment. Further, although the 2001 and 2002 strategies contained initiatives to measure program performance, none had been used to ensure accountability for results. Officials attributed this to the difficulty in establishing such measures for combating money laundering. In addition, we noted that the Treasury had not provided annual reports to the Congress on the effectiveness of policies to combat money laundering and related financial crimes, as required by the Strategy Act. 
In summary, our September 2003 report recommended that—if the Congress reauthorizes the requirement for an annual NMLS—the Secretary of the Treasury, working with the Attorney General and the Secretary of Homeland Security, should take appropriate steps to (1) strengthen the leadership structure responsible for strategy development and implementation by establishing a mechanism that would have the ability to marshal resources to ensure that the strategy’s vision is achieved, resolve disputes between agencies, and ensure accountability for strategy implementation; (2) link the strategy to periodic assessments of threats and risks, which would provide a basis for ensuring that clear priorities are established and focused on the areas of greatest need; and (3) establish accountability mechanisms, such as requiring the principal agencies to develop outcome-oriented performance measures that are linked to the NMLS’s goals and objectives and reflected in the agencies’ annual performance plans, and providing the Congress with periodic reports on the strategy’s results. In commenting on a draft of the September 2003 report, Treasury said that our recommendations are important, should Congress reauthorize the legislation requiring future strategies; Justice said that our observations and conclusions will be helpful in assessing the role that the strategy process has played in the federal government’s efforts to combat money laundering; and Homeland Security said that it agreed with our recommendations. Our review of the development and implementation of the annual strategies did not cover the 2003 NMLS, which was issued in November 2003, about 2 months after our September 2003 report. 
While we have not reviewed the 2003 NMLS, we note that it emphasized that “the broad fight against money laundering is integral to the war against terrorism” and that money laundering and terrorist financing “share many of the same methods to hide and move proceeds.” In this regard, one of the major goals of the 2003 strategy is to “cut off access to the international financial system by money launderers and terrorist financiers more effectively.” Under this goal, the strategy stated that the United States will continue to focus on specific financing mechanisms—including charities, bulk cash smuggling, trade-based schemes, and alternative remittance systems—that are particularly vulnerable or attractive to money launderers and terrorist financiers. To be successful, efforts to disrupt terrorists’ ability to fund their operations must focus not only on the formal banking and mainstream financial sectors but also on alternative financing mechanisms. The 2003 NMLS, which was issued last November, includes a focus on alternative financing mechanisms; however, it is too soon to determine how well these efforts are working. We were pleased that IRS implemented our recommendation by expediting the establishment of procedures and guidelines for sharing data on charities with states. We continue to believe that implementation of our other two recommendations would further assist efforts to effectively address vulnerabilities posed by terrorists’ use of alternative financing mechanisms. Also, regarding investigative efforts against sources of terrorist financing, the May 2003 Memorandum of Agreement signed by the Attorney General and the Secretary of Homeland Security represents a partnering commitment by two of the nation’s premier law enforcement agencies, the FBI and ICE. In the 9 months since the Agreement was signed, progress has been made in waging a coordinated campaign against sources of terrorist financing. 
Continued progress will depend largely on the ability of the agencies to establish and maintain effective interagency relationships and meet various other operational and organizational challenges. Finally, from a broader or strategic perspective, the annual NMLS has had mixed results in guiding the efforts of law enforcement and financial regulators in the fight against money laundering and, more recently, terrorist financing. Through our work in reviewing national strategies, we identified critical components needed for successful strategy development and implementation; but, to date, these components have not been well reflected in the annual NMLS. The annual NMLS requirement ended with the issuance of the 2003 strategy. If the Congress reauthorizes the requirement for an annual NMLS, we continue to believe that incorporating these critical components—a strengthened leadership structure, the identification of key priorities, and the establishment of accountability mechanisms—into the strategy could help resolve or mitigate the deficiencies we identified. Mr. Chairman, this concludes our prepared statement. We would be happy to respond to any questions that you or Members of the Caucus may have. For further information about this testimony, please contact Loren Yager at (202) 512-4128 or Richard M. Stana at (202) 512-8777. Other key contributors to this statement were Christine M. Broderick, Danny R. Burton, Barbara I. Keller, R. Eric Erdman, Kathleen M. Monahan, Tracy M. Guerrero, and Janet I. Lewis. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The September 11, 2001, terrorist attacks highlighted the importance of data collection, information sharing, and coordination within the U.S. government. Such efforts are important whether focused on terrorism or as an integral part of a broader strategy for combating money laundering. In this testimony, GAO addresses (1) the challenges the U.S. government faces in deterring terrorists' use of alternative financing mechanisms, (2) the steps that the Federal Bureau of Investigation (FBI) and Immigration and Customs Enforcement (ICE) have taken to implement a May 2003 Memorandum of Agreement concerning terrorist financing investigations, and (3) whether the annual National Money Laundering Strategy (NMLS) has served as a useful mechanism for guiding the coordination of federal efforts to combat money laundering and terrorist financing. GAO's testimony is based on two reports written in September 2003 (GAO-03-813) and November 2003 (GAO-04-163) for the Caucus and congressional requesters within the Senate Governmental Affairs Committee, as well as a February 2004 report (GAO-04-464R) on related issues for the Senate Appropriations Subcommittee on Homeland Security. The U.S. government faces various challenges in determining and monitoring the nature and extent of terrorists' use of alternative financing mechanisms, according to GAO's November 2003 report. Alternative financing mechanisms are outside the mainstream financial system and include the use of commodities (cigarettes, counterfeit goods, illicit drugs, etc.), bulk cash, charities, and informal banking systems to earn, move, and store assets. GAO recommended more systematic collection, analysis, and sharing of information to make alternative financing mechanisms less attractive to terrorist groups. 
In response to our recommendation that the FBI, in consultation with other agencies, systematically collect and analyze information on terrorists' use of these mechanisms, Justice did not specifically agree or disagree with our recommendation, but other agencies agreed with the need for improved analysis. The Treasury agreed with our recommendation to issue an overdue report on precious stones and commodities, but it remains unclear how the resulting product may be used as the basis for an informed strategy as expected under the 2002 NMLS. The Internal Revenue Service (IRS) agreed with our recommendation to develop and implement procedures for sharing information on charities with states and issued IRS procedures and state guidance on December 31, 2003. To resolve jurisdictional issues and enhance interagency coordination of terrorist financing investigations, the FBI and ICE have taken steps to implement most of the key provisions of the May 2003 Memorandum of Agreement. According to GAO's February 2004 report, the agencies have developed collaborative procedures to determine whether applicable ICE investigations or financial crimes leads may be related to terrorism or terrorist financing--and, if so, determine whether the FBI should thereafter take the lead in pursuing them. GAO's report noted that continued progress will depend largely on the ability of the agencies to establish and maintain effective interagency relationships. From a broader or strategic perspective, the annual NMLS generally has not served as a useful mechanism for guiding coordination of federal efforts to combat money laundering and terrorist financing, according to GAO's September 2003 report. While Treasury and Justice had made progress on some strategy initiatives designed to enhance interagency coordination of investigations, most initiatives had not achieved the expectations called for in the annual strategies. 
The report recommended (1) strengthening the leadership structure for strategy development and implementation, (2) identifying key priorities, and (3) establishing accountability mechanisms. In commenting on a draft of the September 2003 report, Treasury said that our recommendations are important, should the Congress reauthorize the legislation requiring future strategies; Justice said that our observations and conclusions will be helpful in assessing the role that the strategy process has played in the federal government's efforts to combat money laundering; and Homeland Security said that it agreed with our recommendations.
Federal interest in performance information and its potential relationship to budgeting practices has existed to varying degrees for over 50 years. More recently, this interest culminated in the passage of GPRA and related management reforms of the 1990s. GPRA mandates that federal agencies develop performance information describing the relative effectiveness and efficiency of federal programs as a means of improving the congressional and executive decision-making processes. Among other statutory obligations, GPRA requires federal agencies to publish strategic and annual plans describing specific program activities with the intention of establishing a more tangible link between performance information for these programs and agency budget requests. The current administration has taken several steps to strengthen performance-resource linkages for which GPRA laid the groundwork. The budget and performance initiative of the PMA contains the criteria agencies must meet in order to achieve “green” status on the initiative. The criteria include elements relating to budgeting and strategic planning and also tie those elements to individual performance. As we have previously reported, creating a “line of sight” between individual performance and organizational success is a leading practice used in public sector organizations to become more results-oriented. Central to the budget and performance integration initiative of the PMA, the PART is a means to strengthen the process for assessing the effectiveness of programs by making that process more robust, transparent, and systematic. The PART is a series of diagnostic questions designed to provide a consistent approach to rating federal programs. (See app. II for the PART questionnaire.) Drawing on available performance and evaluation information, OMB staff use the questionnaire to rate the strengths and weaknesses of federal programs with a particular focus on individual program results. 
The PART asks, for example, whether a program’s long-term goals are specific, ambitious, and focused on outcomes, and whether annual goals demonstrate progress toward achieving long-term goals. It is designed to be evidence-based, drawing on a wide array of information, including authorizing legislation; GPRA strategic plans, annual performance plans, and reports; financial statements; inspectors general and GAO reports; and independent program evaluations. The PART questions are divided into four sections; each section is given a specific weight in determining the final numerical rating for a program. Table 1 shows an overview of the four PART sections and the weights OMB assigned. In addition, each PART program is assessed according to one of seven major approaches to delivering services. Table 2 provides an overview of these program types and the number and percentage of programs covered by each type in the 2002 through 2004 performance assessments. As of February 2005, the PART ratings have been published for 607 programs (according to OMB, this represents about 60 percent of the federal budget). Each program received one of four overall ratings: (1) “effective,” (2) “moderately effective,” (3) “adequate,” or (4) “ineffective” based on program design, strategic planning, management, and results. A fifth rating, “results not demonstrated,” was given—independent of a program’s numerical score—if OMB decided that a program’s performance information, performance measures, or both were insufficient or inadequate. Table 3 shows the distribution of ratings for 2002-2004. During the next 2 years, the administration plans to assess all remaining executive branch programs with limited exceptions. 
In our January 2004 report on the PART, you asked us to examine (1) how the PART changed OMB’s decision-making process in developing the President’s fiscal year 2004 budget request; (2) the PART’s relationship to the GPRA planning process and reporting requirements; and (3) the PART’s strengths and weaknesses as an evaluation tool, including how OMB ensured that the PART was applied consistently. We found that the PART helped structure OMB’s use of performance information for its internal program and budget analysis, made the use of this information more transparent, and stimulated agency interest in budget and performance integration. Our analysis confirmed that one of the PART’s major impacts was its ability to highlight OMB’s recommended changes in program management and design. We noted that while much of the PART’s potential value lies in the related program recommendations, realizing these benefits would require sustained attention to implementation and oversight to determine if desired results are achieved, and that OMB needs to remain cognizant of this as it considers capacity and workload issues in the PART. We also recognized that while there are inherent challenges in assigning a single rating to programs having multiple purposes and goals, OMB devoted considerable effort to promoting consistent ratings, but challenges remain in addressing inconsistencies among OMB staff, such as interpreting the PART guidance and defining acceptable measures. OMB senior officials recently told us that inconsistencies in the PART process could also be attributed to agency staff, given the shared agency-OMB responsibilities in the PART process. Limited credible evidence on results also constrained OMB’s ability to rate program effectiveness, as evidenced by the almost 50 percent of programs rated “results not demonstrated.” We also found that the PART is not well integrated with GPRA—the current statutory framework for strategic planning and reporting. 
We said that by using the PART process to review and sometimes replace GPRA goals and measures, OMB substituted its judgment for a wide range of stakeholder interests. The PART/GPRA tension was further highlighted by challenges in defining a unit of analysis useful for both program-level budget analysis and agency planning purposes. Although the PART can stimulate discussion on program-specific measurement issues, it cannot substitute for GPRA’s focus on thematic goals and department- and governmentwide crosscutting comparisons, and was not used to evaluate similar programs together to facilitate trade-offs or make relative comparisons. Lastly, we said that while the PART clearly must serve the President’s interests, the many actors whose input is critical to decisions will not likely use performance information unless they feel it is credible and reflects a consensus on goals. Our work showed that if OMB wanted to expand the understanding and use of the PART beyond the executive branch, it would be important for OMB to discuss in a timely fashion with Congress the focus of the PART assessments and clarify the results and limitations of the PART and the underlying performance information. On the other hand, we noted that a more systematic congressional approach to providing its perspective on performance issues and goals could facilitate OMB’s understanding of congressional priorities and thus increase the PART’s usefulness in budget deliberations. In light of these issues, we recommended that OMB address the capacity demands of the PART, strengthen the PART guidance, address evaluation information availability and scope issues, focus program selection on crosscutting comparisons and critical operations, broaden the dialogue with congressional stakeholders, and articulate and implement a complementary relationship between the PART and GPRA. 
We also suggested that Congress consider the need for a structured approach to articulating its perspective and oversight agenda on performance goals and priorities for key programs. OMB took several steps to implement many of our recommendations. For example, OMB clarified its PART guidance on defining the PART programs, using outcome and output measures, and expanded the discussion of evaluation quality; began to use the PART as a framework for crosscutting assessments; and expanded its discussion about the relationship between the PART and GPRA. The guidance notes that the PART strengthens and reinforces performance measurement under GPRA by encouraging the careful development of performance measures according to GPRA’s outcome-oriented standards. It also requires that PART goals be “appropriately ambitious” and that GPRA and the PART performance measures be consistent. OMB has also begun reporting on the status of each program’s recommendations and implemented PARTWeb, a Web-based data collection tool to, among other things, improve collaboration between OMB and agencies and centrally track the implementation and status of the PART recommendations. The PART process has aided OMB’s oversight of agencies, and has focused agencies’ efforts to improve performance measurement. According to OMB, the PART is a framework for program assessment and informs its budget decisions. Many agency officials told us that the PART helped either create or strengthen a culture of evaluation within the agencies by providing external motivation for program review. Not surprisingly, agency officials used the PART results to make a case for increased resources in general and for program evaluation specifically. 
This increased focus on performance is often reflected in improved ratings when “results not demonstrated” programs get reassessed by the PART—86 percent of programs previously rated “results not demonstrated” were subsequently rated adequate, moderately effective, or effective when reassessed. This focus is not without cost, however; the PART remains a labor-intensive process for both OMB and agencies. OMB senior officials describe the PART as providing a consistent framework for assessing federal programs, and as a means to inform its budget decisions. OMB clearly relies on the PART—a significant component of the PMA—as a major oversight tool and finds information from the PART reviews useful. As we previously reported, the PART has helped to structure and discipline OMB’s use of performance information for internal program analysis and budget review, and made their use of this information more transparent. Given the PART’s use in the budget process, the high profile of the PMA scorecard, and the strong connection between the PART and successful performance on the PMA’s budget and performance integration initiative, agencies have clear incentives to take the PART seriously. Many agency officials told us that the PART helped either create or strengthen a culture of evaluation within the agencies by providing external motivation for program review. The PART question that asks whether a program has undergone regular, independent evaluations sends the message that program assessment and evaluation is an important management tool. For example, according to one agency official at the Health Resources Services Administration in HHS, this requirement encouraged staff to think more broadly about using different types of program evaluations and how they could get the most out of their evaluation dollars. 
Another HHS official reported that the PART provided an impetus for finishing strategic and evaluation plans for his program, which in turn helped inform the division’s planning process. Our companion report on the PART evaluation recommendations reports similar findings, noting that most program officials interviewed for that report said that the PART recommendations directed senior management’s attention to the need for evaluation. Not surprisingly, agency officials used the PART results, in some cases successfully, to argue for increased resources in general as well as specifically for program evaluation. For example, officials in one agency said that a good rating on the PART is a powerful aid in gaining bipartisan support for budget increases. DOL agency officials told us that absent the PART, they might not have received funding to evaluate the Youth Activities program—a program they felt had been in need of an evaluation for a long time. Agency officials we spoke with credited the PART with increasing attention to the use of performance measurement in day-to-day program management, which they considered to be of greater consequence than the PART’s bottom-line ratings and recommendations. For example, agency officials at DOL credited the first year’s PART assessments with encouraging managers to take steps prior to assessments to identify and address program weaknesses, develop and improve performance measures, and train staff on the PART. Officials from DOL’s Trade Assistance program said that the PART forced them to look at the program in a new light, and be objective about what they are doing and how they are doing it. SBA officials said that the PART and the PMA helped them move away from “analysis by anecdote” and refocused their attention on the impact their programs have on small businesses, instead of largely on output measures such as the number of loans they have made. 
One official at HHS said that the PART allowed him to “evangelize” on the importance of good performance data and the perils of bad data. Other officials echoed a similar sentiment, one of them indicating that the PART scores helped to create “a new sense of urgency” about performance measures and completing the changes to performance systems that were already underway. The link between program ratings and the PMA scorecard also provided an incentive for change. “Results not demonstrated” ratings have implications beyond the PART. For an agency to achieve “green” on the Performance and Budget Integration initiative of the PMA scorecard, less than 10 percent of its programs could have received a results not demonstrated PART rating for more than 2 years in a row. According to OMB’s PMA scorecard update as of June 30, 2005, only nine agencies have met this particular criterion. This increased focus on performance is often reflected in improved ratings when programs originally rated “results not demonstrated” are reassessed. When reassessed, 86 percent of programs previously rated “results not demonstrated” were rated adequate, moderately effective, or effective. Because programs were only reassessed when OMB determined that significant changes had been made to address deficiencies, this result is not surprising. However, it was, on average, the “results not demonstrated” programs with initially higher section IV scores (section IV measures program results) that, when reassessed, showed the greatest improvement in rating. While there were programs with low section IV scores that received an “adequate” rating when reassessed, lower scoring programs generally remained in the “results not demonstrated” category or received an “ineffective” rating when reassessed. Although the PART has enhanced the focus on performance, this has not come without a cost. 
As we reported in our January 2004 report, senior OMB managers recognized the increased workload the PART initially placed on examiners; however, they expected the workload to decline as both OMB and agency staff became more familiar with the PART tool and process, and as issues with the timing of the PART reviews were resolved. During this review we found that while the learning curve did appear to flatten, it did not seem to compensate for either the increased workload due to the sheer number of programs being assessed or reassessed each year or the amount of time an individual assessment takes. This finding is consistent with views expressed by OMB staff during our 2004 review. They told us that they were surprised that reassessments took almost as long as assessing programs for the first time. OMB limited the scope of reassessments to include only those programs where there is significant evidence of change. Programs that received a “results not demonstrated” rating received priority for reassessment. According to OMB officials, this change was made partly due to resource constraints. Officials in some of our case study agencies expressed concern that OMB’s growing workload affects how the PART programs are defined. They said that as more programs are assessed OMB has less time to focus on the PART units that “make sense” and instead is creating larger PART units to help control the number of the PART assessments that need to be completed. One official recognized that getting into too much detail can be time consuming, but nonetheless noted that reviewing a larger “program” can lead to missing some important details; another said it can lead to recommendations that are not specific enough to be useful to a program. 
One agency official said that the PART assessments can be thoughtful when OMB is knowledgeable about a program and has enough time to complete the reviews, but the assessments are less useful when OMB staff are unfamiliar with programs or have too many of the PART assessments to complete. Officials across all of our case study agencies reported these types of issues. For example, one official said that although the PART reviews were to be completed by the cognizant OMB examiner for the program, this was not always the case. He said that due to turnover at OMB, programs in his department were assessed by several people even within a single PART cycle, resulting in a lack of continuity. In several cases, agencies reported that OMB was not able to reassess programs because of resource constraints. Some officials told us that the heavy workload meant that the PARTs were not completed in a timely enough fashion to allow agencies to appeal ratings or present new performance measures, sometimes resulting in lower PART scores. OMB officials noted that OMB policy permits agencies to appeal answers to individual questions, not entire ratings, and that in practice, ratings may be appealed at any time during the PART process whether the ratings are in draft form or completed. Although a senior OMB official acknowledged continuing capacity issues regarding the PART, he said that the PART is still a better way for examiners to accomplish their traditional program assessment responsibilities because it is more objective and transparent. He noted that OMB is devoting more people to help administer the PART tool and that PARTWeb, OMB’s new Web-based data collection system for the PART, is also designed to ease the management of the process. For example, the official said that PARTWeb will automate the production of PART summary sheets. The PART is a resource-intensive process for agencies as well. 
Some agency officials at DOL noted that the PART process is “one size fits all” in that a small program at DOL is supposed to have the same resources to devote to helping the budget examiner through the process and have the same analytic and evaluation resources as a large organization like the Social Security Administration. They said that agency staff is diverted from mission work to the PART work and in some cases is spending significant time on helping OMB staff understand the history and context of the programs. OMB has said that a major purpose of the PART is program improvement. Our analysis supports OMB’s statements that most of the PART recommendations to date were aimed at improving performance assessment, such as developing outcome and efficiency measures and collecting performance data. Improving managers’ ability to assess program outcomes, identify information gaps, and assess next steps is a necessary first step on the path to long-term program improvement, but is not expected to result in observable program improvement in the short term. Moreover, as of February 2005—the date of the most recent available OMB data—the majority of the PART recommendations have not yet been fully implemented. Consequently, there is limited evidence to date of outcome-based program results. Implementing the PART recommendations has proven challenging. Although some agency officials appreciated the flexibility OMB provided by not making prescriptive recommendations, some follow-on actions were so general that it was difficult to understand what change was required or how progress could be measured. Some agencies did not discuss with OMB the evaluation plans created in response to the PART recommendations; combined with different expectations on the scope and purpose of the evaluations and the quality of evaluation designs, it is not certain whether these evaluations will meet OMB’s needs. 
Lastly, OMB has used the PART as a framework for several crosscutting reviews, but these generally do not include all tools, such as tax expenditures, that contribute to related goals. Greater focus on selecting related programs and activities for concurrent review would improve their usefulness. For each program assessment, the PART summary worksheets were published in a separate volume with the President’s fiscal year 2004 budget request. For the fiscal year 2005 and 2006 budgets, similar information was provided in an accompanying CD-ROM. The detailed, supporting worksheets for each program were posted on OMB’s Web site. The PART summary sheets display the program’s key performance measures, budget information, significant findings, and follow-up actions (also known as recommendations; see fig. 1 for examples of follow-on actions). In the fiscal year 2006 budget, summary sheets for programs that have been previously assessed also include information on when the program was last assessed and the status of the follow-up actions. Status markers include “no action taken,” “action taken but not completed,” and “completed.” (See appendix III for examples of summary worksheets for programs assessed for the first time and for programs that were reassessed.) As figure 2 shows, the distribution of recommendations among program management, assessment, and design is fairly consistent over the 3 years; however, the percentage of recommendations with explicit funding references in a given year has steadily declined since the PART’s inception, from 20 percent in 2002 to 12 percent in 2004. A major goal of the PART is to identify program strengths and weaknesses and make recommendations to improve program results. However, we found that the link between problems identified by the PART assessments and the recommendations intended to address them is sometimes unclear. 
Regardless of what types of deficiencies were identified by the PART, the most frequent recommendations in each of the three years were related to performance assessments, such as developing outcome measures and/or goals, and improving data collection. While this was especially true for “results not demonstrated” programs, it also held true for programs rated “effective” and “moderately effective.” Moreover, programs assessed for the first time in 2004—the most recent year for which data is available—received recommendations to improve performance assessments, such as outcome measures, as frequently as programs assessed during the first PART cycle. More than half of all the PART recommendations made since the PART’s inception are aimed at improving the “process” of program assessment. This includes developing meaningful and robust performance goals and collecting quality data to measure progress against those goals. Of the 797 follow-on recommendations made in the first 2 years for which OMB provided status information, 30 percent were considered fully implemented. Of these, 47 percent are geared toward performance assessment. For example, the Animal and Plant Health Inspection Service Plant and Animal Health Monitoring Programs within the Department of Agriculture received three recommendations, one of which would create efficiency measures and another of which would update the program’s measures and accomplishments. Such measures improve managers’ ability to assess program outcomes, identify information gaps, and assess next steps, but are not expected to result in observable program improvement in the short term. OMB claims that many programs are getting better every year—which it defines as improving program outcomes, taking steps to address the PART findings, improving program management, and becoming more efficient—but, as noted above, these claims have not yet been fully borne out. 
Some recommendations are aimed at changing a program’s purpose or design and/or implicitly or explicitly require action by Congress. For example, the Department of Agriculture’s Commodity Credit Corporation’s Marketing Loan Payments program received a recommendation to have the “House and Senate Agricultural committees examine the issue of payment limits for marketing loan and LDP gains and how they could be tightened.” A Department of Education special education program was told to “work with Congress on the IDEA reauthorization to increase the act’s focus on accountability and results, and reduce unnecessary regulatory and administrative burdens.” Even in cases where there is general agreement that legislative action or statutory changes are needed, making them takes time. OMB has said that if statutory provisions impede effectiveness, one result of a PART review could be recommendations for legislative changes. The responsibility to implement the PART recommendations lies with agency and program managers. Successfully implementing recommendations that require legislative action or statutory changes requires the additional step of positively engaging Congress. A perceived disconnect between what one is held accountable for and what one has the authority to accomplish is not unusual. Our 2003 governmentwide survey of federal managers supports this view. We found that while 57 percent of non-Senior Executive Service (SES) managers and 61 percent of SES managers believed they were held accountable for results to a great or very great extent, only 38 percent and 40 percent, respectively, believed that managers at their level had the decision-making authority they needed to achieve agency goals. 
Although OMB has given agencies discretion to define strategies to implement recommendations, OMB officials told us that, as a matter of policy, they have generally not prioritized the recommendations within each agency or across the more than 1,700 recommendations governmentwide because they do not want to dilute attention paid to any of the recommendations by deeming them a lower priority. Realistically, though, agencies cannot act on all of them concurrently. Because OMB has chosen to assess nearly all federal programs, resources are diffused across multiple areas instead of concentrated on those areas of highest priority both within agencies and across the federal government. This strategy is likely to lengthen the time it will take to observe measurable change. Moreover, as we report in our companion report on the PART evaluation recommendations, agency officials questioned the PART’s assumption that all programs should have evaluations. Agency officials in one of our case study agencies said that they were able to fund some evaluations for small programs without cutting into program budgets, but other agency officials pointed out that spending several hundred thousand dollars for an evaluation study was a reasonable investment for large programs; they questioned whether all programs, regardless of size or importance, need to be evaluated, especially in times of tight resources and suggested instead a risk-based approach to prioritizing programs to be evaluated. We also noted that the lack of prioritization meant that agencies were free to choose which programs to evaluate first, and were likely to be influenced by such factors as the potential effect of the PART reassessments on their PMA scores. OMB gives agencies wide latitude to implement the PART recommendations, which had both positive and negative effects on agency actions. Some officials appreciated the flexibility that OMB provided by not making prescriptive recommendations. 
They said that they were generally able to devise implementation strategies that suit programmatic needs and in most cases were able to devise implementation strategies that fit within existing agency plans and procedures. While they discuss their strategies with OMB, it is generally up to agency staff to determine the best course of action to implement the recommendations. In other cases, though, agency officials said that the recommendations were so broad as to be vague. This sometimes hampered implementation. For example, a DOE program received a recommendation to “improve performance reporting by grantees and contractors by September, 2004.” DOE officials told us that in this case, the desired result is unclear. The PART requires that they report grantee performance both aggregated on a programwide level and disaggregated at the grantee level. DOE officials said that because they already report grantee information in each of these ways for both the PART and their Performance and Accountability Report (PAR), and because the recommendation does not describe the deficiencies in the reporting, they are unsure how to change their reporting practices to meet OMB’s needs. Our review of this program’s PART worksheet supports this view. Although we found one reference to “inadequate communication in the PAR of program-level aggregate data on the impact of the grants program” in the detailed supporting worksheet for this program, the reason for the inadequacy is unclear. In cases such as these, it is difficult to see how OMB and agencies can monitor progress. Given the importance OMB places on implementing the PART recommendations, it is important that recommendations clearly identify deficiencies and provide a basis for determining whether they are complete. Federal agencies are increasingly expected to demonstrate effectiveness in achieving agency or governmentwide goals. 
Our previous work has shown that the accuracy and quality of evaluation information necessary to make the judgments called for when rating programs is highly uneven across the federal government. To help explain linkages between program activities, outputs, and outcomes, program evaluation designs are tailored to address various types of programs and questions. For example, a process evaluation reviews various aspects of program operations to assess the extent to which a program is operating as intended. Alternatively, an impact evaluation depends on scientific research methods to assess the net effect of a program by comparing program outcomes with an estimate of what would have happened in the absence of the program, in order to isolate the program's contributions to the observed outcomes. In general, evaluations are useful to specific decisionmakers to the degree that the evaluations are credible and address their information needs. Our companion report notes that although the evaluation recommendations provided agencies with flexibility to interpret what evaluation information OMB expected and which evaluations to fund, a few programs did not discuss their evaluation plans with OMB; combined with differing expectations about the scope and purpose of evaluations and disagreements about the quality of evaluation designs, this makes it uncertain whether these evaluations will meet OMB’s needs. OMB and our case study agencies significantly differed in defining evaluation scope and purpose. Some of the difficulties seemed to derive from the OMB examiners’ expecting to find, in the agencies’ external evaluation studies, comprehensive judgments about program design, management, and effectiveness, similar to the judgments made in the PART examinations. Because evaluations designed for internal and external audiences often have a different focus, differences of opinion on the usefulness of evaluations are perhaps not surprising. 
Evaluations that agencies initiate typically aim to identify how to improve the allocation of program resources or the effectiveness of program activities. Studies requested by program authorizing or oversight bodies, such as OMB, are more likely to address external accountability—to judge whether the program is properly designed or is solving an important problem. HHS departmental officials reported having numerous differences with OMB examiners over the acceptability of their evaluations. HHS officials were particularly concerned that OMB sometimes disregarded their studies and focused exclusively on OMB’s own assessments. One program official complained that OMB did not adequately explain why the program’s survey of refugees’ economic adjustment did not qualify as an “independent, quality evaluation,” although an experienced, independent contractor conducted the interviews and analysis. In the published PART review, OMB acknowledged that the program surveyed refugees to measure outcomes and monitored grantees on site to identify strategies for improving performance. In our subsequent interview, OMB staff explained that the outcome data did not show the mechanisms by which the program achieved its outcomes and grantee monitoring did not substitute for obtaining an external evaluation, or judgment, of the program’s effectiveness. Other HHS officials said that OMB had been consistent in applying the standards for independent evaluation, but that these standards were set extremely high. 
In reviewing a vaccination program, OMB did not accept the several research and evaluation studies offered, since they did not meet all key dimensions of “scope.” OMB acknowledged that the program had conducted several management evaluations of the program to see whether the program could be improved but found their coverage narrow and concluded “there have previously been no comprehensive evaluations looking at how well the program is structured/managed to achieve its overall goals.” The examiner also did not accept an external Institute of Medicine evaluation of how the government could improve its ability to increase immunization rates because the evaluation report had not looked at the effectiveness of the individual federal vaccine programs or how the program complemented the other related programs. However, in reviewing recommendation status, OMB credited the program with having contracted for a comprehensive evaluation that was focused on the operations, management, and structure of this specific vaccine program. OMB and agencies differed in identifying which evaluation methods were sufficiently rigorous to provide high-quality information on program effectiveness. OMB guidance encouraged the use of randomized controlled trials, or experiments, to obtain the most rigorous evidence of program impact but also acknowledged that these studies are not suitable or feasible for every program. However, without guidance on which—and when—alternative methods were appropriate, examiners and agency staff disagreed on whether specific evaluations were of acceptable quality. To help develop shared understandings and expectations, federal evaluation officials and OMB examiners held several discussions on how to assess evaluation quality according to the type of program being evaluated. 
When external factors such as economic or environmental conditions are known to influence a program’s outcomes, an impact evaluation attempts to measure the program’s net effect by comparing outcomes with an estimate of what would have occurred in the absence of the program intervention. A number of methodologies are available to estimate program impact, including experimental and quasi-experimental designs. Experimental designs compare the outcomes for groups that were randomly assigned to either the program or to a nonparticipating control group prior to the intervention. The difference in these groups’ outcomes is believed to represent the program’s impact, assuming that random assignment has controlled for any other systematic difference between the groups that could account for any observed difference in outcomes. Quasi-experimental designs compare outcomes for program participants with those of a comparison group not formed through random assignment, or with participants’ experience prior to the program. Systematic selection of matching cases or statistical analysis is used to eliminate any key differences in characteristics or experiences between the groups that might plausibly account for a difference in outcomes. Randomized experiments are best suited to studying programs that are clearly defined interventions that can be standardized and controlled, and limited in availability, and where random assignment of participants and nonparticipants is deemed feasible and ethical. Quasi-experimental designs are also best suited to clearly defined, standardized interventions with limited availability, and where one can measure, and thus control for, key plausible alternative explanations for observed outcomes. 
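As a purely illustrative sketch (not drawn from the report), the experimental design described above, which compares mean outcomes of randomly assigned treatment and control groups, can be expressed in a few lines of Python. All names and numbers below are simulated for illustration only.

```python
import random
import statistics

# Illustrative sketch only: estimate a program's impact as the difference in
# mean outcomes between a randomly assigned treatment group and a control
# group, per the experimental design described above. All data are simulated.
random.seed(42)

population = list(range(200))          # hypothetical eligible participants
random.shuffle(population)
treatment_ids = set(population[:100])  # random assignment balances the groups

def outcome(person_id):
    # Simulated outcome: a noisy baseline plus a true effect of 5.0 for
    # treated participants (numbers invented for illustration).
    baseline = random.gauss(50, 10)
    return baseline + (5.0 if person_id in treatment_ids else 0.0)

outcomes = {pid: outcome(pid) for pid in population}
treated = [v for pid, v in outcomes.items() if pid in treatment_ids]
control = [v for pid, v in outcomes.items() if pid not in treatment_ids]

# The impact estimate is the difference in group means; with random
# assignment, other systematic differences should average out.
impact_estimate = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated program impact: {impact_estimate:.2f}")
```

A quasi-experimental version would replace the random assignment with a matched comparison group and adjust statistically for measured differences between the groups, as the report describes.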
In mature, full-coverage programs where comparison groups cannot be obtained, program effects may be estimated through systematic observation of targeted measures under specially selected conditions designed to eliminate plausible alternative explanations for observed outcomes. Following our January 2004 report recommendation that OMB better define an “independent, quality evaluation,” OMB revised and expanded its guidance on evaluation quality. The guidance encouraged the use of randomized controlled trials as particularly well suited to measuring program impacts but acknowledged that such studies are not suitable or feasible for every program, so it recommended that a variety of methods be considered. In the summer of 2004, OMB also formed an Interagency Program Evaluation Working Group, which discussed this guidance extensively, to provide assistance on evaluation methods and resources to agencies undergoing a PART review. Evaluation officials from several federal agencies expressed concern that the OMB guidance materials defined the range of rigorous evaluation designs too narrowly. In the spring of 2005, representatives from several federal agencies participated in presentations about program evaluation purposes and methods with OMB examiners. They outlined the types of evaluation approaches they considered best suited for various program types and questions. (See fig. 3.) However, OMB did not substantively revise its guidance on evaluation quality for the fiscal year 2007 reviews beyond recommending that “agencies and OMB should consult evaluation experts, in-house and/or external, as appropriate, when choosing or vetting rigorous evaluations.” Although assessing programs in isolation can yield useful information, it is often critical to understand how individual programs fit within a broader portfolio of tools and strategies—such as regulations, credit programs, and tax expenditures—to accomplish federal goals. 
Such an analysis is necessary to capture whether a program complements and supports other related programs, whether it is duplicative and redundant, or whether it actually works at cross-purposes to other initiatives. Although variations of the PART tool are meant to capture the different approaches to service delivery, such as grants versus direct federal activities, not all approaches—such as tax expenditures—are systematically assessed by the PART. Tax expenditures may be aimed at the same goals as spending programs, but little is known about their effectiveness or their relative efficacy when compared to related spending programs in achieving the objectives intended by Congress. Since tax expenditures in some years are of about the same order of magnitude as discretionary spending, and in some program areas may be the main tool used to deliver services, this is a significant gap. We recently recommended that OMB require that tax expenditures be included in the PART and any future such budget and performance review processes so that tax expenditures are considered along with related outlay programs in determining the adequacy of federal efforts to achieve national objectives. Consistent with recommendations in our January 2004 report, OMB has begun to use the PART framework to conduct crosscutting assessments. OMB reported on two crosscutting PART assessments—Rural Water programs and Community and Economic Development (CED) programs—for the fiscal year 2006 budget, and it plans to conduct three additional crosscutting reviews on block grants, credit programs, and small business innovation research for the fiscal year 2007 budget. According to both OMB and agency participants in the cooperative CED assessment, several aspects worked well. For example, the CED effort leveraged federal governmentwide community and development expertise housed in the OMB Interagency Collaborative on Community and Economic Development (ICCED). 
The group focused on four elements: (1) determining the programs to be included in such a comparison; (2) reaching agreement on the goals and objectives of similar programs; (3) identifying opportunities to better coordinate, target, leverage, and increase efficiency and effectiveness of similar programs; and (4) establishing a common framework of performance measures and accountability. Cognizant agency officials were pleased with this collaborative interagency process. They found value in leveraging existing efforts within agencies and benefited from OMB staff consultation. The CED crosscutting assessment examined the performance of 18 of the 35 federal community and economic development programs identified by OMB; these 18 programs account for the majority of the $16.2 billion OMB estimates is spent annually in this area. Although OMB identified three tax expenditures in the CED portfolio, it did not assess all of them with the PART instrument even though the Department of the Treasury’s (Treasury) estimate of their combined “cost” is nearly $1.4 billion, or about 57 percent of Treasury’s revenue loss estimates for community development. Little information on the CED crosscutting assessment was initially available beyond the brief description contained in the Analytical Perspectives volume of the Fiscal Year 2006 President’s budget request. Some OMB documents and administration officials stated that all 18 programs had been assessed by the PART. However, only 8 of the 18 programs proposed for consolidation were actually assessed by the PART. Because PART programs do not always clearly align with the individual programs proposed for consolidation, it can be difficult to determine which programs were assessed with the PART and which were not. As the CED team itself recognized, the results of a crosscutting assessment need to be communicated to stakeholders and the public. 
Unless the scope, purpose, and results are clear to stakeholders, the fruits of crosscutting assessments will likely not be realized. In choosing programs to assess and reassess with the PART, OMB considers a variety of factors, including continuing presidential initiatives, whether a program is up for reauthorization, and whether a program received a rating of “results not demonstrated” in a previous PART review. Although these are reasonable criteria, a greater emphasis on selecting programs related to common or similar outcomes for review in a given year would enable decision makers to better analyze the efficacy of programs related to such outcomes, and improve the usefulness of crosscutting reviews conducted under the PART framework. Moreover, using PART assessments to review the relative contributions of similar programs to common or crosscutting goals and outcomes established through the GPRA process could improve the connection between the PART and GPRA. Developing a more strategic approach to selecting and prioritizing programs to be reassessed under the PART can also help ensure that OMB and agencies are using limited staff resources to the best advantage. Although both the PART and GPRA aim to improve the focus on program results, the different purposes and time frames they serve continue to create tensions. Some agencies have made progress over the past several years in reconciling the PART and GPRA; however, unresolved tensions can result in conflicting ideas about what to measure, how to measure it, and how to report program results. We continue to find evidence that the closed nature of the executive budget formulation process may not allow for the type of consultative stakeholder involvement in strategic and annual planning envisioned by GPRA. We remain concerned that the focus of agency strategic planning is shifting from long-term goal setting to short-term budget and planning needs. 
OMB attempted to clarify the relationship between the PART and GPRA in its PART guidance in 2004. The guidance notes that the PART strengthens and reinforces performance measurement under GPRA by encouraging the careful development of performance measures according to GPRA’s outcome-oriented standards. It also requires that the PART goals be “appropriately ambitious” and that GPRA and PART performance measures be consistent. Some agencies have made progress over the past several years in reconciling the PART and GPRA. For example, DOE and SBA officials told us that their existing GPRA measures now relate to or are generally accepted for PART purposes. Officials from DOE’s Office of Science and Labor’s Employment Standards Administration told us that OMB actively encouraged them to use their GPRA measures in the PART. HHS’s Breast and Cervical Cancer, Diabetes, and Foster Care programs as well as the Administration on Developmental Disabilities were able to use some existing GPRA measures as annual PART measures. These experiences are consistent with OMB’s view that although the PART and GPRA are often focused on different sets of measures, the characteristics of both sets should be the same (e.g., outcome-oriented, ambitious) and support OMB’s belief that it has adequately clarified the relationship between the PART and GPRA. However, some agency officials we spoke with described persistent difficulties in integrating the two processes. Some described the PART and GPRA as duplicative processes that strain agency resources; others said they conflicted. As described below, defining a “unit of analysis” and performance measures useful for both budget and performance purposes remains a challenge. 
One official noted, “there is almost an unavoidable conflict between data that is useful from a governmentwide perspective and data that is useful to program managers.” Although the Breast and Cervical Cancer and Diabetes programs had some success marrying their annual GPRA measures with short-term PART measures, they found that OMB did not consider their long-term GPRA goals to be aggressive enough; the measures were revised to meet OMB’s needs. OMB acknowledges that to improve performance and management decisions, OMB and agencies should determine an appropriate “unit of analysis” for a PART assessment. The PART guidance notes that although the budget structure is not perfect for program review in every instance, the budget structure is the most readily available and comprehensive system for conveying PART results. In response to our January 2004 report, OMB expanded its guidance on how the unit of analysis is to be determined. The guidance notes that interdependent programs or program activities can be combined for purposes of the PART as long as the aggregated unit of analysis for the PART is able to illuminate meaningful management distinctions among programs that share common goals but are managed differently. Moreover, it notes that several factors should be considered when deciding whether to combine programs, such as a program’s purpose, design, and administration; budgeting; and whether the programs support similar outcome goals. Although less of a problem than it was during our January 2004 review of the PART process, difficulties in defining a unit of analysis useful for both OMB’s budget analysis and program management purposes remain, and continue to highlight the tension between the PART and GPRA. Some agency officials acknowledged that overly disaggregating programs for the PART sometimes does not provide an understanding of how the entire program or service delivery system works before attempting to assess the performance of component pieces. 
One official described it as “putting the cart before the horse.” Some agency officials noted that difficulties can also arise when unrelated programs and programs with uneven success levels are combined for the PART. For example, OMB combined programs authorized under titles VII and VIII of the Public Health Services Act to create the Health Professions PART program. As required by the PART guidance, the entire PART program received a “no” for each question where any of the PART program components did not meet the requirements for a “yes” answer. As agency officials recognized, assessing the programs separately would have made each program’s successes and weaknesses more transparent. They felt this was important, as the individual programs have different underlying program authorities and goals and attempt to address the maldistribution of health professionals in a variety of ways. In other words, although they complement each other, they serve different needs. OMB senior officials acknowledged that combining programs could theoretically make each component’s successes and challenges less apparent, but that in this case it is hard to argue that programs authorized by different titles of the Public Health Services Act are unrelated to each other. The goals and recommendations developed in a PART review, and hence the overall quality of the review, may suffer when the unit of analysis is not properly targeted. For example, the Centers for Disease Control and Prevention’s (CDC) National Immunization Program (NIP) includes both the 317 program—which provides funding to support 64 state, local, and territorial public health immunization programs for program operations and vaccine purchase—and the Vaccines for Children (VFC) program—which provides publicly purchased vaccines to participating providers, who then give them to eligible children without cost to the provider or the parent. Only the 317 program has been assessed by the PART to date. 
In its PART assessment of the 317 Program, OMB noted that the administration was including a legislative proposal in the fiscal years 2004 and 2005 budget requests to “make it easier for uninsured children who are eligible for the CDC Vaccines for Children (VFC) program to receive immunizations in public health clinics, to improve program efficiency in the overall childhood immunization program. This proposal will expand the VFC program and result in $110 million in savings to the 317 discretionary childhood immunization program.” According to HHS officials, these proposals are outside the scope of the 317 program. They said that the 317 program’s stakeholders believe that OMB penalized the 317 program by recommending a change in that program that only the VFC program could accomplish. Program officials were unable to convince OMB to remove the VFC legislative proposal from the 317 program PART assessment summary sheet. Similarly, when OMB proposed a goal related to the global eradication of polio, HHS officials were unable to convince OMB that while global eradication of polio is a goal of the NIP overall, it is not within the scope of the individual 317 program, which is solely a domestic program. Although one of the program’s annual measures is still the “number of polio cases worldwide,” OMB responded to the agency’s concern in the most recent PART summary sheet for the 317 program, noting that “the global polio measure will be tracked by the global immunization program, which will be assessed separately in the future, and not by the 317 program.” We have long recognized the difficulties of developing useful results-oriented performance measures for programs such as those geared toward long-term health outcomes and research and development (R&D) programs. 
However, in a June 1997 report discussing the challenges of GPRA implementation, we also said that although such performance measurement efforts were difficult, they have the potential to provide important information to decision makers. We noted that agencies were exploring a number of strategies to address these issues, such as using program evaluations to isolate program impact, developing intermediate measures, employing a range of measures to assess progress, and working with stakeholders to seek agreement on appropriate measures. OMB recognizes several of these approaches as appropriate alternatives to outcome measures for PART purposes but as described below, agencies have had mixed success reaching agreement with OMB in these areas. Although these types of measurement challenges are clearly not new or unique to the PART, they are aggravated by the difficulties of developing measures useful for multiple purposes and audiences and often remain a point of friction in agencies and sometimes within OMB. For programs that can take years to observe program results, it can be difficult to identify performance measures that will provide information on the annual progress they are making toward achieving program results. This can complicate efforts to arrive at goals useful to multiple parties for multiple purposes. For example, CDC officials told us that long-term health outcome measures favored by the PART are often not as useful to them as data on preventative measures, which tell managers where more efforts are needed and allow them to respond more quickly. Programs where the federal government is one among many actors present similar challenges—when an outcome is beyond the scope of any one program, any changes made to a single federal program will not necessarily have an immediate effect. For example, for the Diabetes program OMB expressed interest in a long-term health outcome measure that tracks changes in national blindness and amputation rates. 
Program officials said that these types of changes generally cannot be attributed solely to the Diabetes program because it serves a relatively small portion of the population and works with many partners. The Breast and Cervical Cancer program—which screens low-income women and provides public education, quality assurance, surveillance, partnerships, and evaluation regarding breast cancer screening among low-income women—has similar concerns about OMB’s interest in linking the program to changes in the overall mortality rates of cancer patients. Agency experiences with the PART’s emphasis on efficiency measures presented a more varied picture. Some programs had success by defining efficiency in terms of program administration rather than program operations. For example, HHS’s foster care officials said that because children’s safety could have been compromised by moving children too quickly out of foster care, OMB agreed that an administrative efficiency measure would be appropriate instead of the type of outcome-oriented efficiency measure cited above. DOE officials told us that the Strategic Petroleum Reserves program is well suited to the PART’s view of “efficiency” because it can show (1) how every dollar from its budget is spent, (2) that it is spent efficiently, and (3) the program results related to spending those dollars. In other cases, differences of opinion about efficiency measures highlighted the challenges that can result from the difficult but potentially useful process of comparing the costs of programs related to similar goals. For example, DOL agency officials told us that since Job Corps is a self-paced program, participants are permitted to remain in the program for up to 2 years (or up to 3 years with special approval). They consider this to be adequate time for students to complete their education and/or vocational training, which, as studies indicate, generally results in higher wages. 
DOL agency officials noted that since costs per participant increase the longer a student remains in the program, Job Corps appears less “efficient” compared with other job training programs, which reflects poorly in Job Corps’ PART assessment. They suggested cost per student day as a cost measure with fewer inherent perverse incentives, but OMB did not accept the suggestion. Similarly, DOL agency officials explained that whereas Job Corps’ current GPRA measures track the percentage of job/education placements for program exiters who graduate, the common measures—which OMB uses to gauge performance across all job-training programs—track entered employment/education for all program exiters, regardless of their graduate status at exit. Although there are significant differences in the time frames, the placement criteria, and the pool of participants for these measures, these officials told us that the measures are treated as interchangeable in the PART review. In other words, the same benchmark set for the “graduate placement” GPRA indicator is also used for the “placement of all participants” common measure indicator. Consequently, agency officials said, Job Corps is in the position of either (1) failing to meet the common measure goal or (2) being labeled unambitious by OMB if the goal is changed to be attainable yet—in DOL’s view—still aggressive. Either way, the agency officials said that their PART assessment is negatively affected. Several R&D officials noted that prior to the PART, there had been a collaborative effort to develop OMB’s R&D investment criteria to better assess the value of R&D programs. However, these managers believed that the investment criteria—which R&D program managers find useful to manage their programs—have been overshadowed by the PART—which OMB finds useful in its budget development process. 
Part of the trouble seems to be that the PART explicitly requires all programs to have or be developing an efficiency measure, while the investment criteria do not. The investment criteria focus on improving the management of basic research programs. One agency official noted that this is a management efficiency question, not a cost question; therefore it should be captured in the PART’s management section instead of the results section. Such a change could affect a program’s PART score because the management section represents 20 percent of the total weighted score whereas the results section represents 50 percent of the total weighted score. In the investment criteria published with the 2004 PART guidance, OMB noted that it had worked to clarify the implementation of the investment criteria, stating that the investment criteria are broad criteria for all R&D programs while the PART is used to determine compliance with the investment criteria at the program level. OMB also recognized that while programs must demonstrate an ability to manage in a manner that produces identifiable results, taking risks and working toward difficult-to-attain goals are important aspects of good research management, especially for basic research. OMB further noted that the intent of the investment criteria is not to drive basic research programs to pursue less risky research that has a greater chance of success, and that the administration will focus on improving the management of basic research programs. Disagreements over when and how to revise and communicate information about federal programs further highlight tensions between OMB and agencies. 
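The score sensitivity described above can be made concrete with a small hypothetical sketch: the management (20 percent) and results (50 percent) weights come from the report, while the purpose and planning weights and all raw section scores below are invented for illustration.

```python
# Hypothetical illustration of how placing the same weak answer in different
# PART sections changes the weighted total. The management (0.20) and results
# (0.50) weights are stated in the report; the purpose/planning weights and
# every raw section score here are assumptions for illustration only.
WEIGHTS = {"purpose": 0.20, "planning": 0.10, "management": 0.20, "results": 0.50}

def weighted_score(section_scores):
    """Combine per-section scores (0-100) into a weighted total."""
    return sum(WEIGHTS[s] * section_scores[s] for s in WEIGHTS)

# A weak efficiency answer costing 20 points taken in the results section...
scores_in_results = {"purpose": 90, "planning": 80, "management": 85, "results": 60}
# ...versus the same 20-point penalty taken in the management section.
scores_in_mgmt = {"purpose": 90, "planning": 80, "management": 65, "results": 80}

print(weighted_score(scores_in_results))  # penalty weighted at 50 percent
print(weighted_score(scores_in_mgmt))     # penalty weighted at 20 percent
```

Under these assumed numbers, the same 20-point penalty costs 10 weighted points in the results section but only 4 in the management section, which is why program officials cared where an efficiency measure was captured.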
OMB Circular A-11 states that the performance targets included in the PARTs and congressional justifications need to be updated to reflect the budgetary resources and associated performance targets decided for the President's budget, and that budget and performance reports should identify changes to performance goals that primarily stemmed from assessing actual performance. However, several agency officials reported problems with adjusting or retiring goals. For example, agency officials told us that sometimes goals need to be retired or consolidated, and cited instances in which they were not permitted to do so even after intense negotiation with OMB. According to these officials, OMB’s concern was that changing goals disrupts the ability to observe historical trends, making it hard for OMB to measure against a baseline. We recognize the value of baseline information and that changing goals and measures can limit the ability to observe trends over time. However, maintaining the same goals and measures is not always possible, and revised performance information can further enhance performance assessments. As we have previously reported, successful organizations base their strategic planning to a large extent on the interests and expectations of their stakeholders, since they recognize that stakeholders will have a lot to say in determining whether their programs succeed or fail. Congress, the executive branch, and other stakeholders may all strongly disagree about a given agency’s missions and goals—in fact, full agreement among stakeholders on all aspects of an agency’s efforts is relatively uncommon because stakeholders’ interests can differ significantly. Still, stakeholder involvement is important to help agencies ensure that their efforts and resources are targeted at the highest priorities. Just as important, involving stakeholders—especially Congress—in strategic planning efforts can help create a basic understanding among stakeholders of the competing demands that confront most agencies. 
Because of Congress’s constitutional power to create and fund programs, congressional involvement is indispensable to defining each agency’s mission and establishing its goals. Some tension between the level of stakeholder involvement in the development of performance measures in the GPRA strategic planning process and the process of developing performance measures for the PART is inevitable. Compared to the relatively open-ended GPRA process, any executive budget formulation process is likely to seem closed. An agency’s communication with stakeholders, including Congress, about goals and measures created or modified during the formulation of the President’s budget is likely to be less than during the development of the agency’s own strategic or performance plan. Although OMB’s PART guidance discusses the need to integrate the PART and GPRA, we continue to find evidence that the closed nature of the executive budget formulation process may not allow for the type of stakeholder involvement in strategic and annual planning envisioned by GPRA. Beginning with the fiscal year 2005 budget submission, OMB required agencies to submit a performance budget, which is expected to satisfy all statutory requirements of the GPRA annual performance plan. It is generally expected to include the PART performance goals (including annual and long-term performance measures with targets and time frames) for programs that have been assessed by the PART. The PART guidance recognizes stakeholder involvement in strategic planning as required by GPRA by saying agencies must consult with Congress and solicit and consider the views of interested and potentially affected parties. 
At the same time, the executive budget formulation process—to which the PART belongs—is “predecisional.” This means that information from the annual budget process, including information required in agencies’ annual GPRA plans, is embargoed within the executive branch until the President’s budget request is transmitted to Congress. Agencies may therefore be prevented from consulting with their stakeholders when developing annual and long-term goals and measures. Some of our case study agencies described similar experiences. Their interaction with key stakeholders was limited. Sometimes they had to present new or revised program goals and measures to their stakeholders after the fact, and in some cases stakeholders disagreed with the goals or had no choice but to accept them. Discussions of how performance information is being used are important because GPRA performance reports are intended to be one of Congress’s major accountability documents. As such, these reports are to help Congress assess agencies’ progress in meeting goals and determine whether planned actions will be sufficient to achieve unmet goals, or, alternatively, whether the goals should be modified. Because predecisional performance information must be excluded from the reports, their potential as a source of information to Congress is limited. However, this embargo conflicts with OMB’s own reporting requirements regarding the issuance of agency Performance and Accountability Reports (PAR). OMB’s Circular A-11 guidance notes that the transmittal date for an agency's PAR is November 15, and that because this precedes the transmittal of the President's budget, an agency may need to omit certain “privileged” information from its PAR. 
As described in Circular A-11, this privileged information includes specifically required elements of agency PARs, including an evaluation of performance for the current fiscal year; schedules for achieving established performance goals; and, if a performance goal is impractical or infeasible, an explanation of why that is the case and what action is recommended. However, OMB senior officials told us that the only information that cannot be included in a PAR is that related to target levels of funding and/or policy changes. While the PART has been useful to OMB to achieve its own budget formulation goals, OMB acknowledges the need to work to gain congressional acceptance of the tool and its results. In response to our January 2004 report on the first year of implementing the PART, OMB said that it was working to “generate, early in the PART process, an ongoing, meaningful dialogue with congressional appropriations, authorization, and oversight committees about what they consider to be the most important performance issues and program areas warranting review.” Although OMB uses a variety of methods to communicate the PART assessment results, congressional committee staff said these methods have not facilitated this early consultation on the PART. An absence of early consultation has contributed to several areas of disagreement between OMB and Congress about this executive branch tool, resulting in most congressional staff we spoke with not using the PART information. Most congressional staff reported that they would more likely use the PART results to inform their deliberations if OMB (1) consulted them early in the PART process regarding the selection and timing of programs to assess, (2) explained the methodology and evidence used or to be used to assess programs, and (3) discussed how the PART information can best be communicated and leveraged to meet their needs. 
Although OMB will be less likely to demonstrate the value of the PART beyond executive branch decision making without early consultation, OMB has had some success in engaging Congress when it has communicated selected PART results through legislative proposals and other traditional methods that clearly signal an executive branch priority. Although Congress currently has a number of opportunities to provide its perspective on specific performance issues and performance goals, opportunities also exist for Congress to enhance its institutional focus to enable a more systematic assessment of key programs and performance goals. OMB uses a variety of methods to communicate PART results both to the public and to Congress, primarily through the President’s budget request documents, OMB’s Web site, and meetings with some congressional staff. For example, OMB provides the single, bottom-line PART ratings in the Analytical Perspectives volume of the President’s budget request, while the one-page PART summary sheets are available on a CD-ROM accompanying the President’s budget request or on OMB’s Web site. The Web site also contains the detailed supporting worksheets as well as other information about the tool itself. Certainly, OMB has provided more extensive information on program performance than in the past. OMB’s PART guidance also directed agencies to address the PART findings—from both current and prior years’ PARTs—in their fiscal year 2006 budget submissions to OMB and budget justifications to Congress, as well as in testimony to Congress, in particular when a key budget or policy recommendation was influenced by a PART analysis. Agency witnesses testifying before the appropriations subcommittees did in fact include the results of the PART assessments in their written statements, and in some instances the PART was discussed during the “Q&A” portions of these hearings. 
In addition to requiring agencies to inform Congress about the PART, OMB offered to brief congressional committees about the PART in 2004. According to OMB, packages including the PART summary sheets for programs that fell within each committee’s jurisdiction and a list of the programs OMB planned to review for the fiscal year 2006 budget request were sent to all relevant House and Senate committees. An OMB senior official also said he followed up on these packages with phone calls, but received very little response. His records show that between February 2005 and June 2005 there were about 21 congressional meetings (bicameral and bipartisan) about the PART. In February 2005, upon the release of the Major Savings and Reforms in the President’s 2006 Budget document, OMB held what it termed a briefing on the PART, inviting all appropriations staff. OMB has set an ambitious benchmark for involving Congress in the PART process. In recent testimonies, OMB’s Deputy Director for Management stated that OMB’s responsibility is to convince Congress that the PART assessments have correctly identified whether a program is working and, if not, what to do about it. OMB states that in the past 3 years it has conducted 607 PART assessments (about 60 percent of federal programs) that have generated nearly 1,800 recommendations. However, it is not clear that the PART has had any significant impact on congressional authorization, appropriations, and oversight activities to date. Moreover, it is unlikely that performance information will be used unless it is believed to be credible and reliable and reflects a consensus about performance goals among a community of interested parties. The PART has likely required a significant additional commitment of OMB’s and agencies’ resources, but demonstrating the value of the assessments beyond the executive branch will require further efforts. 
According to OMB senior officials, OMB’s efforts generally focused on providing an overview of the PART process or communicating program assessment results to Congress rather than seeking early consultation about how the tool can best serve congressional needs. For example, upon the release of the Major Savings and Reforms in the President’s 2006 Budget document, OMB said it invited leadership, appropriations, and budget committee staff to a presentation about it. However, some subcommittee staff said that the presentation was primarily intended to provide the Major Savings document that proposed program funding reductions and terminations, some of which were based on the PART assessments. Although some subcommittee staff said that they met with OMB and that OMB officials asked for their input about the PART, they did not see subsequent evidence that their views had been considered. Overall, most committee staff said that OMB generally did not involve them in the PART process. The need for early consultation is clearly demonstrated by the strong interest House appropriators expressed in being consulted early in the PART process about the programs, activities, or projects that OMB intends to assess for the fiscal years 2007 and 2008 budget requests, including approval of the methodology to be used to conduct each assessment. Congress went so far as to express these concerns in committee report language related to OMB’s fiscal year 2006 appropriations. Similar views were also echoed by many appropriations and authorizing committee staff we spoke with. As we have noted, some tension about the amount of stakeholder involvement in the internal deliberations surrounding the development of the PART measures and the broader consultations more common to the GPRA strategic planning process is inevitable. Compared to the relatively open-ended GPRA process, any executive budget formulation process is likely to seem closed. 
However, if the PART is to be accepted as something more than an executive branch budget formulation tool, congressional understanding and acceptance of the tool and its analysis will be critical. A lack of early consultation has contributed to both congressional skepticism about the PART and to several areas of disagreement between OMB and Congress. As a result, most congressional staff we spoke with do not use PART information. Many committee staff we spoke with expressed frustration with the lack of available detail on how OMB arrived at its ratings of a program’s performance. Many had concerns about the goals and measures used to assess program performance. Some subcommittee staff questioned the “unit of analysis” for the purposes of the PART as well as the design of the tool itself. The PART is OMB’s tool of choice for assessing program performance and as such serves the administration’s needs. However, it is only one source of information available to congressional committees. Several committee staff were frustrated with the lack of detail provided on the PART summary sheets as to why a program was rated a certain way. They were unlikely to accept conclusions about a program’s performance without seeing the evidence used to support them, particularly when the rating was contrary to what they believed to be true about a program. For example, some appropriations subcommittee staff expressed concerns about the “ineffective” PART rating given to the Health Professions program, which assists in paying for health professionals’ education in exchange for their working in underserved areas. They said OMB could have made a stronger case for this rating if it had provided information showing that the program is unsuccessful in placing participating health professionals in underserved areas. 
In general, many committee staff we spoke with said they do not use the Web site containing the detailed supporting worksheets, primarily because finding this information on the Web site is too time-consuming, or the Web site is difficult to use. Although the detailed supporting worksheet for the Health Professions program notes that the agency has not conducted evaluations necessary to measure the program’s performance—thus a factor for the “ineffective” rating—OMB’s explanation of this rating is not clearly stated on the one- page summary sheet. Several committee staff said they wanted detailed information or criteria used to evaluate the program so that they could reach their own conclusions about program effectiveness. Some subcommittee staff felt that if OMB intends to request funding reductions or program eliminations based on PART assessments, a special burden exists to prove that these programs are ineffective. In other cases, committee staff remained unconvinced about the PART ratings and the evidence used to support them. House appropriations subcommittee staff said that the Agricultural Credit Insurance Fund— Direct Loans, which they had held hearings on, was rated “moderately effective;” however, the subcommittee staff questioned the basis on which this program was given this rating since the agency has written off many of its loans. Committee staff also cited a PART assessment that stated that SBA’s 7(a) loan program and its 504 program overlap because both provide long-term financing for similar borrowers. The committee staff disagreed with this assessment. A lack of consultation early in the PART process has contributed to congressional committee staff not agreeing with or not finding useful OMB’s choice or use of certain measures to determine the effectiveness of certain programs. Some committee staff reported that not all programs are well suited to being assessed by a tool like the PART. 
For example, a House subcommittee held a hearing in March 2004 that addressed concerns about defining acceptable PART measures for environmental research programs. Hearing witnesses noted that OMB permitted some research programs to use output or process measures while it held similar programs to stricter standards, requiring them to use outcome measures. During a recent House Budget Committee hearing on performance budgeting, an OMB senior official agreed with committee members that the PART needs a set of goals and measures useful to OMB and Congress. He added that consulting Congress early in the PART process, including discussions about how to make the PART useful for Congress, can better take place now that the PART has generated a critical mass of performance information. Some congressional staff were troubled by OMB’s definition of certain programs—the “unit of analysis”—used for the PART assessments. They noted that what was useful for congressional budget deliberations sometimes differed from the unit of analysis OMB used to assess program performance in the PART. For example, appropriations subcommittee staff said that they often look at the performance of a particular project in determining how much funding to provide it. When OMB combines projects that are only loosely related by their authorizing statutes and rates them all as “ineffective” or “effective,” this arrangement does not help Congress make trade-offs among those projects. A few committee staff we talked with said that they use the PART information as one of many sources of information about program performance, including inspectors general reports, agency-commissioned evaluations, National Academy of Sciences reports, GAO reports, and National Academy of Public Administration reports. Several indications of congressional attention to the PART results were reflected in recent appropriations committee reports. 
For example, a House Appropriations Committee report on fiscal year 2006 appropriations cites a PART assessment stating among other things that performance measures have still not been developed and that effects on Pacific salmon stocks are still unknown. The same committee applauds the Department of State’s educational and cultural exchange programs (ECA), noting that “ECA received from the Office of Management and Budget Program Assessment Rating Tool ratings of 98 percent and 97 percent, the highest in the State Department and in the top one-percent in the Executive Branch.” Another House Appropriations Committee report for fiscal year 2006 noted that DOE’s natural gas and petroleum/oil research and development programs received a poor PART score. In response, the committee encouraged the department to develop a strategic planning process that “demonstrates a clear path of investment that will yield demonstrable results, and better reflect the successes of these programs.” The PART’s focus on outcome measures may not fully appreciate congressional needs for other types of measures, such as output and workload information. Committee staff said they consider a variety of performance information such as outcome, output, and input measures to help gauge program performance. We have previously reported that congressional staff are interested in using a diverse array of information to address key questions on program performance, such as recurring information on spending priorities within programs; the quality, quantity, and efficiency of program operations; as well as the populations served or regulated. Our recent work examining performance budgeting efforts at both the state and federal levels also bears this out. We found that appropriations committees consider workload and output measures important for making resource allocation decisions. 
Workload and output measures lend themselves to the budget process because workload measures, in combination with cost-per-unit information, can help relate appropriation levels to a desired level of service. Despite its efforts, OMB has had limited success in engaging Congress in the PART process. For example, in June an OMB senior official testified that the PART had some effect on congressional authorizations, appropriations, or oversight, but that OMB could clearly do a better job convincing Congress of the usefulness of performance information generated by the PART. Many majority and minority staff of House and Senate committees we talked with said that OMB should communicate the PART results in a way that meets individual committee needs. Most congressional committee staff said they would be more likely to use the PART results relevant to their committee responsibilities if OMB consulted with them early in the PART process and made PART information more useful for their work. They said it is important that such discussions also address the performance information congressional committees find most useful. According to some staff, consulting them about congressional program priorities for PART assessments could be useful for linking these assessments to the authorization and appropriations processes by informing OMB about the committees’ planned legislative agenda and informing Congress about programs OMB plans to assess in the near future. In discussing options for increasing congressional staff’s access to performance information, we have previously noted that improved communication could go a long way to ensuring that congressional needs are understood and, where feasible, met. While some House and Senate committee staff stated that it would be difficult to conveniently time these consultations for both OMB and congressional staff, most agreed that they were a necessary step if Congress were to be able to use the PART to inform its deliberations. 
However, several majority and minority staff questioned how OMB could provide policy-neutral assessments given its institutional role. A couple of congressional subcommittee staff suggested that for any assessment to be considered credible it would have to be conducted or reviewed by an independent entity, such as a commission or a nonpartisan organization. OMB has sometimes been able to engage Congress when it has communicated selected PART results through traditional means of signaling executive branch priorities, such as legislative proposals. For example, as discussed previously, the administration recently proposed to consolidate 18 federal CED programs, including the Community Development Block Grant (CDBG), into a single block grant, citing as one factor the low PART scores received in a crosscutting review of CED programs. The proposal led to hearings by several committees, involving administration officials, programs’ stakeholders, and experts. Although the full House and Senate Appropriations Committees rejected the President's proposal to transfer the CDBG program to the Department of Commerce and instead kept the program at the Department of Housing and Urban Development, the House and Senate reduced the funding level for the CDBG formula grants by $250 million and $347 million, respectively, from last year's level. Congress has initiated other hearings in which the PART has been a central subject of discussion. For example, OMB proposed funding cuts for the Environmental Protection Agency’s science research grant programs (STAR) for the fiscal year 2005 budget because, according to a PART assessment, parts of STAR did not have adequate outcome measures and therefore could not demonstrate results. The Subcommittee on Environment, Technology, and Standards, House Committee on Science, held a hearing to discuss competing claims about whether these programs were contributing to their stated goals. 
The fact that Congress has held such hearings indicates that certain PART reviews have captured congressional attention and contributed to the policy debate. As we have previously noted, success in performance budgeting should not be defined only by its effect on funding decisions but by the extent to which it changes the kinds of questions raised in Congress and executive agencies. That is, performance budgeting helps shift the focus of congressional debates and oversight activities by changing the agenda of questions asked. Congress has a number of opportunities to provide its perspective on specific performance issues and performance goals—when it establishes or reauthorizes a new program, during the annual appropriations process, and in its oversight of federal operations. Opportunities also exist for Congress to enhance its institutional focus to enable a more systematic assessment of key programs and performance goals. For example, identifying the key oversight and performance goals that Congress wishes to set for its own committees and for the government as a whole, perhaps for major missions such as budget functions, could be useful. Collecting the “views and estimates” of authorization and appropriations committees on priority performance issues for programs under their jurisdiction and working with such crosscutting committees as the House Committee on Government Reform and the House Committee on Rules could be an initial step. Such a process might not only inform and better focus congressional deliberations, but could allow for more timely input into the PART. It is important that Congress take full advantage of the benefits arising from the reform agenda under way in the executive branch. As we have suggested in the past, one approach to achieving the objective of enhancing congressional oversight is to develop a congressional performance resolution by modifying the current congressional budget resolution, which is already organized by budget function. 
Ultimately, what is important is not the specific approach or process, but rather the intended result of helping Congress better promote improved fiscal, management, and program performance through broad and comprehensive oversight and deliberation. The federal government is in a period of profound transition and faces an array of challenges and opportunities to enhance performance, ensure accountability, and position the nation for the future. A number of overarching trends—including the nation’s long-term fiscal imbalance— drive the need to reexamine what the federal government does, how it does it, who does it, and how it gets financed. Performance budgeting holds promise as a means for facilitating a reexamination effort and bringing the panoply of federal activities in line with the demands of today’s world. It can help enhance the government’s capacity to assess competing claims for federal dollars and has the potential to better inform the budget debate. PMA and its related initiatives, including the PART, demonstrate the administration’s commitment to improving federal management and performance. Calling attention to successes and needed improvements is certainly a step in the right direction. The PART has helped perpetuate and sustain the performance culture ushered in by the management reforms of the 1990s. The PART has lent support to internal agency initiatives and— whatever criticism may be made regarding the value of scorecards and bottom-line ratings—has highlighted the need for improvements and motivated agencies to do more. There is no doubt that creating a closer link between the resources expended on programs and the results we expect from them is an important goal. The PART made a significant contribution by demonstrating one way to make a direct connection between performance and resource considerations. 
However, without truly integrating the PART and GPRA in a way that considers the differing needs of the budget formulation and strategic planning processes and their various stakeholders, OMB’s ability to strengthen and further the performance- resource linkages for which GPRA laid the groundwork will continue to be hampered. Successful integration of the inherently separate but interrelated GPRA strategic planning and the PART performance budgeting processes is predicated on (1) ensuring that the growing supply of performance information is credible, useful, reliable, and used; (2) increasing the demand for this information by developing goals and measures relevant to the large and diverse community of stakeholders in the federal budget and planning processes; and (3) taking a comprehensive and crosscutting approach. By linking performance information to the budget process OMB has provided agencies with a powerful incentive for improving data quality and availability and has increased the potential for using performance information to inform the resource allocation process. To be effective, however, this information must not only be timely—to measure and affect performance—and reliable—to ensure consistent and comparable trend analysis over time and to facilitate better performance measurement and decision making—but also useful and used in order to make more informed operational and investing decisions. Improvements in the quality of performance data and the capacity of federal agencies to perform program evaluations will require sustained commitment and investment of resources. However, evaluations can be very costly; opportunities exist to carefully target federal evaluation resources such that the American people receive the most benefit from each evaluation dollar spent. Moreover, the question of investment in improved evaluation capacity is one that must be considered in budget deliberations both within the executive branch and in Congress. 
Importantly, it is critical that budgetary investments in this area be viewed as part of a broader initiative to improve the accountability and management capacity of federal agencies and programs. Some program improvements related to the PART’s success—such as improving program outcomes, taking steps to address PART findings, improving program management, and becoming more efficient—can often come solely through executive branch action, but for the PART to meet its full potential the assessments it generates must also be meaningful to and used by Congress and other stakeholders. For the PART to result in congressional action on the PART’s funding and policy recommendations as OMB desires, the PART must hold appeal beyond the executive branch. The PART was designed for and is used in the executive branch budget preparation and review process; as such, the goals and measures used in the PART must meet OMB’s needs. Because OMB has not developed an effective strategy for connecting the PART process to congressional needs, Congress generally does not use the PART in its deliberations. Without developing an effective strategy for obtaining and acting on congressional views on what to measure, how to measure it, and how to best present this information to a congressional audience, it is more likely that PART will remain an executive branch exercise largely ignored in the authorization, appropriations, and oversight processes. Infusing a performance perspective into budget decisions may only be achieved when we discover ways to reflect both the broader planning perspective that can add value to budget deliberations and foster accountability in ways that Congress considers appropriate for meeting its roles, responsibilities, and interests. Congress also can facilitate the use of performance information by enhancing its focus on performance in budget, authorizing, appropriations, and oversight processes. 
Looking forward, opportunities exist to develop a more strategic approach to selecting and prioritizing areas for assessment under the PART process. Targeting PART assessments based on such factors as the relative priorities, costs, and risks associated with related clusters of programs and activities addressing related strategic and performance goals not only could help ration scarce analytic resources but also could focus decision makers’ attention on the most pressing policy and program issues. Moreover, key outcomes in areas ranging from low income housing to food safety to counterterrorism are addressed by a wide range of discretionary, entitlement, tax, and regulatory approaches that cut across a number of agencies. Some tax expenditures amount to hundreds of billions of dollars of annual expenditures—the same order of magnitude as total discretionary spending, yet relatively little is known about the effectiveness of tax incentives in achieving the objectives intended by Congress. Broadening the PART to assess complete portfolios of tools used to achieve key federal outcomes is absolutely critical. A crosscutting approach could also facilitate the use of the PART assessments to review the relative contributions of similar programs to common or crosscutting goals and outcomes established through the GPRA process. As we have previously reported, effective congressional oversight can help improve federal performance by examining the program structures agencies use to deliver products and services to ensure that the best, most cost-effective mix of strategies is in place to meet agency and national goals. 
While Congress has a number of opportunities to provide its perspective on performance issues and performance goals, such as when it establishes or reauthorizes a program, during the annual appropriations process, and in its oversight of federal operations, a more systematic approach could allow Congress to better articulate performance goals and outcomes for key programs of major concern. Such an approach could also facilitate OMB’s understanding of congressional priorities and concerns and, as a result, increase the usefulness of the PART in budget deliberations. To facilitate an understanding of congressional priorities and concerns, Congress should consider the need for a strategy that includes (1) establishing a vehicle for communicating performance goals and measures for key congressional priorities and concerns; (2) developing a more structured oversight agenda to permit a more coordinated congressional perspective on crosscutting programs and policies; and (3) using such an agenda to inform its authorization, oversight, and appropriations processes. We make three recommendations to OMB. We recommend that the Director of OMB take the following actions: Ensure that congressional leadership and key committees are given an opportunity to provide input early in the PART process on the performance issues and program areas they consider to be the most important and in need of review. Seek input from congressional committees on the performance information they find useful and how that information could best be presented to them. Target individual programs to be reassessed based on factors such as the relative priorities, costs, and risks associated with clusters of related programs, and in a way that reflects the congressional input described above. In commenting on a draft of this report, OMB generally agreed with our findings, conclusions, and recommendations. 
OMB outlined several actions it is taking to address some of the issues raised in the report, including implementing information technology solutions to make application of the PART less burdensome and more collaborative. OMB also suggested some technical changes throughout the report that we have incorporated as appropriate. OMB’s comments appear in appendix IV. We also received technical comments on excerpts of the draft from the Departments of Labor and Health and Human Services, which are incorporated as appropriate. We are sending copies of this report to the Director of the Office of Management and Budget and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Susan Irving at (202) 512-9142 or irvings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff making key contributions to this report are listed in appendix V.

To address the first two objectives, we reviewed the Office of Management and Budget’s (OMB) materials on the implementation, application, and revision of the Program Assessment Rating Tool (PART) for calendar years 2002 through 2004. We also interviewed OMB branch chiefs and OMB staff on the Performance Evaluation Team (PET). The PET’s role is to provide guidance to budget examiners and help ensure consistent application of the PART across OMB offices. To better understand OMB’s experience with crosscutting reviews, we interviewed OMB staff responsible for coordinating the Community and Economic Development and Rural Water crosscutting reviews conducted for the fiscal year 2006 President’s budget request.
To obtain agency perspectives on the relationship between the PART and the Government Performance and Results Act of 1993 (GPRA) and their interactions with OMB concerning that relationship, we interviewed department and agency officials, including senior managers, and program, planning, and budget staffs at (1) the Department of Health and Human Services (HHS), (2) the Department of Energy (DOE), (3) the Department of Labor (DOL), and (4) the Small Business Administration (SBA). We also interviewed officials from these departments and agencies concerning their perspectives and activities in response to the PART recommendations and the effects of implementing those recommendations on operations and results. We selected these three departments and one independent agency for a number of reasons. Collectively, they offered examples of all seven PART program types (e.g., block/formula grants, competitive grants, direct federal, and research and development) for review. These examples covered about a fifth of all the programs subject to the PART as of 2004 and thus could provide a broad-based perspective on how the PART was applied. We also chose to return to HHS and DOE—two of the departments included in our previous study on the PART. To broaden our coverage of agency perspectives we selected DOL and SBA because they had received a “green” score on their President’s Management Agenda Executive Branch Management Scorecard for the budget and performance integration initiative and were considered good candidates for showing progress. Our selection of these four agencies was also influenced by our intent to integrate this work with our related work examining progress in addressing the PART program evaluation recommendations. Approximately half of the evaluation recommendations in the 2002 PART were encompassed in our four case selections. 
As part of our work on the second objective, we also performed various analyses of the PART recommendations made in all 3 years to discern possible changes or trends in recommendations over time and relationships between the type of recommendations made, type of program, overall rating, total PART score, and answers to selected PART questions. To do these analyses, we classified the recommendations OMB made into the same four categories we used in our prior report, i.e., program assessment, program design, program management, and funding. We employed a slightly modified classification procedure from our previous review, which included the addition of an “other” category for recommendations that did not fit within any of the four categories. We then combined the results of our recommendation classifications with selected data we downloaded from PART summaries and worksheets posted on OMB’s PART Web sites, data developed for our previous report of the 2002 PART, and a data set provided by OMB of programs covered in the 2004 PART. In addition, we also examined relevant OMB and agency documents to help determine how recommendations are tracked and their impact evaluated by OMB and the selected agencies. To address our third objective of examining the steps OMB has taken to involve Congress in the PART process, we interviewed OMB and agency officials and asked questions about the steps OMB and agencies have taken to involve Congress in the PART process or in using the results of the PART. To obtain documented instances of Congress’ uses and views of the PART, we interviewed House and Senate committee staff (minority and majority) for the authorizing and appropriations subcommittees with jurisdiction over our selected agencies as well as OMB and officials from the four selected agencies. Finally, we reviewed fiscal years 2005 and 2006 House and Senate congressional hearing records and reports as well as conference reports for mentions of the PART. 
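The classification-and-merge analysis described above amounts to a simple join keyed by program. The sketch below is purely illustrative: the five category names come from the report, but the program names, ratings, and scores are hypothetical stand-ins for the data GAO downloaded from OMB's PART summaries.

```python
# Illustrative sketch only: classify each PART recommendation into one of the
# five categories named in the report, then join the classifications with
# selected data drawn from PART summaries, keyed by program name.
# The program names, ratings, and scores below are hypothetical.

CATEGORIES = {"program assessment", "program design",
              "program management", "funding", "other"}

# Hypothetical recommendation classifications (program -> category).
classified = {
    "Program A": "program assessment",
    "Program B": "funding",
}

# Hypothetical data downloaded from PART summaries (program -> rating, score).
part_data = {
    "Program A": {"rating": "Results Not Demonstrated", "score": 55},
    "Program B": {"rating": "Moderately Effective", "score": 74},
}

# Merge the two data sets on program name, keeping only valid categories.
merged = {
    name: {**part_data[name], "category": category}
    for name, category in classified.items()
    if category in CATEGORIES and name in part_data
}

for name, row in sorted(merged.items()):
    print(name, "|", row["category"], "|", row["rating"], "|", row["score"])
```

A merge of this kind makes it possible to cross-tabulate recommendation type against program type, overall rating, and total PART score, which is the trend analysis the report describes.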
In addition, where possible, we corroborated testimonial evidence with documentary evidence of OMB’s and agencies’ strategies for involving Congress as well as evidence of collaboration and coordination, such as planning documents, briefing material, or other evidence of contact with Congress. We did not independently verify the PART assessments as posted on OMB's Web sites; however, we did take several steps to ensure that we reliably downloaded and combined the various data files with our recommendation classifications. Our steps included (1) having the computer programs we used to create and process our consolidated dataset verified by a second programmer; (2) performing various edit checks on the data; and (3) checking selected computer-processed data elements back to source files for a random sample of programs and also for specific programs identified in our analyses or through edit checks. We determined that the data were reliably downloaded and combined, and sufficient for the purposes of this report. While our summary analyses include all or almost all programs subject to the PART for the years 2002 to 2004 or all or almost all programs within a specified subset of programs (e.g., program type, specific year, specific rating), the information obtained from OMB, congressional and agency officials, as well as documentary material from the selected agencies is not generalizable to the PART process for all years or all programs. We conducted our audit work from January 2005 through August 2005 in accordance with generally accepted government auditing standards. OMB provided written comments on this draft that are reprinted in appendix IV.

Section I: Program Purpose & Design (Yes, No, N/A)
1. Is the program purpose clear?
2. Does the program address a specific and existing problem, interest, or need?
3. Is the program designed so that it is not redundant or duplicative of any other Federal, State, local or private effort?
4. Is the program design free of major flaws that would limit the program’s effectiveness or efficiency?
5. Is the program design effectively targeted, so that resources will reach intended beneficiaries and/or otherwise address the program’s purpose directly?

Section II: Strategic Planning (Yes, No, N/A)
1. Does the program have a limited number of specific long-term performance measures that focus on outcomes and meaningfully reflect the purpose of the program?
2. Does the program have ambitious targets and timeframes for its long-term measures?
3. Does the program have a limited number of specific annual performance measures that can demonstrate progress toward achieving the program’s long-term goals?
4. Does the program have baselines and ambitious targets for its annual measures?
5. Do all partners (including grantees, sub-grantees, contractors, cost-sharing partners, and other government partners) commit to and work toward the annual and/or long-term goals of the program?
6. Are independent evaluations of sufficient scope and quality conducted on a regular basis or as needed to support program improvements and evaluate effectiveness and relevance to the problem, interest, or need?
7. Are Budget requests explicitly tied to accomplishment of the annual and long-term performance goals, and are the resource needs presented in a complete and transparent manner in the program’s budget?
8. Has the program taken meaningful steps to correct its strategic planning deficiencies?

Specific Strategic Planning Questions by Program Type
RG1. Are all regulations issued by the program/agency necessary to meet the stated goals of the program, and do all regulations clearly indicate how the rules contribute to achievement of the goals?
Capital Assets and Service Acquisition Programs
CA1. Has the agency/program conducted a recent, meaningful, credible analysis of alternatives that includes trade-offs between cost, schedule, risk, and performance goals and used the results to guide the resulting activity?
RD1. If applicable, does the program assess and compare the potential benefits of efforts within the program and (if relevant) to other efforts in other programs that have similar goals?
RD2. Does the program use a prioritization process to guide budget requests and funding decisions?

Section III: Program Management (Yes, No, N/A)
1. Does the agency regularly collect timely and credible performance information, including information from key program partners, and use it to manage the program and improve performance?
2. Are Federal managers and program partners (including grantees, sub-grantees, contractors, cost-sharing partners, and other government partners) held accountable for cost, schedule and performance results?
3. Are funds (Federal and partners’) obligated in a timely manner and spent for the intended purpose?
4. Does the program have procedures (e.g., competitive sourcing/cost comparisons, IT improvements, appropriate incentives) to measure and achieve efficiencies and cost effectiveness in program execution?
5. Does the program collaborate and coordinate effectively with related programs?
6. Does the program use strong financial management practices?
7. Has the program taken meaningful steps to address its management deficiencies?

Specific Program Management Questions by Program Type
CO1. Are grants awarded based on a clear competitive process that includes a qualified assessment of merit?
CO2. Does the program have oversight practices that provide sufficient knowledge of grantee activities?
CO3. Does the program collect grantee performance data on an annual basis and make it available to the public in a transparent and meaningful manner?
BF1. Does the program have oversight practices that provide sufficient knowledge of grantee activities?
BF2. Does the program collect grantee performance data on an annual basis and make it available to the public in a transparent and meaningful manner?
RG1. Did the program seek and take into account the views of all affected parties (e.g., consumers; large and small businesses; State, local and tribal governments; beneficiaries; and the general public) when developing significant regulations?
RG2. Did the program prepare adequate regulatory impact analyses if required by Executive Order 12866, regulatory flexibility analyses if required by the Regulatory Flexibility Act and SBREFA, and cost-benefit analyses if required under the Unfunded Mandates Reform Act; and did those analyses comply with OMB guidelines?
RG3. Does the program systematically review its current regulations to ensure consistency among all regulations in accomplishing program goals?
RG4. Are the regulations designed to achieve program goals, to the extent practicable, by maximizing the net benefits of its regulatory activity?
Capital Assets and Service Acquisition Programs
CA1. Is the program managed by maintaining clearly defined deliverables, capability/performance characteristics, and appropriate, credible cost and schedule goals?
CR1. Is the program managed on an ongoing basis to assure credit quality remains sound, collections and disbursements are timely, and reporting requirements are fulfilled?
CR2. Do the program’s credit models adequately provide reliable, consistent, accurate and transparent estimates of costs and the risk to the Government?
RD1. For R&D programs other than competitive grants programs, does the program allocate funds and use management processes that maintain program quality?

Section IV: Program Results/Accountability (Yes, Large Extent, Small Extent, No)
1. Has the program demonstrated adequate progress in achieving its long-term performance goals?
2. Does the program (including program partners) achieve its annual performance goals?
3. Does the program demonstrate improved efficiencies or cost effectiveness in achieving program goals each year?
4. Does the performance of this program compare favorably to other programs, including government, private, etc., with similar purpose and goals?
5. Do independent evaluations of sufficient scope and quality indicate that the program is effective and achieving results?

Specific Results Questions by Program Type
RG1. Were programmatic goals (and benefits) achieved at the least incremental societal cost and did the program maximize net benefits?
Capital Assets and Service Acquisition Programs
CA1. Were program goals achieved within budgeted costs and established schedules?

PART recommendations:
- Assess performance targets to ensure they are ambitious.
- Conduct a performance-focused review that will include, but is not limited to: analysis of program participants; length of time borrowers remain in the program; number of borrowers who 'graduate' and return to the program; effectiveness of targeted assistance; and the potential to reduce subsidy rates.
- Develop an efficiency measure such as 'cost per loan processed' to track administrative expenses and allow comparison among loan programs.
- Revise the long-term performance measure to better assess progress toward meeting the goal of improving the economic viability of farmers/ranchers.

Status: Action taken, but not completed. FSA participated in the USDA Credit Programs Common Efficiency Measure initiative along with FAS, RD, OBPA, and OMB to develop an efficiency measure to be used by all USDA agencies with credit programs: maintain or reduce the operating expense ratio for the average loan portfolio. In addition, the PART evaluation contained a recommendation to conduct a performance-focused review of the farm loan program. This review is being completed by an independent contractor and the results will be used to assess the effectiveness of guaranteed loans, as applicable.
Estimated completion date is 7/30/2006. FSA is developing new, outcome-oriented performance measures as part of the agency's strategic planning process and the development of the new FSA Strategic Plan.

Program Funding Level (in millions of dollars)

Program Summary: The Migratory Bird Program of the U.S. Fish and Wildlife Service is responsible for maintaining healthy migratory bird populations for the benefit of the American people. The program accomplishes this by conserving and restoring migratory bird populations, restoring and acquiring migratory bird habitat, surveying and monitoring migratory birds, and regulating the take of migratory birds. The program works closely with many partners to ensure the conservation of the birds. Although the program had supporting strategies, it did not have specific long-term outcome or annual output performance goals. Through the PART process, specific long-term outcome or annual output performance goals were developed. There are no regular objective, independent program performance evaluations of the entire program. Budget requests have not been explicitly tied to long-term performance goals. Program regulations have not been systematically reviewed to ensure consistency in accomplishing program goals or to determine whether the program is using the least intrusive and most efficient approach. While the program is working to incorporate performance goals into specific employee performance plans, the program needs to complete this task to ensure full accountability for achieving specific program goals.

In response to the PART findings, the Administration will:
1. Adopt long-term outcome and annual output goals developed during the PART process. Accomplishment of the outcome goals will depend on the efforts of many and will require the program to continue to work with partners to achieve these goals.
2. Request additional funding in the Budget to develop and implement management plans for five migratory bird species to help achieve the program’s new long-term goal to increase the percentage of migratory birds that are healthy and sustainable.
3. Develop baseline data and revise targets as necessary for new performance measures.
4. Schedule and carry out independent program evaluations, including the regulatory part of the program.
5. Link individual employee performance plans with specific goal-related performance targets for each year.

Annual Measure: Percent of bird population management needs met to achieve healthy and sustainable populations of birds listed on the Birds of Management Concern list. (Baseline and targets under development.)

Program Funding Level (in millions of dollars)

Thank you for the opportunity to comment on the draft GAO report on program evaluation (Performance Budgeting: PART Focuses Attention on Program Performance, But More Can Be Done to Engage Congress, GAO-06-28). We appreciate GAO’s continued interest in the Program Assessment Rating Tool (PART) and our determination to assess federal programs in a consistent fashion through it. As is acknowledged in your conclusion, “There is no doubt that creating a closer link between the resources expended on programs and the results we expect from those is an important goal.” We fervently believe that the PART has helped do just that, and we are grateful for any guidance you can provide that will help us achieve even better results. In this same spirit, OMB and agencies continue to search for ways to make PART assessments more rigorous and consistent. Additionally, we are implementing information technology solutions to make application of the PART less burdensome and more collaborative. Moreover, we reviewed each newly completed PART this year to ensure the answers were consistent with PART guidance.
These steps and others will make the PART more reliable, less of a burden, and, hopefully, more focused on identifying what steps programs need to take to become more effective. In many cases, it takes only administrative actions to address weaknesses in program efficiency and effectiveness, and the PART process has helped do just that. But where Congressional action is required to ameliorate a program flaw, GAO correctly points out that PART has been less successful. OMB and agencies are grateful for any specific suggestions GAO may have to obtain greater Congressional support for our initiative to improve the performance of all programs. OMB notes the particular interest that GAO has taken in the Administration’s standards for measuring performance. Thank you for your continued enthusiasm about the PART, as well as for your willingness to take our oral and written comments into consideration in the final draft. I look forward to working with you to improve the ways in which we are making the Federal Government more results-oriented.

In addition to the contact named above, Denise Fantone (Assistant Director), Thomas Beall, Kylie Gensimore, Joseph Leggero, Patrick Mullen, Jacqueline Nowicki, Stephanie Shipman, Katherine Wulff, and James Whitcomb made significant contributions to this report. Dianne Blank and Amy Rosewarne also provided key assistance.

21st Century Challenges: Performance Budgeting Could Help Promote Necessary Reexamination. GAO-05-709T. Washington, D.C.: June 14, 2005.
Management Reform: Assessing the President's Management Agenda. GAO-05-574T. Washington, D.C.: April 21, 2005.
Performance Budgeting: States’ Experiences Can Inform Federal Efforts. GAO-05-215. Washington, D.C.: February 28, 2005.
21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-352T. Washington, D.C.: February 16, 2005.
Long-Term Fiscal Issues: Increasing Transparency and Reexamining the Base of the Federal Budget. GAO-05-317T. Washington, D.C.: February 8, 2005.
21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 2005.
Performance Budgeting: Efforts to Restructure Budgets to Better Align Resources with Performance. GAO-05-117SP. Washington, D.C.: February 2005.
Performance Budgeting: OMB's Performance Rating Tool Presents Opportunities and Challenges for Evaluating Program Performance. GAO-04-550T. Washington, D.C.: March 11, 2004.
Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results. GAO-04-38. Washington, D.C.: March 10, 2004.
Performance Budgeting: OMB's Program Assessment Rating Tool Presents Opportunities and Challenges for Budget and Performance Integration. GAO-04-439T. Washington, D.C.: February 4, 2004.
Performance Budgeting: Observations on the Use of OMB’s Program Assessment Rating Tool for the Fiscal Year 2004 Budget. GAO-04-174. Washington, D.C.: January 30, 2004.
Results-Oriented Government: Using GPRA to Address 21st Century Challenges. GAO-03-1166T. Washington, D.C.: September 18, 2003.
Performance Budgeting: Current Developments and Future Prospects. GAO-03-595T. Washington, D.C.: April 1, 2003.
Performance Budgeting: Opportunities and Challenges. GAO-02-1106T. Washington, D.C.: September 19, 2002.
Managing for Results: Views on Ensuring the Usefulness of Agency Performance Information to Congress. GAO/GGD-00-35. Washington, D.C.: January 26, 2000.
Performance Budgeting: Fiscal Year 2000 Progress in Linking Plans With Budgets. GAO/AIMD-99-239R. Washington, D.C.: July 30, 1999.
Performance Budgeting: Initial Experiences Under the Results Act in Linking Plans With Budgets. GAO/AIMD/GGD-99-67. Washington, D.C.: April 12, 1999.
Managing for Results: Measuring Program Results That Are Under Limited Federal Control. GAO/GGD-99-16. Washington, D.C.: December 11, 1998.
The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven. GAO/GGD-97-109. Washington, D.C.: June 2, 1997.
Managing for Results: Analytic Challenges in Measuring Performance. GAO/HEHS/GGD-97-138. Washington, D.C.: May 30, 1997.
Performance Budgeting: Past Initiatives Offer Insights for GPRA Implementation. GAO/AIMD-97-46. Washington, D.C.: March 27, 1997.
Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1996.
Managing for Results: Achieving GPRA’s Objectives Requires Strong Congressional Role. GAO/T-GGD-96-79. Washington, D.C.: March 6, 1996.
GAO was asked to examine (1) the Office of Management and Budget's (OMB) and agency perspectives on the effects that the Program Assessment Rating Tool (PART) recommendations are having on agency operations and program results; (2) OMB's leadership in ensuring a complementary relationship between the PART and the Government Performance and Results Act of 1993 (GPRA); and (3) steps OMB has taken to involve Congress in the PART process. To do this, we also followed up on issues raised in our January 2004 report on the PART. The PART process has aided OMB's oversight of agencies, focused agencies' efforts to improve program management, and created or enhanced an evaluation culture within agencies. Although the PART has enhanced the focus on performance, it remains a labor-intensive process at OMB and agencies. Most PART recommendations are focused on improving outcome measures and data collection and are not designed to result in observable short-term performance improvements. Since these necessary first steps on the path to long-term program improvement do not usually lead to improved short-term results, there is limited evidence to date of the PART's influence on outcome-based program results. Moreover, as of February 2005--the date of the most recent available OMB data--the majority of follow-on actions had not yet been fully implemented. By design, OMB has not prioritized them within or among agencies. Because OMB has chosen to assess nearly all federal programs, OMB and agency resources are diffused across multiple areas instead of concentrated on those areas of highest priority both within agencies and across the federal government. This strategy is likely to lengthen the time it will take to observe measurable change compared with a more strategic approach. OMB has used the PART as a framework for several crosscutting reviews, but these have not always included all relevant tools, such as tax expenditures, that contribute to related goals.
Greater focus on selecting related programs and activities for concurrent review would improve their usefulness. OMB has taken some steps to clarify the PART-GPRA relationship but many agencies still struggle to balance the differing needs of the budget and planning processes and their various stakeholders. Unresolved tensions between GPRA and the PART can result in conflicting ideas about what to measure and how to measure it. Finally, we remain concerned that the focus of agencies' strategic planning continues to shift from long-term goal setting to short-term executive budget and planning needs. OMB uses a variety of methods to communicate PART results, but congressional committee staff we spoke with had concerns about the tool itself, how programs were defined, and the usefulness of goals and measures. Most said that the PART would more likely inform their deliberations if OMB consulted them early on regarding the selection and timing of programs; the methodology and evidence to be used; and how PART information can be communicated and presented to best meet their needs. It is also important that Congress take full advantage of the benefits arising from the executive reform agenda. While Congress has a number of opportunities to provide its perspective on specific performance issues and performance goals, opportunities also exist for Congress to enhance its institutional focus to enable a more systematic assessment of key programs and performance goals.
The L.A. courthouse operations currently are split between two buildings—the Spring Street Courthouse built in 1938 and the Roybal Federal Building built in 1992. The Spring Street building currently consists of 32 courtrooms—11 of which do not meet the judiciary’s minimum design standards for size. It also does not meet the security needs of the judiciary. The Roybal Federal Building, on the other hand, consists of 34 courtrooms (10 district, 6 magistrate, and 18 bankruptcy). The space within the L.A. Court’s buildings, as in most courthouses, is divided into courtroom space with associated jury and public spaces; chambers space, where judges’ and staff offices are located; cell blocks and other USMS spaces; and other support spaces, such as administrative offices. Since 2000, the construction of a new L.A. courthouse has been a top priority for the judiciary because of the current buildings’ space, security, and operational problems. Since fiscal year 2001, Congress has made three appropriations totaling about $400 million for a new L.A. courthouse. In fiscal year 2001, Congress provided $35.25 million to acquire a site for and design a 41-courtroom building, and in fiscal year 2004, Congress appropriated $50 million for construction of the new L.A. Courthouse. In fiscal year 2005, Congress appropriated $314.4 million for the construction of a new 41-courtroom building in Los Angeles, which Congress designated to remain available until expended for construction of the previously authorized L.A. Courthouse. Since 2000, when GSA originally proposed building a new courthouse in downtown Los Angeles, the project has experienced repeated schedule delays. In 2000, GSA projected occupancy of a new L.A. courthouse by fiscal year 2006. However, after proposing several changes in project scope and design and repeated delays, GSA projected in 2008 the completion of a new courthouse by fiscal year 2014—a delay of 8 years (see table 1).
GSA has spent $16.3 million designing a new courthouse and $16.9 million acquiring and preparing a new site for it in downtown Los Angeles. Since no construction has occurred, about $366.45 million remains in GSA’s Federal Building Fund for the construction of a 41-courtroom L.A. Courthouse. The delays were initially caused by GSA’s decision to design a courthouse much larger than what was authorized by Congress. In fiscal year 2001, Congress appropriated funds for project design for a 1,016,300-square-foot courthouse that corresponded with plans for a 41-courtroom courthouse. In November 2001, however, GSA designed a 1,279,650-square-foot courthouse that contained 54 courtrooms. GSA officials said that GSA increased the scope of the project to accommodate the judiciary’s stated need. Judiciary officials stated that the decision was made jointly with GSA and that changes to GSA’s planning criteria contributed to the increased scope. GSA officials disagreed and stated that GSA’s planning criteria did not contribute to the increase in the scope of the project. A year and a half later, after it had conducted the environmental assessments and purchased the site for the new courthouse, GSA informed Congress in a May 2003 proposal that it had designed a 54-courtroom courthouse. However, OMB did not include the 54-courtroom building plan in the President’s fiscal year 2005 budget, which caused GSA to revise its plans and reduce the number of courtrooms in the plans for the new L.A. courthouse to 41. According to GSA, the 54-courtroom courthouse plan was designed to be readily adaptable to a reduced scope if the larger scope was not approved. Nonetheless, a senior GSA official estimated that the initial decision to design a 54-courtroom courthouse delayed the project 2 years due to redesign and re-procurement requirements.
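The dollar figures above are internally consistent, as a quick arithmetic check confirms. This is only a sketch to make the relationships explicit; every figure in it is taken directly from the report.

```python
# Check that the three appropriations sum to the "about $400 million" total,
# and that spending on design and site acquisition leaves the stated balance.
# All figures (in millions of dollars) come from the report.

appropriations = {
    "FY2001 site acquisition and design": 35.25,
    "FY2004 construction": 50.0,
    "FY2005 construction": 314.4,
}
spent = {
    "courthouse design": 16.3,
    "site acquisition and preparation": 16.9,
}

total_appropriated = sum(appropriations.values())   # 399.65 -- "about $400 million"
remaining = total_appropriated - sum(spent.values())

print(f"Total appropriated: ${total_appropriated:.2f} million")
print(f"Remaining for construction: ${remaining:.2f} million")  # $366.45 million
```

The $366.45 million balance in the Federal Building Fund is thus exactly the three appropriations less the $33.2 million already spent on design and site work.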
This delay caused the project as initially planned to go over budget due to inflationary cost escalations, and GSA needed to make further reductions to the courthouse in order to procure it within authorized and appropriated amounts. However, GSA and L.A. Court officials were slow to reduce scope, which caused additional delays and led to the need for still further reductions. For example, GSA did not simplify the building-high atrium and associated curtain wall that were initially envisioned for the new courthouse until January 2006, even though the judiciary had expressed repeated concerns about the construction and maintenance costs of the atrium since 2002. In July 2005, GSA advised the judiciary that the project could not be constructed for the appropriated amounts because of material shortages and other market factors, and in January 2006, GSA completed a redesigned plan with a simplified atrium and curtain wall. In addition, it took 18 months for GSA to formally propose reducing the number of courtrooms in an attempt to reduce costs. In March 2006, GSA cancelled the procurement of the new courthouse due to insufficient competition when one of the two construction contractors bidding on the 41-courtroom project withdrew. Yet it was not until the following year, in May 2007, that the judiciary proposed reducing the number of courtrooms in a new building to 36, and it took another 4 months before GSA delivered a revised 36-courtroom proposal to Congress. Additionally, an unforeseen, rapid increase in construction costs contributed to delays in the L.A. courthouse project. According to GSA officials, construction costs escalated in the L.A. market at more than twice the inflation factor used by GSA, necessitating scope reductions and redesigns and causing more delays. GSA officials stated that the escalations in construction costs, which went as high as 16 percent in 2006, were unprecedented and unpredictable.
According to information provided by GSA, construction costs escalated nationwide and also affected the construction of a California state courthouse in Long Beach, California, which is near Los Angeles. Other issues related to the procurement process for the new courthouse also contributed to the delays in the L.A. courthouse project by diminishing contractor interest in the project or diverting contractors to other projects. For example, GSA solicited bids for the construction of the neighboring San Diego and L.A. courthouses around the same time. According to GSA officials, in hindsight, this may have limited the number of potential bidders for the construction of the L.A. courthouse because contractors with limited regional capacity chose to bid on the smaller San Diego project instead of the L.A. project. Furthermore, the L.A. courthouse project was competing with other public works construction in the Los Angeles area. GSA officials estimated that $50 billion worth of public construction projects in the L.A. market, including increased spending to renovate local schools, further limited the number of potential bidders for the L.A. courthouse project. GSA officials also stated that they chose a procurement approach designed to provide contractors with flexibility in meeting budgeted construction costs, but this approach may actually have lowered contractor interest by making the contractor responsible for more of the risk of cost overruns. Over the 8 years that GSA’s estimated occupancy of the new L.A. courthouse has been delayed, cost estimates have nearly tripled, rendering GSA’s currently authorized 41-courtroom courthouse unachievable. In May 2004, GSA estimated the 41-courtroom courthouse project would cost about $400 million, but current estimates for building a new federal courthouse of similar scope now exceed $1.1 billion. At this rate, each day of additional delay costs about $54,000, assuming current escalation rates, according to GSA. 
Consequently, every 44 days of additional delay costs as much as one 2,400-square-foot district courtroom. GSA is currently at a standstill because current cost estimates for a 41-courtroom courthouse exceed authorized and appropriated amounts and the President’s fiscal year 2009 budget request did not include any funds for the L.A. courthouse project. GSA will therefore need to obtain congressional approval to move forward on any plan. Specifically, all options currently under consideration would require approval of a new prospectus and an estimated appropriation of between $282.1 million and $733.6 million if cost estimates are still viable. Because of the delays in the courthouse project, the operational, space, and security issues that made the new courthouse a top priority have persisted and in some cases worsened. The L.A. Court’s operational problems continue. Housing district and magistrate judges in both the Spring Street and the Roybal buildings causes operational inefficiencies, according to judiciary officials. For example, judges, prisoners, juries, and evidence must be transported between buildings, and many judicial offices need to be duplicated. In addition, a high-level L.A. Court official said that the judiciary has stopped investing in the parts of the Spring Street Courthouse for which it is responsible because it expects to move into a new building. The L.A. Court’s space needs persist. L.A. Court officials said that the court does not have chamber or courtroom space for four pending district judgeships and that it currently faces growing deficits in a number of support areas (see table 2). Severe security problems at Spring Street remain. According to USMS officials, the Roybal building has strong security, but security at the Spring Street building is poor and cannot be improved due to the age and design of the building. 
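The cost-of-delay figures cited above can be checked with simple arithmetic. The following is a minimal sketch; the per-square-foot figure at the end is our own derived illustration, not a number GSA reported:

```python
# Back-of-the-envelope check of GSA's cost-of-delay figures.

cost_per_day = 54_000    # GSA's estimated escalation cost per day of delay
days_per_courtroom = 44  # days of delay GSA equates to one district courtroom

cost_per_courtroom = cost_per_day * days_per_courtroom
print(cost_per_courtroom)  # 2376000 dollars, roughly one 2,400-sq-ft courtroom

# Implied escalation cost per square foot (derived illustration, not a GSA figure):
print(round(cost_per_courtroom / 2_400))  # 990
```

That is, 44 days at $54,000 per day is about $2.4 million, consistent with the equivalence GSA drew to a single district courtroom.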
The Spring Street building lacks a secure parking area and secure prisoner corridors for 20 of its 32 courtrooms. In addition, USMS officials said that they do not use the prisoner corridors that do exist because they are unsafe and do not have holding cells just outside the courtrooms in accordance with judiciary security standards. USMS officials also said that the security situation is worsening in Los Angeles because logs showed a five-fold increase in suspicious activities in L.A. federal courthouses from 2004 to 2007. Since 2000, GSA has developed eight different proposals for housing the L.A. Court. Three of them are still under consideration (see table 3); proposals still under consideration are bolded in the table and identified as options in the rest of the report. Each of the options under consideration would require balancing court needs with costs, obtaining a new authorization and appropriation, and considering other benefits and challenges. Each of these remaining options expands the use of Roybal as a federal courthouse to varying degrees, and only one option would continue to use the Spring Street building as a courthouse (see table 4). Each of these options would require congressional approval beyond what GSA has already received. In September 2007, GSA drafted the 36-courtroom building proposal, but the President did not include any funds for the project in his fiscal year 2009 budget request to Congress. Then, in March 2008, GSA developed the 20-courtroom building proposal, but it has not been authorized and no funds have been appropriated for it. GSA estimated that the 36-courtroom project would cost $1.1 billion—$733.6 million more than Congress has already appropriated—and be completed by 2014 if construction starts in 2009. This project would provide the L.A. 
Court with 74 courtrooms in total, including 36 district courtrooms, 20 magistrate courtrooms, and 18 bankruptcy courtrooms, all of which would meet or exceed the judiciary’s current design standards for size and security. The main advantage of this project is that it would allow a division of operational and support activities between the new courthouse and the Roybal building according to the function and responsibilities of the judges, which court officials and judges said would be more efficient than the current split. All the district and senior judges would be housed in the new courthouse, while the magistrate and bankruptcy judges would be in the Roybal building. In addition, because this plan includes a large new building, its implementation would not disrupt court operations by requiring substantial renovation of space the court is simultaneously using. The court favors this plan, in part, because it would fulfill its need for a larger building through courtroom sharing among senior judges, who would occupy the extra chambers in the new building. The challenges of building a 36-courtroom courthouse are the high costs and the possibility that GSA would face the same problems attracting contractors as it did when it attempted to contract for the construction of a 41-courtroom building. GSA estimated that the 20-courtroom project would cost $701.1 million—$301.5 million more than Congress has already appropriated—and be completed by 2014 if construction starts in 2009. This project would provide the L.A. Court with 66 courtrooms in total, including 36 district courtrooms, 20 magistrate courtrooms, and 10 bankruptcy courtrooms. With congressional approval, GSA could use existing funds to begin planning and constructing the new building. In addition, the planned 20-courtroom building may be expandable at some future time. This plan would also maximize the use of Roybal as a courthouse. 
The challenges of building the 20-courtroom courthouse are that district judges would continue to be split between two buildings and it is unclear what support operations would move to the new building. In addition, the success of this plan relies on GSA’s obtaining an authorization and appropriation to add 12 courtrooms in Roybal. Without that appropriation, the L.A. Court would likely have to remain in the Spring Street building—meaning it would be split between three buildings, not just two, as is currently the case. Another challenge related to the 20-courtroom building plan is that GSA would need to build the new courtrooms in Roybal while the building is occupied by the L.A. Court. GSA officials said that this type of renovation is possible if the most disruptive work is done at night and on weekends. However, judiciary officials said that court officials often need to work at night and on weekends. In addition, the L.A. district judges unanimously opposed this plan because it would split district judges over a greater distance. The proposed location of the 20-courtroom building is about a third of a mile further from Roybal than the Spring Street Courthouse is. The L.A. Court also opposes this plan because it believes that GSA has underestimated the costs, overstated the end capacity, and would have trouble attracting bidders for the project. GSA estimated that the third option, which would house the court in existing buildings, would cost $648.4 million—$282.1 million more than Congress has already appropriated. In 2008, GSA estimated that it could complete the project by 2016, but to do so, it would have had to start work in January, which it did not do. For example, GSA’s time line for this project assumed that procurement of the design contract would be completed by April 2008; that work has not yet begun. This proposal would provide the L.A. Court with 64 courtrooms in total, composed of 29 district courtrooms, 17 magistrate courtrooms, and 18 bankruptcy courtrooms. 
GSA’s proposal indicated that some of the courtrooms would not meet the judiciary’s design standards for size. The advantages of this plan are that it would maximize the use of GSA’s current stock of owned buildings in downtown Los Angeles, and that, with congressional approval, GSA could use existing funds to begin working on the project. Another advantage would be that GSA could sell the site it initially purchased for the new courthouse in order to help offset the costs of the project. The plan also would attempt to address the security concerns that currently exist in the Spring Street building. However, many of the same challenges for the 20-courtroom courthouse also exist for this plan, including the need to renovate occupied space and a lack of clarity about where different support operations would be located. In addition, the court’s operations would be split further among the Spring Street building, the Roybal building, and the federal building located between those two buildings. Also, the estimate only covers security upgrades for the Spring Street building, not a complete renovation. This project also has the longest time until completion of the three projects, putting it at greater risk for additional cost escalations. Finally, the L.A. Court considers this the worst of the three options. Because there is neither consensus nor adequate funding to complete any of the plans currently under consideration, another option is for GSA and the judiciary to restart the planning process and develop a new proposal to meet the long-term needs of the L.A. Court that all stakeholders can support. Since GSA has developed numerous proposals on housing the L.A. Court, it is difficult to know which one it believes is the best solution, and the district judges assigned to the L.A. Court unanimously opposed GSA’s most recent proposal to build a 20-courtroom building. 
Restarting the planning process would help avoid implementing one of the plans that the judiciary does not support. The remaining $366.5 million appropriated for the project could remain in place for meeting the judiciary’s needs in Los Angeles once a project is agreed upon, or, with congressional approval to transfer or rescind the funds, the money could be used for other purposes, such as addressing GSA’s $6.6 billion repair and maintenance backlog. This option would not address any of the L.A. Court’s long-standing space deficits, operational problems with a split court, or security and other problems related to the Spring Street building, and some of the problems would likely worsen until a long-term solution could be found. We are not advocating this or any of the other three options. Our intent is to identify current options so that Congress and stakeholders can evaluate them. Nonetheless, it is clear the current process is deadlocked. We provided the Administrative Office of the U.S. Courts and GSA with draft copies of this report for their review and comment. In written comments, the Administrative Office of the U.S. Courts indicated that the report reflects the general sequence of events and circumstances that have led to the current situation. The letter also provided technical comments that we incorporated, as appropriate. The letter and our comments are contained in appendix II. In written comments, GSA indicated that it partially agreed with the report’s findings related to the delays in the L.A. Courthouse project and provided additional technical comments that we incorporated, as appropriate. In the technical comments, GSA indicated that the judiciary has been reluctant to consider any reduction in the scope of the project as requested by GSA. Our report indicates that GSA and the judiciary were slow to reduce scope to stay on budget. GSA’s written comments are contained in appendix III. 
We are sending copies of this report to the GSA Administrator and the Director of the Administrative Office of the U.S. Courts. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report answers the following questions: 1. What is the status of the construction of a new federal courthouse in Los Angeles? 2. What effects have any delays in the project had on project costs and court operations? 3. What are the options for the future of the project? To answer these questions, we: Reviewed project documents, including the project time line, project options analysis, planning studies, proposals, and other budget data. Toured L.A. federal court sites, including the Spring Street Courthouse, the Edward R. Roybal Federal Building and Courthouse, the federal building on Los Angeles Street, and the planned courthouse site. Interviewed L.A. district and magistrate judges and other court officials, the Administrative Office of the U.S. Courts, the General Services Administration (GSA), and the U.S. Marshals Service (USMS). Conducted our work in Washington, D.C., from January 2008 to May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Slow decision making, paired with unexpectedly high cost escalation rates, rendered the currently authorized prospectus for a new 41-courtroom building unachievable with currently appropriated funds. Stakeholders do not agree on how to proceed. 
GSA has developed numerous proposals on the L.A. courthouse and currently supports the 20-courtroom courthouse. The federal judges in Los Angeles unanimously opposed GSA’s most recent proposal to build a 20-courtroom building. The following are GAO’s comments on the Administrative Office of the U.S. Courts’ letter dated August 25, 2008. 1. The Administrative Office of the U.S. Courts indicated that additional details on the reasoning behind the decision to propose a 41-courtroom courthouse would be helpful. GSA officials said that the decision to propose a 41-courtroom courthouse was based on 80 percent of the federal judiciary’s stated need at the time—80 percent of 51 courtrooms is approximately 41—and that the judiciary could fit within that space by sharing courtrooms. We added this information to the body of the report. 2. We clarified the report in response to this comment. 3. We did not do a detailed assessment of the possible 20-courtroom courthouse plan and, consequently, did not assess whether it provides space for future expansion. However, there may be design concepts that would leave sufficient room for expansion on the 3.7-acre site, which originally supported the 54-courtroom courthouse plan developed by GSA. 4. Our report does not make any statements related to the number of bankruptcy courtrooms required by the federal judiciary in Los Angeles, but does list the number of those courtrooms that GSA projects for each of the current options and thus shows that the 20-courtroom courthouse option would provide 8 fewer bankruptcy courtrooms in Los Angeles than the other options currently being considered. 5. Assessing the validity of GSA’s project budget and schedule was outside the scope of this report. The U.S. House of Representatives, Committee on Transportation and Infrastructure, Subcommittee on Economic Development, Public Buildings, and Emergency Management requested this information from the GSA Inspector General. 6. 
We have clarified the report to reflect that the estimated costs to house the L.A. Court have tripled. 7. We clarified the report to reflect that Roybal currently houses 10 district, 6 magistrate, and 18 bankruptcy courtrooms. In addition to the individual named above, David Sausville, Assistant Director; Keith Cunningham; Bess Eisenstadt; Susan Michal-Smith; Jennifer Kim; and Susan Sachs made key contributions to this report.
Since the early 1990s, the General Services Administration (GSA) and the federal judiciary (judiciary) have been carrying out a multibillion-dollar courthouse construction initiative. In downtown Los Angeles, California, the federal judiciary has split the district, magistrate, and bankruptcy judges of one of the nation's busiest federal district courts (L.A. Court) between two buildings--the Spring Street Courthouse and the Edward R. Roybal Federal Building and Courthouse. In 2000 the judiciary requested and GSA proposed building a new courthouse in downtown Los Angeles in order to increase security, efficiency, and space. In response, Congress authorized and appropriated about $400 million for the project. GAO was asked to provide information on the construction of the L.A. courthouse. This report answers: (1) What is the status of the construction of a new federal courthouse in Los Angeles? (2) What effects have any delays in the project had on its costs and court operations? (3) What options are available for the future of the project? GAO reviewed project planning and budget documents, visited the key sites in Los Angeles, and interviewed GSA and judiciary officials. In its comments, the judiciary indicated that the report reflects the project's general sequence of events and circumstances, and GSA partially agreed with the report's findings related to the delays. GSA initially estimated in 2000 that the L.A. Court could take occupancy of a new courthouse in fiscal year 2006, but occupancy has been delayed by 8 years to fiscal year 2014 at the earliest. GSA has spent $16.3 million designing a new courthouse and $16.9 million acquiring and preparing a new site for it in downtown Los Angeles. Since no construction has occurred, about $366.45 million remains appropriated for the construction of a 41-courtroom L.A. Courthouse. 
Project delays were caused by GSA's decision to design a larger courthouse than what was authorized by Congress, slow decision making by GSA and the judiciary to reduce scope and stay on budget, unforeseen cost escalations, and low contractor interest that caused GSA to cancel the entire 41-courtroom courthouse project. Due to the delays, estimated costs for housing the L.A. Court have nearly tripled to over $1.1 billion, rendering GSA's currently authorized 41-courtroom courthouse unachievable and causing the L.A. Court's problems to persist. Because current cost estimates exceed authorized and appropriated amounts, GSA will need to obtain congressional approval to move forward on any plan. Meanwhile, almost half of the courtrooms in the L.A. Court's Spring Street building do not meet the judiciary's standards for size or security, and the U.S. Marshals have chosen not to use the prisoner passageways that exist in the building because they are too dangerous and inefficient. The L.A. Court also estimates that current courtroom and support space shortages will continue to worsen over time. GAO's analysis showed that four options exist for the L.A. Courthouse project, which require balancing needs for courtroom space, congressional approval, and additional estimated appropriations of up to $733 million. First, GSA has proposed building a 36-courtroom, 45-chamber courthouse to house all district and senior judges and adding 4 more courtrooms in the Roybal building to house all magistrate and bankruptcy judges. The L.A. Court supports this option, but it is the most expensive of the remaining options. Second, GSA has proposed constructing a new 20-courtroom, 20-chamber building and adding 12 more courtrooms to the Roybal building. GSA could begin construction with existing funds, but the L.A. Court opposes this option. Third, GSA has proposed housing the L.A. 
court in the existing buildings by adding 13 courtrooms to the Roybal building and upgrading security at the Spring Street building. GSA could begin work on the project with existing funds but the L.A. Court also opposes this option. Finally, another option, given the lack of consensus and adequate funding, is to restart the planning process. Under this option, the remaining $366.45 million appropriated for the courthouse could continue to be available for meeting the judiciary's needs in Los Angeles or be used for other purposes through a transfer or rescission. While GAO takes no position on this or the other three options, it is clear the current process is deadlocked.
Education’s Direct Loan program provides financing to students and their parents to help students obtain postsecondary education. This program is currently the largest federal direct loan program, with $912 billion in outstanding loans as of June 2016. Under this program, Education issues several types of student loans described in the following sidebar. William D. Ford Federal Direct Loan Types Subsidized Stafford Loans: Available only to undergraduate students with financial need (generally the difference between their cost of attendance and a measure of their ability to pay, known as expected family contribution). The interest rate as of July 1, 2016, is 3.76 percent. Borrowers are not responsible for paying interest on these loans while in school and during certain periods of deferment. Unsubsidized Stafford Loans: Available both to undergraduate and graduate school students irrespective of financial need. Interest rates as of July 1, 2016, are 3.76 percent for undergraduates and 5.31 percent for graduate school borrowers. Borrowers must pay all interest on these loans. PLUS Loans: Available to graduate student borrowers and parents of dependent undergraduates. The interest rate as of July 1, 2016, is 6.31 percent. Borrowers must pay all interest on these loans. Consolidation Loans: Available to student and parent borrowers wanting to combine multiple federal student loans (including those listed above) into one loan. Repayment periods are extended up to 30 years, thereby lowering monthly payments. Interest rates are equal to the weighted average of the underlying loans. Education offers a variety of repayment plans for Direct Loan borrowers: Standard, Graduated, Extended, and Income-Driven. Income-Driven Repayment (IDR) is an umbrella term that describes a number of repayment plans available to Direct Loan borrowers who meet specific eligibility requirements, as seen in figure 1. 
Unlike the Standard, Graduated, and Extended repayment plans, IDR plans offer loan forgiveness at the end of the repayment term. Additionally, their repayment terms are longer than under the Standard and Graduated plans, which are set at 10 years for non-consolidated loans. Borrowers in IDR plans generally have lower monthly payments compared to the Standard 10-year repayment plan. They may also pay less in the long term than they would under the Standard 10-year repayment plan due to the opportunity for eventual loan forgiveness. However, some borrowers may pay more. Borrowers in IDR plans can ultimately pay more in interest on their loans than they would under the Standard 10-year repayment plan due to longer repayment periods. Some borrowers will also fully repay their loans before their IDR plan repayment term ends and, therefore, not receive forgiveness. Additionally, under current tax law any amount forgiven under these plans is subject to federal income tax. In addition to making monthly payments more manageable (and eventually reducing the total amount owed for some borrowers receiving forgiveness), IDR plans may also reduce the risk of default. Borrowers who default on student loans face serious consequences, including damaged credit ratings and difficulty obtaining affordable credit in the future. In 2015, we reported that borrowers in two IDR plans had much lower default rates than borrowers in the Standard repayment plan. Specifically, among borrowers who entered repayment from fiscal year 2010 through fiscal year 2014, less than 1 percent of borrowers in the Income-Based Repayment and Pay As You Earn plans had defaulted on their loans, compared to 14 percent in the Standard repayment plan. To participate in an IDR plan, borrowers must provide documentation of their adjusted gross income (which we generally refer to as income in this report) to their loan servicer and certify their family size for an eligibility determination. 
Borrowers must recertify this information annually, which is used to update the borrower’s monthly payment amount. A borrower who fails to provide updated income information can remain in an IDR plan in order to qualify for future loan forgiveness, but their monthly payments will no longer be based on their income. Rather, payments will generally revert to the amount that would be owed under the Standard 10-year repayment plan until the borrower submits the required information. Borrowers who work in public service may lower their long-term loan costs by participating in the Public Service Loan Forgiveness (PSLF) program while repaying their loans through an IDR plan. Beginning in October 2017, borrowers eligible for PSLF can have their remaining Direct Loan balances forgiven after at least 10 years of payments in eligible repayment plans, generally an IDR plan or the Standard 10-year repayment plan. As we recently reported, PSLF may provide substantial savings over the life of the loan for qualifying borrowers in IDR plans compared to what they would pay without the PSLF benefit. In contrast, borrowers in the Standard 10-year repayment plan would pay their loans in full by the time they were eligible for forgiveness under PSLF. (See figure 2.) Participation in IDR plans has grown over time, as seen in figure 3. According to currently available quarterly data released by Education, the percent of outstanding Direct Loan dollars being repaid through IDR plans doubled from June 2013 to June 2016 to 40 percent. The percent of borrowers participating in IDR plans more than doubled over the same time period to 24 percent. However, as we previously reported, some borrowers who could benefit from IDR plans may still not be aware of them. 
As the variety of IDR options available to borrowers has expanded in recent years, there have been numerous reform proposals with a variety of goals ranging from simplifying IDR plans and better targeting their benefits to changing the tax treatment of IDR plan loan forgiveness. For instance, recent President’s budgets have proposed limiting the available IDR plan options for new borrowers to one revised IDR plan designed to better target benefits to the highest-need borrowers. A proposal has been introduced in the current Congress that would similarly make only one IDR plan available to new borrowers and target more generous benefits to those with lower incomes. Additional legislative proposals would automatically enroll all borrowers in a version of income-driven repayment and withhold payments from borrowers’ paychecks. Other proposed legislation would allow for automatic annual recertification of borrowers’ incomes and automatically place certain delinquent borrowers in an IDR plan. Another proposal would expand IDR plan eligibility to parents with Parent PLUS loans for dependent students. Legislation has also been introduced that would exempt student loan forgiveness under certain IDR plans from being taxed as income. As required by the Federal Credit Reform Act of 1990, Education estimates the long-term costs, known as subsidy costs, of the Direct Loan program annually for inclusion in the President’s budget. For Direct Loans, subsidy costs represent the estimated cost to the government of extending credit over the life of the loan, excluding administrative costs. (In this report, we generally refer to subsidy costs as “costs.”) Subsidy cost estimates are calculated based on the net present value of lifetime estimated cash flows to and from the government associated with these loans. 
For Direct Loans, cash flows from the government include loan disbursements to borrowers, while cash flows to the government include repayments of loan principal, interest and fee payments, and recoveries on defaulted loans. A positive subsidy cost estimate indicates that the government anticipates a net cost, while a negative subsidy cost estimate indicates that the government anticipates generating net subsidy income, not counting administrative costs. Education also annually reestimates the cost of loans made in each fiscal year, known as a loan cohort. Reestimates take into account actual loan performance as well as changes in assumptions about future performance, such as how many borrowers will default or how many will participate in different repayment plans. Reestimates may result in increases or decreases in subsidy cost estimates. No loan cohorts have been fully repaid, and estimates for all cohorts continue to be updated annually in the President’s budget. To estimate subsidy costs, Education has developed a student loan cash flow model (the student loan model) that incorporates a variety of assumptions about the future. These assumptions concern various aspects of loan performance, such as how many borrowers will prepay their loans, how many borrowers will default, and how successful default collection activities will be. Education uses a supplementary model to assist with the task of estimating repayment patterns for loans in IDR plans. (See appendix II for a description of how this supplementary model for estimating IDR plan repayment patterns works.) In the spring of 2015, Education initiated a redesign of its overall student loan model with technical support from Treasury and guidance from OMB in what is anticipated to be a multi-year project. 
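The net-present-value mechanics described above can be illustrated with a small sketch. All cash flows and the discount rate below are hypothetical round numbers chosen for illustration; they are not Education's actual estimates or methodology:

```python
# Illustrative subsidy cost calculation: the net present value of
# estimated cash flows to and from the government over a loan's life.
# All figures are hypothetical, not Education's actual estimates.

def npv(cash_flows, discount_rate):
    """Discount each year's net cash flow back to the disbursement year."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

# Year 0: the government disburses $100 (outflow, negative).
# Years 1-10: the borrower repays $11 per year (inflows, positive).
cash_flows = [-100.0] + [11.0] * 10

subsidy_cost = -npv(cash_flows, discount_rate=0.03)
print(round(subsidy_cost, 2))  # a positive value, i.e., a net cost
```

In this sketch the borrower nominally repays $110 on a $100 loan, yet after discounting at 3 percent the government still shows a positive subsidy cost; a negative result would instead indicate net subsidy income, as described above.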
Through our analysis of data underlying the President’s fiscal year 2017 budget, we found that Education estimates that Direct Loans in IDR plans will cost the government about $74 billion over their repayment term. More specifically, Education estimates that about $355 billion in loans will enter an IDR plan, and $281 billion will ultimately be paid by borrowers. As a result, Education expects a 21 percent subsidy rate, or an average cost to the government of $21 for every $100 in loans disbursed. See figure 4. All of the Direct Loan types eligible to participate in IDR plans contribute to the $74 billion Education estimates the government will incur in subsidy costs. Of these loan types, Consolidation loans are estimated to be the most costly, as seen in figure 5. Consolidation loans, which combine multiple existing federal student loans into one loan, are larger on average than other types of Direct Loans, and may have higher balances forgiven at the end of their repayment term. Further, Education officials said that some borrowers in IDR plans with Consolidation loans have higher default risks than other borrowers, which leads to higher expected subsidy rates for these loans. Education estimates lower subsidy costs for Subsidized and Unsubsidized Stafford and PLUS loans for graduate student borrowers (known as Grad PLUS loans) than Consolidation loans. As figure 6 shows, Education estimates higher subsidy costs for loans participating in IDR plans from more recent loan cohorts compared to loans from older cohorts. Figure 7 shows that these higher estimated costs track closely with the higher loan volume (or total loan dollars) estimated to enter IDR plans for more recent loan cohorts. 
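The subsidy-rate arithmetic above can be checked against the report's rounded totals. This is only a rough consistency check (figures in billions of dollars); the actual estimates are net-present-value based, so the simple subtraction below is an approximation:

```python
# Rough consistency check of the IDR subsidy figures (billions of dollars,
# rounded as reported; actual estimates are net-present-value based).

entering_idr = 355.0   # estimated loan volume entering IDR plans
repaid = 281.0         # estimated amount ultimately paid by borrowers

subsidy_cost = entering_idr - repaid
subsidy_rate = subsidy_cost / entering_idr

print(subsidy_cost)               # 74.0
print(round(subsidy_rate * 100))  # 21 -> about $21 per $100 disbursed
```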
Education officials confirmed that this higher estimated loan volume is likely related to three key factors: (1) more generous IDR plans available for recently issued loans, (2) increased efforts to make borrowers aware of IDR plans, and (3) increased overall volume of Direct Loans issued as a result of increased college attendance following the 2008 economic downturn and the end of the Federal Family Education Loan program (which guaranteed federal student loans issued by private lenders) in 2010. While borrowers in IDR plans in more recent loan cohorts have access to more generous benefits (which could lead to higher government costs), these loan cohorts do not have higher estimated subsidy rates than earlier loan cohorts, as seen in figure 8. Direct Loan subsidy rates fluctuate according to changes in a variety of factors, and are particularly sensitive to changes in government borrowing costs and borrower interest rates. As we previously reported, government borrowing costs fell sharply in 2009, due to historically low interest rates on Treasury securities. This contributed to lower overall estimated subsidy rates for Direct Loans issued following the 2008 loan cohort. Education has raised its estimates of IDR plan costs in recent years through its annual process of revising past budget estimates to account for actual loan performance and updated assumptions about future loan performance. In figure 9, we compare Education’s original IDR plan subsidy cost estimates for loans issued in recent cohorts to its current subsidy cost estimates prepared for the President’s fiscal year 2017 budget. Our results show that current estimated IDR plan costs are more than double what was originally expected for these cohorts. For instance, Education originally estimated in the President’s fiscal year 2012 budget that IDR plan costs for the 2012 cohort would be $1.2 billion. As of the fiscal year 2017 budget, Education’s estimate had grown to $3 billion. 
(We also compared Education’s fiscal year 2016 IDR plan budget estimates to its fiscal year 2017 budget estimates to illustrate how Education’s cost estimates changed over one budget cycle, and present the results of that analysis in appendix IV.) As seen in figure 10, subsidy rates have remained relatively stable from original to current estimates, while the volume of loans expected to be repaid in IDR plans has increased dramatically. Because Education expects loans in IDR plans to have positive subsidy rates (or to have costs to the government), this growth in estimated loan volume has been accompanied by increasing estimates of IDR plan costs. According to our data analysis and interviews with Education officials, Education may have originally underestimated the volume of loans that would enter IDR plans from these cohorts for several reasons:

1. Education did not include Grad PLUS loans in its IDR plan subsidy estimates until the fiscal year 2015 budget, even though Grad PLUS loans have been eligible for IDR plans since they were first issued in 2006. Education officials said that they had to make a model adjustment in order to include Grad PLUS loans in IDR estimates; prior to this adjustment, they assumed all Grad PLUS loans would be repaid in other repayment plans.

2. Policy changes made IDR plans more generous and available to more borrowers after Education originally estimated costs for some cohorts. For example, the Pay As You Earn repayment plan was implemented in fiscal year 2013 and retroactively made more generous benefits available to certain borrowers with loans issued as early as the 2008 cohort.

3. While some eligible borrowers still may not be aware of IDR plans, participation rates are growing, and officials responsible for budget estimates may not have adequately anticipated participation growth. 
While we previously reported that there are substantial challenges associated with estimating Direct Loan subsidy costs, these challenges are increased for Direct Loans in IDR plans due to their complex features and other uncertainties. It is difficult for Education to estimate which borrowers have incomes low enough to benefit from or be eligible for IDR plans because Education does not collect income information for all Direct Loan borrowers. Additionally, IDR plan participation rates are difficult to predict. While participation has been growing rapidly in recent years, it is unclear at what rate it will continue to grow. It is also challenging to predict how the incomes of borrowers already participating in IDR plans will change over time and how much loan principal will ultimately be forgiven. Further complicating Education’s task is the fact that the large majority of loans expected to be repaid in IDR plans are from recent cohorts, and many borrowers in these cohorts have not yet started repaying their loans. As a result, there is limited actual repayment data available to inform Education’s estimates. Further, no borrower has received loan forgiveness under IDR plans. Volatility in subsidy cost estimates is generally expected to be greatest early in the life of a loan cohort, and to decrease over time as more actual repayment data are incorporated into estimates. When we compared original, third-year, and currently estimated IDR plan subsidy cost estimates for several recent cohorts, we found that third-year estimates were generally closer to current estimated costs than the original, as figure 11 illustrates. However, estimates will continue to change over time, and actual subsidy costs of a loan cohort will not be known until all loans in the cohort have been repaid, which may take 40 years. 
While loans in IDR plans are expected to have long-term costs to the government, loans in other repayment plans (Standard, Graduated, and Extended) are expected to generate greater subsidy income, as seen in figure 12. Figure 12 also illustrates that Education currently expects income to be higher for more recent cohorts than older cohorts. However, as mentioned previously, subsidy cost estimates change over time, and the actual costs or income attributable to any Direct Loan cohort will not be known until all loans in the cohort are repaid. Subsidy income estimates for loans participating in non-IDR plans vary by loan type and repayment plan. Unsubsidized Stafford and PLUS loans participating in the Standard 10-year repayment plan are estimated to result in the greatest subsidy income to the government. This could be due in part to the higher interest rates charged to borrowers with Unsubsidized Stafford and PLUS loans compared to Subsidized Stafford loans, as well as a higher volume of loans participating in Standard repayment compared to other repayment plan options. See figure 13. Further, as with loans in IDR plans, Education’s estimates of subsidy income from loans in non-IDR plans have changed over time and will continue to fluctuate as they are updated with actual repayment data and revised assumptions about future cash flows. We found that estimated income associated with loans participating in non-IDR plans increased (about $19 billion more) for some cohorts and decreased (about $36 billion less) for other cohorts when we compared Education’s original and current estimates for those cohorts (2009-2015). While Education currently estimates that loans in IDR plans will have costs to the government, these plans are designed to provide relief to struggling borrowers, which could indicate that government subsidies may be expected. 
By tying monthly payments to borrowers’ incomes, IDR plans help make potentially onerous student debt payments more affordable for many individuals. Because these borrowers’ repayment amounts may be lower than they otherwise would be, borrowers in IDR plans may have more success in making their loan payments than borrowers in other plans. As we previously reported, substantially lower percentages of participants in the Income-Based Repayment and Pay As You Earn repayment plans had defaulted on their loans compared to those in the Standard 10-year repayment plan, and the great majority of borrowers in these IDR plans were in active repayment status (e.g., not in delinquency, default, or forbearance). Further, because IDR plans attract borrowers experiencing difficulty repaying their loans in other plans, increased IDR participation from these borrowers may lead to lower subsidy rates for non-IDR plans. Education’s approach to estimating IDR plan costs has numerous weaknesses that may result in unreliable budget estimates. Poor quality control practices, such as inadequate model testing, contributed to issues we identified. Further, because Education publishes only limited information about its estimates, it may be difficult for policymakers to assess expected plan costs and consider the potential for alternative outcomes. Due to several methodological limitations, Education’s approach to estimating IDR plan costs may result in unreliable budget estimates. First, Education did not adequately assess the reliability of the data it uses to forecast borrower incomes over time, or assess the level of error these data or its forecasting methods introduced into its IDR plan budget estimates. Second, it did not consider how inflation would affect borrowers’ incomes over time. Third, Education unrealistically assumes that no borrower will fail to recertify their income, which is required of borrowers annually to maintain lower income-driven payment amounts. 
Fourth, Education does not account for future growth in IDR plan participation rates. Fifth, Education does not produce separate cost estimates for each of the five IDR plans currently available to borrowers. Finally, Education’s cost estimates for Subsidized Stafford, Unsubsidized Stafford, and Grad PLUS loans in IDR plans do not account for likely differences in how they will perform over time. Education’s IDR plan cost estimates are vulnerable to unidentified error because Education has not adequately assessed the reliability of the estimated borrower income data and methods it uses to forecast borrower incomes many years into the future—information that is vital to determining how much borrowers will owe and repay on their loans over time. Education conducted only limited, informal testing to assess the data’s reliability, in part because the agency had short timeframes in which to develop its approach to estimating IDR plan costs, according to officials we interviewed. Education did not measure the amount of error these data introduced into IDR plan cost estimates to determine whether it was acceptable, or if alternative data were needed. Through our data reliability testing, we identified patterns in the estimated historical income data suggesting reliability problems that could make them unacceptable for Education’s purposes. An analysis by Treasury (the agency that created the estimated historical income data) indicates that the data fluctuate on average by 44 percent more per year than the actual income data upon which they were based. In figure 14, we illustrate this fluctuation for five randomly selected borrowers from the estimated dataset over the first 10 years of their repayment period. (See appendix III for more information on how these data were estimated and our evaluation of them.) 
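The kind of reliability check Treasury performed, comparing average year-over-year fluctuation in the estimated income series against fluctuation in the actual income series, can be sketched as follows. The specific metric (mean absolute year-over-year percent change) and the toy borrower data are assumptions for illustration; the report provides only the 44 percent summary result.

```python
def mean_yoy_change(incomes):
    """Mean absolute year-over-year percent change in an income series."""
    changes = [abs(later - earlier) / earlier
               for earlier, later in zip(incomes, incomes[1:])
               if earlier > 0]
    return sum(changes) / len(changes)

def excess_fluctuation(estimated, actual):
    """Proportion by which the estimated series fluctuates more per year
    than the actual series it was derived from."""
    return mean_yoy_change(estimated) / mean_yoy_change(actual) - 1

# Toy borrower: a smoothly rising actual income vs. a noisier estimate.
actual = [40_000, 42_000, 44_000, 46_000, 48_000]
estimated = [40_000, 47_000, 41_000, 52_000, 45_000]
excess = excess_fluctuation(estimated, actual)  # positive: estimate is noisier
```

Because Education's repayment calculations run borrower by borrower and year by year, this kind of individual-level noise can propagate through the estimates even when the aggregate data look reasonable.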
Education uses individuals’ estimated historical incomes, such as those illustrated in figure 14, to make numerous sequential calculations that determine how much each borrower will owe and pay in each year of the borrower’s repayment period. While the estimated historical income data appeared more reasonable in the aggregate, Education officials confirmed that any unusual fluctuations in them at the individual borrower level could affect the quality of IDR plan budget estimates. In addition to being vulnerable to error associated with the estimated historical income data they use, Education’s IDR plan budget estimates may further be affected by error associated with the agency’s method for forecasting borrowers’ incomes for up to 30 years into the future. The accuracy of any forecast—separately from the reliability of the data used for forecasting—depends on how well the data and forecasting methods can estimate future incomes. However, Education did not assess the amount of error in its forecasts of borrower incomes. Until Education assesses its forecasting methodology, its IDR plan cost estimates may be vulnerable to unidentified error. Both federal guidance for estimating subsidy costs and Education’s own information quality standards emphasize the importance of ensuring that estimates are based on reliable data. Education’s information quality standards and generally accepted statistical practices also recommend measuring error to assess its impact on estimates. Education officials agreed with the concerns we raised regarding their borrower income data and said they are open to improving data quality as necessary to help ensure reliable IDR plan budget estimates. Quality data and methods are essential to Education’s estimation approach, and both should be assessed to determine whether they produce reasonable results. (See appendix III for more information on error associated with Education’s data and methods.) 
In addition to insufficiently assessing the reliability of its income data and forecasting methods, Education has not adjusted its income forecasts for inflation, causing IDR plan budget estimates to appear higher than they otherwise would be. Adjusting for inflation would increase borrowers’ future incomes and payment amounts, because loan payments are based on borrowers’ incomes. Increasing payment amounts would, in turn, decrease costs to the government. When asked, Education officials said they did not adjust income forecasts for inflation because they did not identify patterns in the estimated historical income data suggesting that incomes would be affected by inflation. Whether or not these patterns were evident when reviewing the data, there was inflation over the almost 20-year period covered by the historical dataset and there is likely to be inflation in the future. Federal guidance for estimating subsidy costs stresses the importance of taking economic effects into account when estimating loan performance. For IDR plan costs, this would include the extent to which inflation affects borrower incomes and payment amounts. By choosing not to adjust income forecasts to capture inflation’s future effects, Education overestimated IDR plan costs. When we used Education’s data and computer programs to adjust borrowers’ future incomes for inflation, as well as the federal poverty guidelines used to calculate their discretionary incomes, we found that IDR plan budget estimates declined by over $17 billion compared with Education’s current IDR plan budget estimates. (See figure 15.) In light of the substantial effects of inflation on borrower incomes and loan repayment amounts, inflation adjustment is essential to developing reliable IDR plan budget estimates. Until Education adjusts for inflation, its budget estimates will continue to inaccurately represent potential IDR plan costs. 
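The direction of the inflation effect follows from how income-driven payments are calculated: payments are a share of income above a poverty-based threshold, so inflating both projected incomes and the poverty guidelines raises nominal payment amounts, and higher payments mean lower government costs. A minimal sketch, assuming a Pay As You Earn-style payment of 10 percent of discretionary income and an illustrative 2 percent annual inflation rate (the income, poverty guideline, and horizon below are hypothetical):

```python
def idr_annual_payment(income, poverty_line, share=0.10):
    """Income-driven payment: a share of discretionary income, defined as
    income above 150% of the poverty guideline. The 10% share follows
    plans like Pay As You Earn; other IDR plans use different shares."""
    return max(0.0, share * (income - 1.5 * poverty_line))

def total_payments(income, poverty_line, years, inflation=0.0):
    """Nominal payments summed over `years`, growing both income and the
    poverty guideline at an assumed inflation rate."""
    total = 0.0
    for t in range(years):
        growth = (1 + inflation) ** t
        total += idr_annual_payment(income * growth, poverty_line * growth)
    return total

# Ignoring inflation understates future nominal payments, which in turn
# overstates estimated costs to the government.
flat = total_payments(45_000, 12_000, years=20, inflation=0.0)
inflated = total_payments(45_000, 12_000, years=20, inflation=0.02)
```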
Additionally, Education assumes that all borrowers in IDR plans will recertify their incomes every year as required, which is likely to be inaccurate and could lead Education to overstate IDR plan costs. In fact, we recently reported that over half of borrowers in an Education sample failed to do so. When borrowers fail to recertify their income, Education generally increases their payments to what they would owe under the Standard 10-year repayment plan until they submit their required recertification. For some borrowers who fail to recertify their income, payments could increase by hundreds of dollars a month. While some borrowers may subsequently recertify within a few months, others may never recertify. Because Education does not take these occurrences into account, it underestimates what borrowers will pay when their certification lapses. Education officials told us they did not include certification lapses in their approach to estimating IDR plan costs because they lacked recertification data linked to individuals. They also believed that certification lapses would not have a large impact on their estimates. Initially, officials said the agency is taking steps to reduce the number of borrowers failing to recertify. However, officials later acknowledged that these efforts are in the early stages of implementation, and there have been some setbacks. Until efforts to improve recertification rates are put in place, certification lapses will likely continue. Further, without data indicating that certification lapses do not have a large impact on borrower payment amounts, Education may overstate IDR plan costs. Federal guidance for estimating subsidy costs states that the information used in the estimation process should reflect actual repayment patterns for loans whose costs are being estimated, which would include instances when a borrower’s payment amount changes due to program rules. 
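The size of the payment jump when a borrower fails to recertify can be illustrated by comparing an income-driven payment with the Standard 10-year payment the borrower is moved to, which is an ordinary fully amortizing payment. The balance, interest rate, and income below are hypothetical, and the 10-percent-of-discretionary-income rule follows plans like Pay As You Earn:

```python
def standard_monthly_payment(balance, annual_rate, years=10):
    """Fully amortizing monthly payment (Standard 10-year plan formula)."""
    r = annual_rate / 12
    n = years * 12
    return balance * r / (1 - (1 + r) ** -n)

def idr_monthly_payment(income, poverty_line, share=0.10):
    """Illustrative income-driven payment: a share of income above 150%
    of the poverty guideline, spread over 12 months."""
    return max(0.0, share * (income - 1.5 * poverty_line) / 12)

# Hypothetical borrower: $50,000 balance at 6% interest, $40,000 income.
standard = standard_monthly_payment(50_000, 0.06)  # roughly $555 a month
idr = idr_monthly_payment(40_000, 12_000)          # roughly $183 a month
jump = standard - idr  # a lapse raises the bill by hundreds of dollars
```

Assuming every borrower always pays the lower amount, as Education does, misses the months in which lapsed borrowers owe the higher Standard amount.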
Obtaining data on borrowers’ actual repayment patterns after they fail to recertify their income could help Education determine whether its current approach appropriately accounts for the impact of recertification failure on IDR plan costs, and determine whether changes are needed. Education likely underestimates IDR plan participation because it assumes all borrowers will remain in their currently selected repayment plan for their entire repayment period. This assumption conflicts with the fact that borrowers can switch into or out of IDR plans at any time, and IDR plan participation has grown in recent years. Participation is also likely to continue growing. Education agreed with our recent recommendation that the agency increase its efforts to make all borrowers aware of IDR plans. Further, as previously mentioned, the Administration recently announced a goal to enroll 2 million additional borrowers in IDR plans. As a result of Education’s likely underestimation of IDR plan participation, its IDR plan budget estimates may be biased downward, or appear lower than they otherwise should be. We found that Education’s IDR plan budget estimates for loans issued in recent cohorts have more than doubled over what was originally expected ($53 billion vs. $25 billion), primarily because of higher than expected participation in IDR plans. Federal guidance for estimating subsidy costs of federal loan programs states that it is preferable to use methods to estimate costs that are more sophisticated than relying solely on historical data, such as borrowers’ past plan selection. While Education’s current student loan model was not designed to project future changes in plan participation, officials told us that despite the challenge of predicting future borrower behavior they are working with Treasury to develop a more sophisticated model, and have begun incorporating this enhancement into a test version of this new model. 
Additional work remains to ensure that the new model reasonably reflects trends in IDR plan participation—particularly borrowers switching into IDR plans from other repayment plans. For instance, IDR plans have not yet been added to the new model, which currently includes only the Standard and Extended repayment plans. Education’s model redesign is anticipated to be a multi-year project, and until the model has been completed and tested to ensure reasonable results, Education’s IDR plan budget estimates are vulnerable to underestimated IDR plan participation and costs. Additionally, Education does not produce separate cost estimates for each of the five IDR plans currently available, even though these plans provide different benefits to borrowers and will likely have different costs to the government. For instance, the Income-Contingent Repayment plan has less generous provisions for borrowers than the Pay As You Earn plan, and as a result will likely have lower costs to the government. However, Education does not estimate these plans’ costs separately. According to Education officials, the student loan model, which it uses to generate official estimates of total Direct Loan costs, was created when only one IDR plan was available and cannot produce separate estimates for each IDR plan. While the supplementary model Education uses to estimate IDR plan repayment patterns could track repayment streams separately for each plan, its outputs must conform to the structure of the larger student loan model. Federal guidance for estimating subsidy costs for federal loan programs specifies that agencies should assess the impact of changes in laws or regulations (such as the introduction of new repayment plans) on the reliability of estimates and should ensure that an agency’s methodology reflects these changes. 
While Education officials expressed concern about the complexity of estimating separate costs for each IDR plan, OMB staff told us that Education should add this capability as part of Education’s efforts to develop a more sophisticated model. Incorporating the ability to track costs of each IDR plan separately would help ensure that estimates more accurately reflect the current loan environment and provide valuable information to policymakers interested in streamlining student loan repayment options moving forward. Lastly, Education combines repayment patterns for several types of loans eligible for IDR plans, obscuring likely differences in their performance over time. As a result, its budget estimates for Subsidized Stafford, Unsubsidized Stafford, and Grad PLUS loans in IDR plans are based on identical repayment patterns, although these types of loans have numerous distinct features. For instance, the current interest rate on a Grad PLUS loan is almost double that of a Subsidized Stafford loan, leading borrowers with Grad PLUS loans to owe much more in interest on those loans over time. Conversely, borrowers with Subsidized Stafford loans will pay down principal on their loans more quickly over time because less of their payment goes toward interest. However, Education’s cost estimates do not reflect higher expected interest payments on Grad PLUS loans in IDR plans or faster principal repayment on Subsidized Stafford loans in IDR plans, because they are based on aggregate repayment patterns that include both types of loans. Education officials told us that, as a result of this practice, all differences in published subsidy rates for these loan types are wholly attributable to fees charged to borrowers at the time the loans are issued and how much interest accrues during the relatively short period that borrowers are still in school. 
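The amortization logic behind this point can be seen in a short sketch: with the same balance and the same monthly payment, a higher interest rate shifts more of each payment toward interest and less toward principal. The balances, rates, and payment amount below are illustrative assumptions, not the statutory terms of either loan type:

```python
def first_year_split(balance, annual_rate, monthly_payment):
    """Split 12 months of payments on a loan into interest vs. principal."""
    interest_paid = principal_paid = 0.0
    for _ in range(12):
        interest = balance * annual_rate / 12
        principal = max(0.0, monthly_payment - interest)
        interest_paid += interest
        principal_paid += principal
        balance -= principal
    return interest_paid, principal_paid

# Same balance and payment at two rates (hypothetical figures): the
# higher-rate loan accrues more interest and pays down less principal.
low_i, low_p = first_year_split(30_000, 0.04, 300)    # Stafford-like rate
high_i, high_p = first_year_split(30_000, 0.07, 300)  # Grad PLUS-like rate
```

Averaging these two repayment patterns together, as Education's estimates effectively do, hides exactly this difference.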
Because Education’s estimates do not reflect differences in performance over the decades that loans in IDR plans may be in repayment, users of the budget are missing key information that could help them assess how IDR plan costs vary by loan type. As an example, some experts have raised concerns that Grad PLUS loans could have relatively high forgiveness amounts because they are larger on average than Stafford loans and may have a large amount of outstanding loan principal at the end of their repayment term. Due to limitations in Education’s current approach, users of the budget cannot determine the extent to which this concern affects subsidy rates for Grad PLUS loans in IDR plans. According to Education officials, they could have separately estimated repayment patterns for each loan type, but did not believe that it was important to do so for several reasons. First, officials stated that they focused their efforts on estimating separate repayment patterns for Consolidation loans because they make up the majority of loans in IDR plans. However, nearly half of IDR plan loan volume—or $164 billion—is made up of Subsidized Stafford, Unsubsidized Stafford, and Grad PLUS loans, and it is important to estimate their repayment patterns accurately as well. Second, officials stated that they did not believe it was necessary to maintain separate repayment patterns for each type of loan because borrowers often have a mix of loans and repay them simultaneously. While this is true, policymakers have an interest in budget information that accurately reflects expected costs for each type of loan eligible for IDR plans. Further, federal guidance for estimating subsidy costs of federal loan programs states that loan characteristics—such as loan types—are critical for identifying factors that predict subsidy costs and should be preserved. 
Until Education separately tracks repayment patterns for each type of loan in IDR plans, its cost estimates will continue to omit important differences in loan characteristics, calling their reliability into question, and policymakers will be unable to assess the relative costs of different types of loans. Inadequate quality control practices contribute to concerns we identified regarding Education’s approach to estimating IDR plan costs. First, management has not ensured that the agency’s supplementary model for estimating IDR plan repayment patterns is properly documented. Second, management has not reviewed or approved that model. Third, management has not ensured that the model has been sufficiently tested for reliability. Education has not ensured that its supplementary model for estimating IDR plan repayment patterns is properly documented. While a broad narrative summary of the model is available, agency officials confirmed that other technical documentation recommended in federal guidance for estimating subsidy costs does not exist. For instance, Education does not have a flow chart or other similar documentation specifying how elements of the estimation process—which is implemented by nearly 50 computer programs—are sequenced and interact with each other. Additionally, the numerous mathematical formulas embedded in these programs are not separately documented, and there is no data dictionary to decode the variable names and values. Standards for internal control in the federal government state that documentation is a necessary part of an effective internal control system. Federal guidance for estimating subsidy costs states that model documentation should be thorough enough that a knowledgeable independent person could follow the estimation process and replicate its results with little to no assistance. Such documentation is not available for Education’s supplementary model for estimating IDR plan repayment patterns. 
We recently recommended that Education improve documentation of its overall process for estimating costs of Direct Loans. Education agreed with this recommendation, and officials stated that they were in the process of improving their documentation practices. Further, we found that Education’s managers did not review and approve the supplementary model for estimating IDR plan repayment patterns, as recommended in federal guidance for estimating subsidy costs, after it was developed by staff. Additionally, as a good practice, we have found that agencies often hire an independent firm to ensure that model calculations are accurate and consistent with documentation. However, Education officials confirmed that their supplementary model for estimating IDR plan repayment patterns has not been reviewed by an independent firm. Some of the concerns we identified in the previous section of our report regarding Education’s estimation approach could have been detected and resolved through an internal management review or independent external review. For instance, we found that the decision not to adjust borrower income forecasts for inflation causes IDR plan budget estimates to be $17 billion higher than they otherwise would be. We also found that PSLF loan forgiveness was programmed to begin a year after the benefit will actually become available to eligible borrowers. When we revised these programs to allow loans to be forgiven a year earlier, estimated IDR plan costs rose by $70 million. Agency staff told us this decision was made because borrowers were not likely to make the 120 consecutive on-time payments necessary to qualify for immediate forgiveness. However, Education already makes assumptions about when borrowers will not make scheduled loan payments. 
An internal management review or independent external review may have pointed toward another solution—such as adjusting how often borrowers are assumed to have periods of non-payment—rather than simply delaying the PSLF start date. We recently recommended that Education create a documented process for management review and approval of its student loan model. Education agreed with this recommendation, and officials told us they also hoped to have their revised student loan model reviewed by an outside party in the future. Although Education currently expects loans in IDR plans to be the most costly component of the Direct Loan portfolio, management has not ensured that its supplementary model for estimating IDR plan repayment patterns has been thoroughly tested. Such testing can help identify weaknesses so that they can be addressed, and help ensure that estimates are reasonable. As we previously mentioned, Education had not conducted the necessary testing to thoroughly assess the reliability of its borrower income data or measured error associated with its income forecasting methodology. Without such testing, Education officials do not know whether their data and methods produce reasonable results, or if alternatives are needed. Further, Education conducted sensitivity analysis on only one key assumption—borrower incomes—at the request of OMB. Federal guidance for estimating subsidy costs states that agencies should conduct sensitivity analysis—which involves adjusting an assumption up or down by a fixed proportion—or other testing to identify which assumptions have the largest influence on cost estimates. This information helps management anticipate the cost implications of alternative scenarios and focus oversight resources on key assumptions to help ensure that they are reliable and reasonable. 
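Mechanically, the sensitivity analysis described in this guidance reruns a cost estimate with one assumption shifted by fixed increments while everything else is held constant. The stand-in cost function and all of its inputs below are illustrative assumptions, not Education's model or figures:

```python
def estimated_pslf_cost(participation_rate,
                        eligible_volume=90.0, forgiven_share=0.5):
    """Toy stand-in for a cost model: PSLF cost (in $ billions) as eligible
    loan volume times the participation rate times the share of
    participating balances ultimately forgiven. All inputs are
    hypothetical placeholders for a real cash flow model."""
    return eligible_volume * participation_rate * forgiven_share

baseline_rate = 0.25  # assumed baseline participation assumption
baseline_cost = estimated_pslf_cost(baseline_rate)

# Shift the participation assumption by fixed increments and record how
# the cost estimate responds relative to the baseline.
sensitivity = {
    f"{shift:+d} pts":
        estimated_pslf_cost(baseline_rate + shift / 100) - baseline_cost
    for shift in (-10, -5, 5, 10)
}
```

A table like `sensitivity` is the basic output of this technique: it shows which direction and roughly how far the estimate moves per unit of change in the assumption, which is what lets management focus oversight on the assumptions that matter most.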
However, Education officials told us they only conducted sensitivity analysis when asked by others, preferring instead to focus their resources on developing a single set of assumptions they believed were best. Developing a sound set of assumptions is, of course, important. Sensitivity analysis supports, rather than detracts from, this effort. For instance, little is known about how many borrowers are eligible for or will participate in PSLF when it becomes available in October 2017. Despite this uncertainty and concerns among some experts and policymakers that PSLF could be costly to the government, Education has not conducted sensitivity analysis on its PSLF participation assumption. In order to illustrate the importance of conducting sensitivity analysis on major assumptions, we first revised Education’s computer programs to increase the percentage of borrowers expected to participate in PSLF by 5 and 10 percentage points. As illustrated in figure 16, costs rose by $4.4 and $9 billion, respectively. We then decreased the percentage of borrowers participating in PSLF by 5 and 10 percentage points. As seen in figure 17, costs fell by similar amounts. Our results illustrate the potential for PSLF costs to be different than what Education currently expects, and why it is important for Education to monitor this assumption and adjust it as necessary to ensure that it is reasonable. Without conducting similar sensitivity analysis on other major assumptions, monitoring those assumptions carefully, and adjusting them as necessary to ensure that they are reliable, Education’s budget estimates are vulnerable to bias that could result in costs being over- or understated by billions of dollars. In addition to identifying limitations in Education’s approach to estimating IDR plan costs and its quality control practices, we also found that Education has not published sufficient information about its estimates for policymakers to readily assess expected IDR plan costs. 
The kinds of information that Education has not published—and that could be useful to policymakers—include (1) total expected costs, (2) trend in estimates, (3) sensitivity analysis results, (4) limitations in estimates, and (5) estimated forgiveness amounts. Education officials noted that the department takes its responsiveness to policymakers and the general public seriously, and that the agency has responded to information requests about IDR plan cost estimates by congressional staff. However, congressional interest in IDR plans is high, and currently available information may be insufficient for policymakers to accurately assess likely plan costs and consider the potential for alternative outcomes. For instance, as a part of the President's budget, Education publishes IDR plan loan volume and subsidy rate estimates for loans issued in the current and two most recent cohorts. This information can be used to calculate expected IDR plan costs for this limited group of loans. However, it is not possible to use this information to determine total expected costs for all loans in IDR plans. Additionally, Education has disclosed in reports accompanying the President's budget that IDR plans are major contributors to upward revisions in estimated Direct Loan costs as a whole, but it has not reported the amount by which IDR plan costs have risen or clearly described the reasons why. Using unpublished data from Education, we found that total current expected IDR plan costs are about $74 billion, or $21 for every $100 issued. We also found that expected IDR plan costs have more than doubled from $25 billion to $53 billion for loans issued from fiscal years 2009 through 2016, primarily due to the growing volume of loans expected to be repaid in IDR plans. Publishing more comprehensive information like this could help policymakers better understand currently expected costs and monitor trends in the Direct Loan portfolio. 
Additionally, by publishing sensitivity analysis results and limitations in estimates, Education could help policymakers understand what is known about possible IDR plan costs, and what is still unknown. Our own sensitivity analysis illustrates that IDR plan costs could be billions of dollars more or less than currently estimated if PSLF participation is higher or lower than expected. Given growth in IDR plan cost estimates over time due to the rising volume of loans expected to be repaid in these plans, it would also be useful to disclose that current estimates assume that no borrowers will switch from other repayment plans into IDR plans in the future. Lastly, sharing the amount of principal Education expects to forgive on loans in IDR plans could help policymakers better understand a key plan feature that contributes to their expected costs. Education officials raised concerns that publishing forgiveness amounts could be misleading, because it is possible for the government still to generate income on loans with principal forgiven, particularly if borrower interest payments exceed forgiveness amounts. While this is true, loan amounts forgiven do represent foregone cash flows to the government. Further, legislation has been introduced in Congress to make forgiveness under certain IDR plans tax-free. Sharing information about expected forgiveness amounts could help policymakers better understand the scope of currently expected loan forgiveness and the potential tax implications of excluding forgiveness from taxable income. We calculated currently expected IDR plan forgiveness amounts using cash flow estimates provided by Education. For our analysis, we calculated the amount of loan principal Education expects borrowers in IDR plans to repay, and the amount it expects borrowers not to repay due to forgiveness and other reasons. Our results are in figure 18. 
When discussing expanded information sharing, Education officials and OMB staff agreed that there could be value in reporting additional information about IDR plan cost estimates. An Education official raised concerns about the agency’s ability to publish additional cost information, because OMB determines what is presented in the President’s budget. OMB staff agreed that such information would be too detailed for the President’s budget, but suggested that Education could provide more detailed IDR plan cost information through separate reports. Education’s strategic plan emphasizes the importance of information transparency as a tool to encourage data-driven decision-making and improve the U.S. educational system. Standards for internal control in the federal government also note that management should share quality information externally. By more thoroughly disclosing IDR plan cost information— such as total estimated costs, sensitivity analysis results, key limitations in estimates, and expected forgiveness amounts—Education could help policymakers better assess the cost implications of current IDR plan provisions and consider whether reforms are needed. Policymakers need reliable budget estimates to help align federal expenditures with policy priorities. In an environment of scarce resources, quality budget information becomes all the more important, as policymakers face difficult funding decisions. While IDR plans are a promising tool to help alleviate the burden of student loan debt and reduce borrowers’ risk of default, they may be costly for the federal government. Some uncertainty is unavoidable when anticipating long- term loan costs, but we found numerous shortcomings in Education’s estimation approach and quality control practices that call into question the reliability of its budget estimates and affect the quality of information Congress has to make informed budget decisions. 
Because Education administers the federal government's largest direct loan program, it is especially important that the agency correct its methodological weaknesses associated with estimating IDR plan costs. More specifically, until Education assesses and improves the quality of data and methods it uses to forecast borrowers' future incomes and accounts for inflation in its estimates, its IDR plan budget estimates may be unreliable. Further, until Education obtains data needed to estimate the impact of income recertification lapses on borrower payment amounts, it will not know whether borrower payments are currently underestimated and whether adjustments are needed to avoid overstating IDR plan costs. In addition, until Education's planned revisions to its student loan model have been completed and tested to ensure reasonableness, the agency's IDR plan budget estimates will not reasonably reflect participation trends in IDR plans, particularly the extent to which borrowers in other repayment plans may switch into them. In the interim, Education may continue to understate IDR plan costs by billions of dollars, as past trends in estimates indicate. Without separately tracking how available IDR plans and the types of loans eligible for them perform relative to each other over time, Education's estimates will lack the detail needed to inform policymakers' ongoing efforts to streamline plans and better target costs. In addition to correcting its methodological weaknesses, Education could enhance the reliability of its budget estimates by implementing more robust quality control practices. Implementing our previous recommendation to more thoroughly document and review its approach could help Education's management identify and resolve weaknesses. More robust model testing, including more extensive sensitivity analysis, could also help Education's management identify and mitigate problems that may reduce the reliability of its budget estimates. 
Moreover, as Education works to improve the quality of its IDR plan budget estimates, it could also help policymakers better understand the scope of currently expected costs and the potential for alternative outcomes by publishing more detailed information about its estimates, such as total estimated costs, the results of sensitivity analysis, key limitations, and expected forgiveness amounts. This information could help better support efforts to assess the cost-effectiveness of IDR plans and design any needed reforms. We recommend that the Secretary of Education take the following six actions:
1. Assess and improve, as necessary, the quality of data and methods used to forecast borrower incomes, and revise the forecasting method to account for inflation in estimates.
2. Obtain data needed to assess the impact of income recertification lapses on borrower payment amounts, and adjust estimated borrower repayment patterns as necessary.
3. Complete efforts to incorporate repayment plan switching into the agency's redesigned student loan model, and conduct testing to help ensure that the model produces estimates that reasonably reflect trends in Income-Driven Repayment plan participation.
4. As a part of the agency's ongoing student loan model redesign efforts, add the capability to produce separate cost estimates for each Income-Driven Repayment plan and more accurately reflect likely repayment patterns for each type of loan eligible for these plans.
5. More thoroughly test the agency's approach to estimating Income-Driven Repayment plan costs, including by conducting more comprehensive sensitivity analysis on key assumptions and adjusting those assumptions (such as the agency's Public Service Loan Forgiveness participation assumption) to ensure reasonableness.
6. 
Publish more detailed Income-Driven Repayment plan cost information—beyond what is regularly provided through the President's budget—including items such as total estimated costs, sensitivity analysis results, key limitations, and expected forgiveness amounts. We provided a draft of our report to the U.S. Department of Education (Education) for its review and comment. We provided relevant excerpts from our report to the U.S. Department of the Treasury and incorporated its technical comments as appropriate. We provided a draft of our report to the Office of Management and Budget for technical review, and did not receive technical comments in response. Education generally agreed with our recommendations, stating that in light of growing IDR plan participation, the agency has focused efforts on improving IDR plan budget estimates. Additionally, Education said that estimating federal student loan costs is a task it takes very seriously, and that the agency is constantly seeking to enhance and refine its models. First, Education agreed to assess and improve its borrower income forecasts, and listed additional factors it wished to consider when determining how to incorporate inflation into its forecasts. Second, Education agreed to attempt to obtain data to assess the impact of income recertification lapses on borrower payment amounts. Education reiterated its belief that such lapses may have only a small impact on plan costs, but did not provide data to support that view. We clarified the language in our recommendation to indicate that model adjustments should only be undertaken as needed, based on the outcome of Education's review of relevant data. Third, Education also agreed to incorporate repayment plan switching into its redesigned student loan model, and reiterated that efforts to incorporate this capability had begun despite challenges inherent in predicting borrower behavior. 
Fourth, Education agreed to add the capability to produce separate cost estimates for each IDR plan and each eligible loan type into its redesigned student loan model. Given the concern Education raised in its letter that revising its current approach to improve loan-type estimates may not be a good use of resources, we revised our recommendation to clarify that this improvement could be undertaken as a part of the agency’s longer-term efforts to redesign its student loan model. Fifth, Education agreed to test its approach to estimating IDR plan costs more thoroughly, including through more comprehensive sensitivity analysis. Education further explained its rationale for delaying the Public Service Loan Forgiveness (PSLF) start date in its cost model, citing preliminary evidence suggesting that few borrowers will make the 120 consecutive on-time payments necessary to receive forgiveness in the program’s first year. Education also raised concerns that using the correct start date (which we found caused estimated costs to rise by $70 million) would overstate costs. We noted Education’s rationale and concerns in our report, and responded that another solution—such as adjusting how often borrowers are assumed not to make scheduled loan payments— may be more appropriate than simply delaying the PSLF start date. Education agreed with our sixth recommendation to publish more detailed IDR plan cost information and stated that it plans to present sensitivity analysis results and key limitations in upcoming financial reports. Education’s comments are reproduced in appendix V. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 15 days from the report date. At that time, we will send copies to interested congressional committees and to the U.S. Departments of Education and the Treasury and the Office of Management and Budget. 
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. This appendix discusses in detail our methodology for addressing (1) the U.S. Department of Education's (Education's) current Income-Driven Repayment (IDR) plan budget estimates and how they have changed over time and (2) the extent to which Education's approach to estimating IDR plan costs and its quality control practices help ensure reliable budget estimates. To address these objectives, we reviewed relevant federal laws, regulations, and guidance on the William D. Ford Federal Direct Loan (Direct Loan) Program and IDR plans. We reviewed documentation and interviewed officials from Education about the agency's approach to estimating costs and its quality control practices. We also interviewed officials from the Congressional Budget Office and the U.S. Department of the Treasury (Treasury) and staff at the Office of Management and Budget (OMB), as well as higher education policy experts, to discuss issues related to federal budgeting practices and estimated IDR plan costs. To answer our first objective, we analyzed and reported on data underlying Education's annual budget estimates for the Direct Loan program. To answer our second objective, we evaluated Education's estimation approach and conducted sensitivity analysis to determine the impact of alternative assumptions on Education's cost estimates. We also calculated the proportion of loan dollars Education expects to forgive under IDR plans using estimated cash flow data provided by Education. These analyses are described in more detail below. 
To assess the reliability of Education's budget estimates, we interviewed agency officials, reviewed related documentation, and conducted extensive electronic testing. We believe the data are reliable to report in objective one as a representation of the funding Education reports is necessary to operate the Direct Loan program, and in objective two, to illustrate the sensitivity of Education's budget estimates to different assumptions about future loan repayment activity and to illustrate currently expected forgiveness amounts. To analyze Education's current IDR plan budget estimates and how they have changed over time, we reviewed Education's annual submissions to the President's budget for fiscal years 2011 through 2017, which include estimated IDR plan loan volume and subsidy rates for Direct Loans to be issued in the year of the budget and the two preceding fiscal years. For example, the budget submission for fiscal year 2011 included estimated IDR plan costs for loans in fiscal years 2009, 2010, and 2011. We used these budgets to identify the original IDR plan cost estimates for the 2009 through 2016 cohorts. Education did not publish subsidy cost estimates by repayment plan prior to the 2011 budget and could not easily provide the information necessary to determine original IDR plan cost estimates for previous cohorts. We also reviewed supplemental unpublished data from Education to illustrate current IDR plan subsidy cost estimates for loans issued in fiscal years 1995 through 2017 using assumptions underlying its estimates for the President's fiscal year 2016 and 2017 budgets. We used these supplemental data, along with published data for the 2017 cohort, to calculate current total reestimated subsidy costs and subsidy income for each repayment plan, loan cohort, and loan type. We also compared original published IDR plan subsidy estimates for the 2009-2016 cohorts to the current reestimated IDR plan subsidy costs for those cohorts. 
We limited our comparison to these cohorts because Education did not publish subsidy cost estimates by repayment plan in earlier budgets and does not maintain information that would be needed to identify past estimates. (In appendix IV, we also compare Education's IDR plan subsidy cost estimates for the fiscal year 2017 budget with those prepared for the fiscal year 2016 budget to illustrate how estimates changed from one budget to the next.) We compared the supplemental unpublished data to published data from the fiscal year 2016 and 2017 credit supplements to the President's budget and interviewed Education officials to clarify reasons for minor discrepancies. To understand and evaluate Education's approach to estimating the cost of loans in IDR plans, we first reviewed available documentation from Education on the supplementary model Education created to estimate repayment patterns of loans in IDR plans (referred to as the IDR plan repayment model in this appendix). We also reviewed documentation on Education's student loan model, which uses information from the IDR repayment model and other assumptions to calculate total subsidy costs. (See appendix II for detailed information on how Education estimates IDR costs using these models.) This documentation provided limited details regarding the steps of Education's IDR repayment model or how assumptions were operationalized and programmed in the model. Given the limited documentation available regarding Education's IDR plan repayment model, we reviewed the computer programs and datasets used in the model. Education provided us with SAS program files and data input files used in the model. The data input files contained the sample of Direct Loan borrowers Education used in its analysis as well as estimated historical incomes of those borrowers provided by Treasury. (See appendix III for more information on these historical income estimates.) 
The SAS program files implementing the model forecasted those borrowers' future incomes and scheduled IDR plan payment amounts, as well as forecasted events that would lead to non-payment, such as default, death or disability, prepayment of loans through consolidation, or forgiveness of loans through the Public Service Loan Forgiveness (PSLF) program. To get further clarification on the documentation, data, and computer programs provided, we interviewed Education officials who created and manage the IDR plan repayment model and the overall student loan model, which is used to calculate subsidy costs for all Direct Loans. We assessed the IDR plan repayment model's major assumptions for reasonableness and evaluated them against federal guidance for estimating subsidy costs developed by the Federal Accounting Standards Advisory Board. We evaluated methods used in the model, particularly Education's approach to forecasting borrower incomes, against this guidance and accepted practices in statistics and the social sciences. We also assessed whether the model appropriately replicated IDR plan program rules. Finally, we conducted an in-depth review of the Treasury-created estimated historical income data used in Education's approach. We assessed the reasonableness of the data by conducting electronic testing and producing summary statistics, which we asked Treasury to compare to the actual taxpayer data upon which its estimates were based. We reviewed related documentation from Treasury about the estimation process, and interviewed Treasury officials to clarify factual details and obtain their views on the process. (See appendix III for more information on our review of these data and Education's subsequent forecasting approach.) Based on our detailed review of the assumptions, methods, and data used in the IDR plan repayment model, we identified two separate areas for testing the sensitivity of Education's IDR plan cost estimates to changes in assumptions. 
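The electronic testing and summary statistics described above can be illustrated with a minimal sketch. The record layout, sample values, and plausibility thresholds below are hypothetical, not Treasury's actual file format:

```python
# Sketch of basic data-reliability testing on an income file: compute summary
# statistics for comparison against the source data, and flag out-of-range
# values. All records and thresholds here are hypothetical.

records = [
    {"borrower_id": 1, "year": 2010, "income": 32000},
    {"borrower_id": 2, "year": 2010, "income": 58000},
    {"borrower_id": 3, "year": 2010, "income": -500},   # invalid: negative income
    {"borrower_id": 4, "year": 2010, "income": 41000},
]

def summarize(values):
    """Summary statistics to compare against the underlying taxpayer data."""
    n = len(values)
    return {"n": n, "min": min(values), "max": max(values), "mean": sum(values) / n}

def range_check(recs, low=0, high=10_000_000):
    """Return records whose income falls outside a plausible range."""
    return [r for r in recs if not (low <= r["income"] <= high)]

stats = summarize([r["income"] for r in records])
flagged = range_check(records)
```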
First, we tested the effects of inflation on income projections and poverty guidelines, both of which are used to estimate borrower payment amounts. We adjusted borrower incomes and poverty guidelines for inflation due to the exclusion of inflation from Education's current model and the results of a prior Education analysis showing that cost estimates were sensitive to changes in borrower incomes. Second, we tested Education's assumption about PSLF participation and the year borrowers would be first eligible for forgiveness under the program. We focused on PSLF participation because actual participation is not yet known for this program and Education assumed that any borrower it estimated to be eligible for PSLF would choose to participate. We carried out each sensitivity analysis by rewriting relevant portions of the existing SAS computer programs that Education developed to implement the IDR plan repayment model. To conduct these analyses, we first produced baseline cash flow estimates using the existing programs we received from Education. We sought to produce baseline estimates that were identical to those from Education's existing model. The baseline replication ensured that the new model assumptions, rather than different versions of programs or input data, were solely responsible for any changes in the estimates. The replication process included selecting random samples of the data files and using the SAS Compare procedure to detect any differences in observations and variables. We interviewed Education officials to confirm the sequence and versions of programs and to establish our final baseline file. After producing baseline estimates, we wrote two new sets of SAS program files to implement each new assumption and produce new cash flow output for each analysis. 
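The baseline replication step described above amounts to a record-by-record comparison of two model outputs, analogous to the SAS Compare procedure. The sketch below is a hypothetical pure-Python stand-in, with cash flows keyed by (borrower, fiscal year):

```python
# Before attributing any change in estimates to a new assumption, verify that
# a re-run of the unchanged programs reproduces the baseline output exactly.
# Keys and values here are hypothetical (borrower_id, fiscal_year) -> cash flow.

def compare_outputs(baseline, rerun, tolerance=0.0):
    """Return (key, baseline_value, rerun_value) for every mismatch, including
    records present in only one output."""
    diffs = []
    for key in sorted(set(baseline) | set(rerun)):
        b, r = baseline.get(key), rerun.get(key)
        if b is None or r is None or abs(b - r) > tolerance:
            diffs.append((key, b, r))
    return diffs

baseline = {(1, 2015): 120.0, (1, 2016): 118.5, (2, 2015): 95.0}
identical_rerun = dict(baseline)
drifted_rerun = {(1, 2015): 120.0, (1, 2016): 118.5, (2, 2015): 97.0}

clean = compare_outputs(baseline, identical_rerun)   # expect no differences
mismatches = compare_outputs(baseline, drifted_rerun)
```

Only after the comparison comes back empty can a subsequent difference in cash flows be attributed to the revised assumption rather than to a versioning or data problem.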
The final output data, which we sent to Education to produce subsidy rates, consisted of cash flows summed across all borrowers in repayment within each fiscal year and within loan population type (non-consolidated loans, loans consolidated from default, and loans consolidated not from default). We provided this output to Education officials, who uploaded and ran the new estimates through the larger student loan model. Education officials provided revised subsidy rates for each loan type and origination cohort, reflecting the new IDR plan cash flows under our alternative assumptions. For each sensitivity analysis, we compared the baseline and revised IDR cash flows and subsidy cost estimates and calculated the percent change. We tested Education's assumptions regarding borrower participation and the first year that borrowers are eligible for PSLF. Education estimates borrower eligibility for PSLF using survey data that may not be representative of borrowers in newer IDR plans. In addition, Education assumes that 100 percent of borrowers who are estimated to be eligible for PSLF will choose to participate after making 120 payments in a qualifying repayment plan. Lastly, Education assumes that no borrower will become eligible to benefit from PSLF until a year after the program is scheduled to begin. To assess the impact of altering these three assumptions, we increased and decreased the estimated percentage of borrowers eligible and participating in PSLF by 10 and 5 percentage points, and moved up the PSLF start date by a full year.

Adjusting Projected Incomes and Poverty Guidelines for Inflation

We tested the extent to which cost estimates were sensitive to adjusting incomes and poverty guidelines for inflation for future years after 2013. As described in appendix II, Education forecasts borrowers' incomes by substituting the historical incomes of borrowers with similar characteristics, but does not adjust these projected incomes for inflation. 
Education also uses 2013 poverty guideline data for future years, with no inflation adjustment. To implement this adjustment, we obtained inflation factors from OMB for all future repayment years, and inflated Education’s forecasted borrower incomes and poverty guidelines into the appropriate year’s dollar units. Specifically, we applied adjustment factors to the 2013 dollar amounts to inflate them to each future year’s dollar units. We then applied the existing repayment model using the inflated incomes and poverty guidelines as input, without altering any additional model assumptions or calculations. To calculate expected forgiveness amounts for loans entering repayment in fiscal years 1995 through 2017, we analyzed cash flow data from Education, which provided detailed information on the amount of loan principal expected to be paid and not repaid. First, we determined the overall amount of loan principal in IDR plans estimated not to be repaid for any reason, as Education recommended. We did this by subtracting the amount of principal expected to be repaid from the total volume of loans disbursed to borrowers. The remaining amount represented loan principal estimated not to be repaid. We then subtracted the amount of loan principal estimated to be discharged due to a borrower’s death or disability. We attributed the remaining balance of unpaid principal to loan forgiveness under IDR plans and PSLF. Because Education expects to recover all defaulted loan principal through the collections process, loan defaults did not contribute to total non-payment of loan principal. We assessed Education’s quality control practices by reviewing relevant documentation and interviewing officials in the office responsible for developing and managing the estimates. We evaluated Education’s practices against federal guidance related to estimating subsidy costs and standards for internal control in the federal government. 
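The forgiveness decomposition described above is straightforward arithmetic, and can be sketched as follows. The dollar amounts are hypothetical placeholders (in billions), not figures from Education's cash flow data:

```python
# The decomposition described above: forgiveness is what remains after
# subtracting expected repayments and death/disability discharges from total
# disbursements. All dollar amounts here are hypothetical (billions).

def expected_forgiveness(disbursed, principal_repaid, death_disability_discharge):
    """Unpaid principal not explained by discharge is attributed to IDR/PSLF
    forgiveness. Defaults are excluded because Education expects to recover
    defaulted principal through the collections process."""
    unpaid = disbursed - principal_repaid
    return unpaid - death_disability_discharge

forgiven = expected_forgiveness(disbursed=352.0, principal_repaid=238.0,
                                death_disability_discharge=6.0)
```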
We also assessed Education’s information sharing against standards for internal control in the federal government, and Education’s strategic plan. We conducted this performance audit from March 2015 to November 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Based on our review of Education’s computer programs, model documentation, and interviews with agency officials, we confirmed that Education estimates subsidy costs for loans in Income-Driven Repayment (IDR) plans in the following way. First, Education estimates how many loan dollars will enter IDR plans from each loan cohort. Second, Education estimates repayment patterns for those loans over time. It performs this first task within its larger student loan model that calculates cash flows for cohorts of loans and incorporates various assumptions about the future. Education addresses the second task inside a supplementary microsimulation model for estimating IDR plan repayment patterns—referred to as the IDR plan repayment model in this appendix— that was designed to predict the repayment behavior of individual borrowers from a sample of borrowers with loans in IDR plans. Through interviews, Education officials stated that they combine the resulting pieces of information in their larger student loan model to generate subsidy cost estimates. Education estimates the percentage of loans in each cohort that will enter each repayment plan—Standard, Extended, Graduated, and IDR—inside its student loan model. 
According to Education’s model documentation and follow-up information from agency officials, Education based its cost estimates reported in the President’s fiscal year 2017 budget on a random sample of loans drawn from its National Student Loan Data System in January 2015. For loans issued after September 2014, Education applied repayment plan participation rates from a past cohort. For Consolidation loans, Education used 2014 cohort data because borrowers generally begin repaying those loans immediately. For non- consolidated loans, which generally do not enter repayment for several years while borrowers are in school, Education used participation rates from the 2011 cohort. IDR plan participation rates for the 2015 through 2017 cohorts were adjusted upward in comparison to the 2014 cohort to account for expanded eligibility for two newer IDR plans (Pay As You Earn and Revised Pay As You Earn). For the fiscal year 2017 budget, this upward adjustment ranged from 1.4 to 6.2 percent, depending on the cohort and type of loan. Education officials stated that they then apply the percentage of loans assumed to enter IDR plans to the total dollar value of loans originated (or loan volume) in each loan cohort. Education uses a separate IDR plan repayment model to forecast cash flows, which we refer to as repayment patterns, of loans in IDR plans based on a sample of borrowers with loans in repayment as of September 2013. This random sample of borrowers was also drawn from the National Student Loan Database and reflected all loan activity through the end of fiscal year 2013, including but not limited to, the amount borrowed, loan type, and repayment plan. From this sample, Education selected all borrowers who had already begun repaying their loans under an IDR plan by September 2013. 
For the purpose of modeling future loan cohorts, Education assumes all borrowers entering repayment after September 2013 will have the same characteristics as borrowers who entered repayment in 2013. Education then estimates how much each of these borrowers will owe on their loans over a 31-year span, based on the borrower’s estimated adjusted gross income (income) and family size for each year in the repayment period and the rules of the IDR plan selected. Education used a two-step process to forecast borrower income and family size: 1. To forecast borrowers’ future incomes, Education first worked with Treasury to estimate past incomes, filing status, and family sizes of the sample of borrowers who had entered repayment by the end of fiscal year 2013. Treasury developed these estimates because Education does not collect tax data on all borrowers. Treasury collects this information for all U.S. tax filers, but did not share actual data from these borrowers’ tax filings due to privacy restrictions. Instead, it created a tax file that contained substituted, or “imputed,” information based on borrower characteristics including age, gender, loan balance, dependency status and family income. Treasury first estimated if a borrower would file taxes in a given year. For each borrower estimated to file taxes, it then imputed estimated nominal incomes and number of tax exemptions (approximating family size) for each of the borrower’s repayment years that occurred in tax years 1996 through 2013. For example, borrowers who entered repayment in 1996 would have 18 years of imputed incomes while borrowers entering repayment in 2000 would have 14 years. (See Evaluating Income Data Used in Education’s Approach in appendix III for more information on Treasury’s methodology to estimate borrower incomes and our assessment of error associated with its approach). 2. 
To forecast future incomes of each borrower in its sample from 2014 through the end of the borrower’s repayment period (up to 31 years in the future), Education first converted the estimated historical income data from Treasury from calendar years into “repayment years.” A borrower who began repaying his loan in calendar year 2000 would have estimated historical income data covering repayment years 1 through 14 (formerly calendar years 2000 through 2013). To forecast that borrower’s future income in repayment year 15, Education first matched the borrower with a set of borrowers with similar characteristics. Education then randomly selected a borrower from this matched set of borrowers and substituted the nominal historical income observation from the same repayment year. It repeated this step for each subsequent year of the borrower’s maximum repayment period, choosing a different borrower’s nominal historical income observation in each year. Because Education matched the borrowers in their file with Treasury’s based on their repayment year (as opposed to calendar year), the nominal historical income values used in the forecasts could come from various non-sequential calendar years. (See Evaluating Income Data Used in Education’s Approach in appendix III for more information on Education’s methodology and the error associated with its approach). Once Education forecasted incomes and family size for each borrower in the sample’s entire repayment period, Education then applied the rules of the borrower’s selected IDR plan to calculate the amount the borrower would owe over time. For instance, a borrower in the New Income-Based Repayment plan would pay 10 percent of her discretionary income for 20 years, whereas a borrower in the Income-Contingent Repayment plan would pay 20 percent of her discretionary income for 25 years. 
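The substitution step described above, in which each future repayment year's income is filled in from a different randomly selected donor among similar borrowers, can be sketched as follows. This is a simplified illustration under assumed data structures, not Education's actual model code:

```python
import random

def forecast_incomes(target, donors, first_missing_year, max_years):
    """Fill in a borrower's unobserved repayment years by borrowing
    nominal income observations from similar borrowers: for each future
    repayment year, draw one donor from the matched set and substitute
    that donor's income for the same repayment year. Data structures
    ("cell" for the matching characteristics, "incomes" keyed by
    repayment year) are hypothetical."""
    forecast = dict(target["incomes"])  # repayment_year -> nominal income
    for year in range(first_missing_year, max_years + 1):
        # Donor pool: borrowers with similar characteristics who have an
        # observed income for this repayment year; a different donor may
        # be chosen in each year, so forecasts can mix calendar years.
        pool = [d for d in donors
                if d["cell"] == target["cell"] and year in d["incomes"]]
        donor = random.choice(pool)
        forecast[year] = donor["incomes"][year]
    return forecast
```

A borrower with 14 observed repayment years would have years 15 through 31 filled in this way, one independently chosen donor per year.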
For borrowers in the file who had not yet selected an IDR plan as of September 2013 (i.e., those estimated to enter repayment in 2014 or later), Education selects a plan for them based on assumptions about borrower behavior. For borrowers in the Income-Contingent Repayment plan, the IDR plan repayment model annually reevaluates whether they will switch into the Income-Based Repayment plan, and does so if the borrower’s monthly payment amount would be lowered by at least $50 by switching into the Income-Based Repayment plan. The IDR plan repayment model switches a borrower into the Revised Pay As You Earn plan if that borrower is not eligible for the Pay As You Earn or New Income-Based Repayment plans and if the borrower saw his or her payment fall by $50 compared to what would be owed under the Income-Based Repayment plan. Borrowers were assumed to choose Revised Pay As You Earn if the present value of the payments with Revised Pay As You Earn was lower than the present value of payments without Revised Pay As You Earn using a 30 percent discount rate. Education used a high discount rate because borrowers would likely place much less weight on the higher payments that would be likely to occur toward the end of the repayment period. Borrowers already in IDR repayment were assumed to choose Revised Pay As You Earn in the first year their payments fell by $50 a month or more. Borrowers who had not yet chosen an IDR plan were assumed to choose Revised Pay As You Earn if their payments would be lower in the first year. Borrowers stay in an IDR plan for their entire repayment period, even if their income rises beyond the point at which they would qualify if they were applying in that year, in order to calculate possible loan forgiveness amounts. 
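The present-value comparison behind the assumed Revised Pay As You Earn choice can be illustrated as below. The functions and payment streams are hypothetical; only the 30 percent discount rate comes from the description above:

```python
def present_value(payments, rate):
    """Discount a stream of annual payments (year 1 onward) to year 0."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(payments, start=1))

def chooses_repaye(payments_with, payments_without, rate=0.30):
    """Stylized version of the assumed switching rule: the borrower takes
    Revised Pay As You Earn if its discounted payment stream is cheaper.
    The high 30 percent rate heavily down-weights payments that occur
    late in the repayment period."""
    return present_value(payments_with, rate) < present_value(payments_without, rate)
```

At a 30 percent rate, a stream of lower early payments and higher late payments is "cheaper" than a level stream with the same undiscounted total, which is why the high rate favors plans that back-load payments.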
The IDR plan repayment model also includes predictions about when borrowers will delay repaying their loans (through deferment and forbearance); when they will fail to repay their loans (due to default, death, and disability); when they will prepay their loans through consolidation; and when their loan balances will be forgiven due to participation in the Public Service Loan Forgiveness program or at the end of their IDR plan’s full repayment term. The IDR plan repayment model’s final output consists of cash flows received (broken out by principal and interest) and foregone (such as through default, death, and disability) for each of the 31 years of repayment. These cash flows are summed across all borrowers who enter repayment in the same year for three different groups: (1) borrowers with Subsidized Stafford, Unsubsidized Stafford, and Graduate PLUS loans, (2) borrowers who defaulted and then consolidated their loans, and (3) borrowers who consolidated their loans without defaulting. Cash flows from the IDR plan repayment model are then exported to the larger student loan model. According to Education officials, the student loan model allocates these cash flows, which are organized by the year in which loans enter repayment, back to the appropriate loan origination cohorts using an assumption about the rate at which loans originated in a given year will enter repayment. Education assumes that all loans being repaid in IDR plans in a particular loan origination cohort will have the same cash flow patterns as loans in the sample used in the IDR plan repayment model. The student loan model discounts estimated cash flows to present value using the Office of Management and Budget’s credit subsidy calculator tool to determine the subsidy cost. The subsidy rate is determined by taking the ratio of the subsidy cost to the volume of loan obligations estimated to be made in that year. 
The U.S. Department of Education’s (Education’s) supplementary model for estimating Income-Driven Repayment (IDR) plan repayment patterns—referred to as the IDR plan repayment model—blends statistical analysis and assumptions about the future behavior of borrowers. Education uses data on borrowers’ historical incomes to estimate their incomes in future repayment years. According to Education staff, the previous version of the agency’s IDR plan repayment model used data from the Current Population Survey, a general population sample survey administered by the U.S. Census Bureau, which the agency matched to a sample of borrower data. Education staff told us they searched for a different source of income data beginning in 2013, due to the relatively short 2-year Current Population Survey panel length. The short panel required Education to combine incomes from different individuals and Current Population Survey samples to project over enough repayment years. For Education’s current IDR plan repayment model, developed in 2015, agency staff sought income and other data from federal income tax returns, as collected by the U.S. Department of the Treasury (Treasury). Taxpayer data offered the potential for more accurate data than matched Current Population Survey data, which only covered a sample of borrowers for a relatively short 2-year period. (According to Education officials, the Current Population Survey does not contain data on student borrowing, so the prior model had to assume that borrowers and non-borrowers had the same income patterns.) Despite the expected benefits of using actual taxpayer data, Treasury staff indicated that rules concerning taxpayer privacy prevented them from providing data on actual borrower incomes directly. Education staff said they initially hoped to receive data from Treasury that matched borrowers’ actual incomes as closely as possible, perhaps with a random distortion to protect taxpayer privacy. 
Staff mentioned Education’s National Postsecondary Student Aid Study restricted-use file as an example of a similar dataset. However, Treasury’s chosen approach involved imputing borrower income categories. Education staff then requested that Treasury convert these categorical values into dollar-scaled incomes for use in Education’s IDR plan repayment model. Based on our review of documentation from Treasury and Education, to assemble the data prior to imputation, Treasury matched a sample of borrower data that Education drew from the National Student Loan Data System containing loan information from September 2013 to their tax return data for filing years 1996 through 2013. Treasury assumed that borrowers did not file returns if they did not have matching tax return data for a given year. Key tax variables matched to the file for loan modeling purposes included adjusted gross income, number of exemptions, and filing status, among others. The final matched dataset included observations for approximately 1.3 million borrowers in each of 18 tax years. After matching the files, Treasury used a data mining algorithm, known as “graphical models,” to create an imputed version of the matched data. According to Education staff, they asked Treasury to provide an imputed dataset that resembled the actual data as closely as possible, for all of the tax variables joined to their borrower records. Education staff said they expected incomes to be accurate within categories but with random distortion to preserve taxpayer privacy. Treasury staff told us that they lacked the time and prior experience with Education’s data to have a pre-existing model to meet these specifications. Instead, Treasury used graphical methods to automatically identify a model that best fit the joint distribution of the data across several variables and allowed for the simulation of new imputed data. 
Treasury staff said that this approach was simpler than what they might have done given more time, but it is unclear whether greater complexity in the model would have yielded better results. Based on our review of documentation and interviews with Treasury staff, Treasury’s exact method of imputation had several steps. First, Treasury rescaled all variables from their natural scales into discrete categories, a step that primarily affected borrower incomes, which are naturally measured in continuous nominal dollars. Using categories of incomes rather than the continuous dollar scale allowed Treasury to develop a model using graphical methods that required less computing power. Second, Treasury used graphical methods to identify the relationships (or dependencies) among the borrower and tax variables in the matched data, in the form of multivariate crosstabulations. The model first estimated the probability that a borrower would file a tax return in a given year, and then modeled the joint distribution of the data, given that the first-stage model estimated that a borrower would file a return. After estimating these crosstabulations, Treasury created a single imputed dataset by drawing random variates from the fitted joint distribution of the data, in order to replace records in the actual dataset. When imputing incomes, Treasury staff told us they took an additional step to transform the imputed income variables from a categorical to continuous scale by drawing random dollar values from probability distributions. For borrowers with incomes imputed in the lowest and highest categories, Treasury simulated continuous incomes by drawing from normal or log-normal distributions with moments set to their sample values. For borrowers in all other categories, Treasury drew independent random variates from uniform distributions with support on the range of each imputed category. 
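Treasury's secondary category-to-dollar step can be sketched as follows. This is our simplified reconstruction: the distributional families match the description above, but the category bounds and tail parameters are hypothetical:

```python
import math
import random

def draw_dollar_income(category, bounds, tail_params):
    """Illustrative re-creation of the secondary step: interior income
    categories get a uniform draw over the category's dollar range, while
    the open-ended bottom and top categories get draws from normal and
    log-normal distributions. Parameter values here are hypothetical."""
    lo, hi = bounds[category]
    if lo is not None and hi is not None:
        return random.uniform(lo, hi)            # interior category
    if lo is None:
        mu, sigma = tail_params["bottom"]        # open-ended lowest category
        return random.gauss(mu, sigma)           # normal tail
    mu, sigma = tail_params["top"]               # open-ended highest category
    return math.exp(random.gauss(mu, sigma))     # log-normal tail
```

Because each draw is independent of the borrower's draws in other years, this kind of within-category simulation adds approximation error on the dollar scale even when the categorical imputation fits well.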
According to Treasury staff, they constrained the imputation model to replicate some of the longitudinal structure in the tax data over time within borrowers. The model imputed a borrower’s income in the current year based on the borrower’s one-year lagged income, which generally ensures that the imputed data recreates the correlation between incomes in adjacent years (i.e., the first-order autocorrelation). In addition, Treasury staff said that the imputation variables were stratified by year, in order to allow the conditional distributions to vary over time. Any imputed data will have imputation error, the amount of which depends on the predictive power of the model or method used to create them. According to statistical theory, imputation models can produce imputed datasets that are systematically biased, in the sense that the imputed distribution of the data does not resemble the actual distribution across many imputations. Imputation can also produce imputed datasets that are unbiased but have a high degree of imputation error. (More specifically, the variance of a model’s posterior predictive distribution can be large.) In these cases, the imputed distribution of the data will resemble the actual distribution across many imputations, but the imputed distribution in any one sample of imputed data, often one random simulation, may be quite different. Measuring error can involve calculating confidence intervals. A larger confidence interval relative to the estimate would suggest imputed data that are more prone to error. A user of imputed data typically would consider the size of a confidence interval as one criterion when assessing whether imputed data are sufficiently precise for a specific application, along with the imputation model’s potential bias and the user’s tolerance for error. These two types of imputation error can affect the analysis of imputed data. 
Ordinary methods of statistical analysis generally assume that variables (like borrower incomes) are measured without error—an assumption that is clearly not valid when analyzing imputed data. Analyses that do not account for imputation error can produce estimates that are biased or more or less precise than ordinary statistical theory would imply, depending on the nature of the analysis. To address these features of imputed data, generally accepted statistical practices suggest a number of methods for the analysis of imputed data. One common method uses “multiple imputation” to impute the data several times, producing “implicate datasets.” Implicates are randomly generated copies of the imputed data, produced by the same imputation model. The imputed data will vary randomly across implicates, depending on the nature and precision of the imputation method, because most imputation models have a partially random or probabilistic structure. By assessing the degree to which analytical results vary across implicates, analysts can incorporate the error of imputation when estimating the error of their estimates more generally. Imputation error “propagates” into other measures of precision, such as sampling error. As an alternative or complement to assessing imputation error directly, the statistical literature recommends that analysts use imputed data as preliminary information, prior to replicating their analyses using actual data. This approach applies to situations in which an analyst may not access the underlying data, but can provide computer code or algorithms to another analyst who may access the actual data and replicate the work. Education staff told us they did not request information that would have allowed them to assess how imputation error would have affected Education’s cost estimates, nor did they provide their computer code to a Treasury analyst who had access to the data to replicate their work. 
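Under multiple imputation, results are combined across implicates using Rubin's rules, which can be sketched as follows (the function and variable names are ours):

```python
from statistics import mean, variance

def combine_implicates(estimates, within_variances):
    """Rubin's combining rules for multiply imputed data: the overall
    estimate is the mean across the m implicate datasets, and the total
    variance adds the average within-implicate variance to the
    between-implicate variance (inflated by 1 + 1/m for a finite number
    of implicates), so imputation error propagates into the final
    standard error."""
    m = len(estimates)
    q_bar = mean(estimates)              # combined point estimate
    u_bar = mean(within_variances)       # average within-imputation variance
    b = variance(estimates)              # between-imputation variance
    return q_bar, u_bar + (1 + 1 / m) * b
```

The between-implicate variance term is what carries imputation error into the final standard error; analyzing a single implicate as if it were actual data drops that term entirely.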
Education staff conducted an informal assessment of the quality of the borrower income data by reviewing correlations between incomes and key factors like education and borrowing levels. They did not use more formal methods to assess and address imputation error in their estimates, such as those discussed above. In addition, Education staff did not thoroughly document and evaluate the imputation methods that Treasury used, nor did they request evidence of an adequate model fit. Instead, according to Education staff, Treasury provided Education with limited documentation which included a broad overview of the imputation approach. Treasury developed detailed documentation of the imputation model and error at our request, several months after Education had accepted the final imputed data. Treasury staff reported having computing and other resource constraints that affected their choice of models and methods. These constraints would have affected Treasury’s capacity to run their model multiple times and produce multiple implicate datasets. Through our review of summary documentation and limited descriptive and graphical analysis from Treasury, we found indications of imputation error that Education may not find acceptable for its purposes. This error relates to the imputation of incomes in the lowest and highest categories as well as the longitudinal structure of borrower incomes over time. This error warrants further evaluation by Education, given that the agency sought income data that would resemble borrowers’ actual, historical incomes as closely as possible, including accurate longitudinal profiles of incomes. 
Accurate longitudinal profiles of income are important, because Education’s IDR plan repayment model includes a number of calculations at the borrower level, such as specific payment amounts and borrowers’ eligibility for specific IDR plans, which use the sequence of data within borrowers over time as inputs and are re-calculated each year over the repayment period. Treasury staff told us they did not seek to impute incomes on the dollar scale. Rather, staff imputed income categories, and then evaluated model fit using the categorical distributions of the imputed and actual income data. After developing and validating the categorical imputation model, Treasury provided a simple transformation of the income categories into dollar values, using the secondary imputation methods we describe above. Treasury staff described this aspect of the imputation as a practical solution to meet needs that Education clarified after Treasury had developed the imputation method and its goals. Because dollar-scaled incomes were not originally specified, Treasury staff told us they did not assess the fit of the imputed dollar-scaled incomes to the joint distribution of the data. Treasury’s comparison of the imputed and actual income data indicates the imputed categorical data generally resembled the actual data, but its secondary step to produce dollar-scaled data introduced additional error, particularly for observations in the highest and lowest income categories. Treasury provided us with tabulations and plots of the imputed and actual data, along with predictive p-values. The frequency statistics showed that the marginal and joint distributions of key variables were generally similar in the imputed and actual data for the categorical data, but that the secondary imputation of dollar-scaled incomes produced additional error for borrowers in the lowest and highest income categories. 
As the Treasury-provided summary statistics in table 1 show, the mean actual income was about 2.1 times the mean imputed income among borrowers who earned $12,000 or less. The imputed mean was about 1.9 times larger than the actual mean for borrowers in the highest income category. Treasury officials agreed that imputation error may be greater in the lowest and highest categories, but speculated that the error may not be practically consequential for the calculation of income-based loan payments. However, because these data form the foundation of numerous individual-level sequential calculations that determine what borrowers are estimated to repay to the government, error associated with the data should be measured and its effect on budget estimates should be assessed. By design, Treasury’s imputation model ensured that the correlation between incomes in adjacent years was similar in the imputed and actual data. Despite this important constraint, the model did not seek to accurately impute complete, realistic profiles of dollar-scaled incomes over time within the same borrower for all observed years. Consequently, the imputed profiles of incomes over time within borrowers were not designed to ensure that they resemble those observed in the actual data at the individual level. Treasury staff confirmed this feature of the imputed data. Our limited exploratory analysis of the imputed dollar-scaled income data revealed patterns consistent with these features of the imputation. Incomes were less strongly correlated between adjacent years in the imputed data than in the actual data, based on statistics that Treasury staff provided. Specifically, let Var(AGIt | AGIt-1, AGIt-2, … , AGIt-k) denote the variance of incomes at time t, conditional on the k previous values. 
The limited evidence available to us suggests that the estimated conditional variance in the imputed data may exceed the actual variance in the population of student loan borrowers. The Pearson correlation between incomes in the current and previous year, truncated to the interval of positive incomes less than $1 million, was 0.84 in the actual data and 0.58 in the imputed data. In other words, one measure of the year-to-year instability of incomes was about 44 percent larger in the imputed data than in the actual data. In addition, incomes changed in absolute value between adjacent years by 52 percent in the actual data but 75 percent in the imputed data. Consistent with these aggregate statistics, figure 19 shows how imputed incomes vary by large amounts from year to year within the same 10 randomly selected borrowers presented previously in the report, this time with their tax filing status indicated. Figure 20 illustrates how imputed incomes vary by large amounts from year to year within the same 60 randomly selected borrowers. The secondary imputation of incomes in dollars may explain the patterns above. Simulating dollar-scaled incomes from a set of uniform, normal, or log-normal distributions would have added some amount of approximation error, potentially inflating the conditional variance described above in the imputed data. The degree of error would depend on how strongly the actual income distribution within each category diverged from the assumed distribution (e.g., its nonlinearity when simulated as uniform). Consistent with this explanation, Treasury staff reported nearly identical Pearson correlations between current and 1-year lagged categorical incomes in the imputed and actual data, at 0.69 and 0.67, respectively. 
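The two instability measures cited above, the adjacent-year Pearson correlation and the absolute year-over-year change, can be computed from income histories as sketched below. This is illustrative code on hypothetical data, not the statistics Treasury produced:

```python
from statistics import mean, median

def adjacent_year_stats(income_histories):
    """Pool adjacent-year income pairs across borrowers and compute
    (1) the Pearson correlation between current and previous-year income
    and (2) the median absolute year-over-year percentage change.
    Input: a list of per-borrower income sequences (hypothetical data)."""
    prev, cur, pct = [], [], []
    for series in income_histories:
        for a, b in zip(series, series[1:]):
            prev.append(a)
            cur.append(b)
            if a != 0:
                pct.append(abs(b - a) / abs(a))
    mx, my = mean(prev), mean(cur)
    num = sum((a - mx) * (b - my) for a, b in zip(prev, cur))
    den = (sum((a - mx) ** 2 for a in prev) *
           sum((b - my) ** 2 for b in cur)) ** 0.5
    return num / den, median(pct)
```

A lower correlation or a larger median change on the same underlying population indicates more year-to-year instability in the data being measured.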
The approximation error for continuous incomes may have compounded across years when Treasury staff independently simulated incomes for the same borrowers in adjacent years, without constraining the imputed distribution to preserve the potential dependency of incomes across second-order lags and higher. Treasury staff emphasized that adjusted gross income can be more volatile over time than other measures of income. Adjusted gross income includes gross wages, business income, and asset income, among other sources, as well as certain deductions and credits. According to Treasury staff, adjusted gross income can vary more substantially over time than other sources of income, such as wages, and such variation is common among upper-income filers. However, we found that the absolute value of imputed adjusted gross incomes varied between adjacent years by 15 to 77 percent for the middle 50 percent of the sample of borrowers with adjusted gross incomes above 0 and less than $400,000. The widespread nature of the volatility conflicts with an explanation that emphasizes volatile sources of income, deductions, and credits among borrowers with high incomes. We did not receive sufficient information to fully evaluate the nature and extent of imputation error in the Treasury data, and how it would affect Education’s IDR plan cost estimates. For instance, we did not receive Treasury’s computer code or the actual tax data. Instead, Treasury staff described the analysis in interviews and written briefing slides, as well as a 7-page summary of the analysis that they previously provided to Education. The correlations and proportional change statistics above are limited in their ability to fully describe complete profiles of incomes at the individual borrower level and their dependence over time, because they describe linear associations only between data from adjacent years. 
Additional analysis, with full access to the imputation model code and tax data and a thorough assessment of the longitudinal structure of incomes within borrowers over time, is necessary to confirm the imputation error suggested by the limited evidence we obtained from Treasury. The IDR plan repayment model uses the imputed data on borrower incomes and other characteristics to forecast these data for future repayment years that have not yet occurred. The model uses a different method of imputation, known as the “hot-deck,” to make these forecasts. Below, we describe this method in detail and evaluate it against generally accepted statistical practices. According to our review of the imputed data that Education received from Treasury, the data could span a variable portion of each borrower’s repayment history. Education received imputed data for tax years 1996 through 2013. For a borrower who entered repayment in 1996, this period would span the entire historical repayment period through 2013, but it would not cover future years when the borrower may still be in repayment. Conversely, for a borrower who entered repayment in 1986 and repaid all debt in 2000, the data would span the last four historical years but not the first 10. Many other types of overlap are possible. The repayment model uses these historical data and hot-deck methods to impute or forecast data for repayment years that have not yet occurred. The hot-deck is a general purpose method of imputation, which statistical organizations commonly use to impute missing survey data. For a set of records needing imputation, hot-deck methods use a set of covariates to identify one or more records in the data that have observed values on all variables and are similar to the record needing imputation. The method then substitutes the observed values for the values needing imputation, often using random selection among the donor records when multiple donor records are available. 
Once the repayment model forecasts data for unobserved repayment years, it treats them as known, observed data. The repayment model uses the forecasted data as input for the second stage of modeling, which applies various assumptions about how borrowers will repay their loans over time. The second stage modeling incorporates neither the error associated with Treasury’s imputation of the matched tax and loan data nor the error associated with Education’s hot-deck forecasting. As mentioned previously, any method of imputing or forecasting unknown observations will have error associated with its predictions. Although the nature of this error depends on the method and data, generally accepted statistical practices typically recommend quantifying the error and incorporating it into subsequent analysis of the predictions. Education’s method of forecasting borrower incomes does not quantify the error associated with the method or incorporate it into subsequent analyses. Education’s analysis seeks to forecast the values of several variables, most notably income, for a set of borrowers over up to 31 future repayment years. One could view this as either a longitudinal econometric modeling problem or a general purpose imputation of missing data. Using either approach, accepted statistical practice involves quantifying and propagating the error that is inherently associated with prediction. An econometric approach would use an explicit statistical model for how the forecasted variable depends on several other variables (or covariates). Additional assumptions would describe how the imputed variable varies over time, either through covariates (such as time indicators) or assumptions about the variable’s random fluctuation around its long-term mean (such as an error term with an autoregressive order 1 structure). These model assumptions provide explicit formulas to predict future values of the variable and to quantify the likely error of prediction. 
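As a minimal illustration of such an approach, an autoregressive order 1 (AR(1)) error structure yields closed-form point forecasts and prediction-error variances, so the error can be carried into later analyses instead of treating forecasts as known. The parameter values below are hypothetical:

```python
def ar1_forecast(last_value, mu, phi, sigma, horizon):
    """Sketch of an explicit forecasting model of the kind described in
    the text: under an AR(1) structure,
        y_t - mu = phi * (y_{t-1} - mu) + e_t,  e_t ~ N(0, sigma^2),
    the h-step-ahead point forecast and its prediction-error variance
    both have closed forms. All parameter values are hypothetical."""
    point = mu + (phi ** horizon) * (last_value - mu)
    # Forecast-error variance: sigma^2 * (1 + phi^2 + ... + phi^(2(h-1)))
    err_var = sigma ** 2 * sum(phi ** (2 * k) for k in range(horizon))
    return point, err_var
```

The growing variance at longer horizons is the quantity that a forecast-and-propagate workflow would feed into subsequent cost calculations, rather than assuming zero prediction error.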
The latter formulas for prediction error (or the posterior predictive distribution) can allow analysts to propagate the error of prediction into subsequent analyses of those predictions. An alternative approach would use more advanced methods of imputing missing data, such as multiple, maximum likelihood, or expectation-maximization imputation. These methods assume an explicit probability model for the joint distribution of the data, with parameters that can be estimated from the data. Analysts can use various methods for drawing from the fitted data distribution, in order to generate multiple implicate datasets, as discussed above. This allows analysts to quantify and propagate error across subsequent analyses of the imputed data. Education’s application of the hot-deck method does not follow these general statistical principles. The method imputes the future values of all unknown data, using donor cases that are similar on a set of covariates, such as gender and highest educational program level. After making predictions, the method does not quantify the prediction error of the estimates, using in-sample statistics such as mean-squared error, misclassification rates, deviance statistics, predictive p-values, or the estimated variance of the posterior predictive distribution. Once the repayment model forecasts the income data for future years, it assumes that the estimates have zero prediction error associated with them, or equivalently, that the error does not affect the repayment model’s loan cost estimates. Since any applied method of forecasting or imputation produces error, and Education’s IDR plan cost estimates are highly sensitive to changes in borrower income forecasts, it is important for Education to measure this error and determine its ultimate impact on IDR plan cost estimates. Moreover, the IDR repayment model uses source data that have their own unquantified imputation error from Treasury’s imputation. 
These two sources of error—Treasury’s imputation and Education’s forecast—may interact and combine in ways that further increase the bias and imprecision of Education’s loan cost estimates. The presence of multiple forms of error, at different stages of analysis, emphasizes the importance of propagating all sources of error through the entire analysis, or else eliminating imputation error from the imputed data by using actual observations. At a minimum, Education should acknowledge the presence of imputation error and identify how it might affect estimates from the repayment model. Such acknowledgements would provide more transparent information to users of its estimates, compared to point estimates that do not disclose the limitations of the source data. Statistical organizations accept the need for users of imputed data, such as Education, to quantify and assess the effects of imputation error, despite their release of public data that have been imputed. The U.S. Census Bureau warns users that methods of estimating sampling error will underestimate total error when data have been imputed. In recent years, the Census Bureau has generated imputed data for several data products, including the Survey of Income and Program Participation and the Longitudinal Business Database, but has warned that analyzing imputed data without necessary corrections may understate the variance of estimates. This guidance to data users is consistent with the criteria discussed above, which recommend quantifying and propagating imputation error, despite statistical agencies’ widespread use of imputed data in public data products. The following tables include a summary of available loan cohort data underlying the U.S. Department of Education’s (Education’s) submissions to the President’s fiscal year 2016 and fiscal year 2017 budgets. These tables are provided to illustrate how Education’s estimates of IDR plan costs shifted over the past two President’s budgets. 
Some of the differences are attributable to the change in Education’s methodology for estimating IDR plan costs, which was implemented for the President’s fiscal year 2017 budget and is described in this report. Other differences are due to the policy assumptions in place when the budgets were developed. Specifically, for the fiscal year 2016 budget, Education used provisional policies for its newest IDR plan that were under negotiation. Estimates prepared for both the President’s fiscal year 2016 and fiscal year 2017 budgets included legislative proposals affecting new borrowers. Finally, the fiscal year 2017 budget estimates include increased costs associated with the addition of the 2017 loan cohort, as well as updated reestimates of the costs of older cohorts. In addition to the contact named above, Kris Nguyen (Assistant Director), Ellen Phelps Ranen (Analyst-in-Charge), Rachel Beers, James Bennett, John Karikari, Karissa Robie, Amber Sinclair, and Jeff Tessin made key contributions to this report. Additional assistance was provided by Deborah Bland, Jessica Botsford, Russ Burnett, Marcia Carlsen, David Chrisinger, Cole Haase, Carol Henn, Susan J. Irving, Marissa Jones, Sheila McCoy, Erin McLaughlin, Jeffrey G. Miller, Andrew Nelson, Jason Palmer, Jessica Rider, Amrita Sen, Brian Schwartz, Michelle St. Pierre, Adam Wendel, Charlie Willson, and Rebecca Woiwode.
As of June 2016, 24 percent of Direct Loan borrowers repaying their loans (or 5.3 million borrowers) were doing so in IDR plans, compared to 10 percent in June 2013. Education expects these plans to result in costs to the government. GAO was asked to review Education's IDR plan budget estimates and estimation methodology. This report examines: (1) current IDR plan budget estimates and how those estimates have changed over time, and (2) the extent to which Education's approach to estimating costs and quality control practices help ensure reliable estimates. GAO analyzed published and unpublished budget data covering Direct Loans made from fiscal years 1995 through 2015 and estimated to be made in 2016 and 2017; analyzed and tested Education's computer code used to estimate IDR plan costs; reviewed documentation related to Education's estimation approach; and interviewed officials at Education and other federal agencies. For the fiscal year 2017 budget, the U.S. Department of Education (Education) estimates that all federally issued Direct Loans in Income-Driven Repayment (IDR) plans will have government costs of $74 billion, higher than previous budget estimates. IDR plans are designed to help ease student debt burden by setting loan payments as a percentage of borrower income, extending repayment periods from the standard 10 years to up to 25 years, and forgiving remaining balances at the end of that period. While actual costs cannot be known until borrowers repay their loans, GAO found that current IDR plan budget estimates are more than double what was originally expected for loans made in fiscal years 2009 through 2016 (the only years for which original estimates are available). This growth is largely due to the rising volume of loans in IDR plans. Education's approach to estimating IDR plan costs and quality control practices do not ensure reliable budget estimates. Weaknesses in this approach may cause costs to be over- or understated by billions of dollars.
For instance: Education assumes that borrowers' incomes will not grow with inflation even though federal guidelines for estimating loan costs state that estimates should account for relevant economic factors. GAO tested this assumption by incorporating inflation into income forecasts, and found that estimated costs fell by over $17 billion. Education also assumes no borrowers will switch into or out of IDR plans in the future despite participation growth that has led budget estimates to more than double from $25 billion to $53 billion for loans made in recent fiscal years. Predicting plan switching would be advisable per federal guidance on estimating loan costs. Education has begun developing a revised model with this capability, but this model is not complete and it is not yet clear when or how well it will reflect IDR plan participation trends. Insufficient quality controls contributed to issues GAO identified. For instance: Education tested only one assumption for reasonableness, and did so at the request of others, although such testing is recommended in federal guidance on estimating loan costs. Without further model testing, Education's estimates may be based on unreasonable assumptions. Due to growing IDR plan popularity, improving Education's estimation approach is especially important. Until that happens, IDR plan budget estimates will remain in question, and Congress's ability to make informed decisions may be affected. GAO is making six recommendations to Education to improve the quality of its IDR plan budget estimates. These include adjusting borrower income forecasts for inflation, completing planned model revisions and ensuring that they generate reasonable predictions of participation trends, and testing key assumptions. Education generally agreed with GAO's recommendations and noted actions it would take to address them.
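The inflation assumption GAO tested can be illustrated with a stylized calculation. All parameters below (a 10 percent payment share, a $40,000 income, a 2 percent inflation rate, a 25-year term) are illustrative assumptions, not Education's actual model inputs:

```python
def cumulative_payments(income0, years=25, share=0.10, growth=0.0):
    """Cumulative stylized IDR payments: a fixed share of income each
    year, with income optionally growing at an assumed inflation rate.
    Illustrative only; not Education's repayment model."""
    return sum(share * income0 * (1 + growth) ** t for t in range(years))

flat = cumulative_payments(40_000)                  # incomes held flat
indexed = cumulative_payments(40_000, growth=0.02)  # incomes grow 2%/year

# Flat incomes understate payments collected over the term, leaving a
# larger balance to be forgiven and thus a higher estimated cost.
print(round(flat), round(indexed))  # 100000 128121
```

In this toy example, indexing incomes to inflation raises cumulative payments by roughly 28 percent, which matches the direction (though not the magnitude) of the $17 billion effect GAO measured.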
During our review, we compared DFAS’ efforts to plan and manage its Year 2000 program to our Year 2000 assessment guide for evaluating the awareness and assessment phases. We also performed limited work in areas where DFAS had progressed to the renovation and validation (testing) phases. Specific audit work was performed at DFAS headquarters and three of the five DFAS centers—Cleveland, Denver, and Indianapolis—that, as of April 1997, maintained responsibility over about 83 percent (179 of 216) of the systems that are currently being tracked under the Year 2000 program. Of the 179 systems at these locations, we discussed with system and technical managers the status of 48 systems, about 27 percent of the total. We selected these systems because they cover the three categories of active systems—those to be replaced, those to be renovated, and those already compliant—that DFAS is tracking under its Year 2000 program. Additional factors included consideration of the system size, number of system interfaces, and whether systems were intended to replace existing legacy systems. Appendix I provides a list of the systems that we reviewed. At DFAS headquarters, we met with the Deputy Director for Information Management, who is responsible for the guidance and direction for the Year 2000 program, to discuss DFAS’ strategy for meeting the Year 2000 mandate. We also met with the Year 2000 project manager, who is responsible for the coordination, dissemination, and reporting for the DFAS Year 2000 program, to get an understanding of ongoing activities and requirements. To determine the status of DFAS’ contingency planning for automated systems, and its relationship to the Year 2000 program, we obtained DFAS’ Corporate Contingency Plan and discussed contingency provisions with officials from the DFAS Plans and Management Deputate and their associates at the DFAS Denver center. 
We also met with functional managers for six systems DFAS reported as being noncompliant that were the responsibility of the Information Management and Finance Deputates to determine the status of their efforts to achieve Year 2000 compliance. At the three DFAS centers, we met with individual center directors, who had been given responsibility for ensuring that the systems under their respective centers are Year 2000 compliant. Also, we met with Financial Systems Activity (FSA) Directors, who have technical responsibility for systems maintenance at those locations. We also met with each center’s Year 2000 point of contact, who is to disseminate Year 2000 information and coordinate and consolidate Year 2000 reporting at the center level. Finally, we met with functional and technical managers of 42 separate systems who are being held accountable for ensuring that the systems they are responsible for are Year 2000 compliant. Using the assessment guide, we reviewed the status of DFAS’ Year 2000 awareness by obtaining information on and discussing DFAS’ strategy and program management initiatives with top management. We also obtained Year 2000 guidance provided to headquarters and center-level staff, and discussed this guidance with DFAS management responsible for administering the Year 2000 program and ensuring the compliance of DFAS systems. To determine the extent of assessment and renovation activities being performed, we identified Year 2000 policies and procedures that had been issued, reviewed the status of systems inventories, systems priority processes, risk assessments and contingency planning, and reporting and oversight activities.
We interviewed functional and technical managers responsible for specific DFAS systems to discuss the status of their system assessments, the use of assessment tools, and the completeness and reliability of quarterly status report information such as identified milestones, adequacy of resources, interfaces and written agreements, and potential obstacles to meeting Year 2000 compliance. We also discussed with DFAS staff responsible for assessing DFAS’ hardware and systems software infrastructure, including its mid-level and communications processors, the status of their efforts to compile an inventory and plan for testing. We also spoke with several systems managers to obtain the status of DFAS’ efforts to test and validate commercial-off-the-shelf (COTS) applications. In evaluating the extent of DFAS’ validation phase activities, we met with selected technical managers for six systems classified as compliant to determine the extent of testing that had been performed for asserting compliance. In addition, we tracked the status of DFAS’ progress in actually replacing 18 systems that were scheduled to be replaced during a 3-month period. Our audit work was performed from August 1996 through May 1997 in accordance with generally accepted government auditing standards. The Department of Defense provided written comments on a draft of this report. These comments are discussed in the “Agency Comments and Our Evaluation” section and are reprinted in appendix III. DFAS is the accounting firm of the Department of Defense. It was established in January 1991 to strengthen DOD’s financial management operations by standardizing, consolidating, and streamlining finance and accounting policies, procedures, and systems. DFAS accounts for DOD’s worldwide operations with assets totaling well in excess of $1 trillion. Each year, DFAS pays nearly 4 million active military and civilian personnel and 2 million retirees and annuitants, and pays approximately 23 million contractor and vendor invoices.
Due to DFAS’ reliance on computer systems to carry out its operations, the Year 2000 issue has the potential to impact virtually every aspect of the DFAS accounting and finance mission. The majority of DFAS finance and accounting systems are 20 or more years old and are primarily written in the Common Business Oriented Language (COBOL) programming language. DFAS recognizes that millions of lines of code must be analyzed and rewritten in systems that will still be operational in the year 2000. The Year 2000 problem is rooted in the way dates are recorded and computed in automated information systems. For the past several decades, systems have typically used two digits to represent the year, such as “97” representing 1997, in order to conserve electronic data storage and reduce operating costs. With this two-digit format, however, the year 2000 is indistinguishable from 1900, or 2001 from 1901. As a result of this ambiguity, system or application programs that use dates to perform calculations, comparisons, or sorting could generate incorrect results when working with years after 1999. Although DFAS is responsible for the majority of DOD’s finance and accounting systems, DFAS is not responsible for all the systems that produce financial data. Systems that support other functional areas such as acquisition, medical, logistics, and personnel originate and process a significant amount of financial data that is ultimately reported on financial statements. These military service and Defense component systems provide financial data to DFAS through systems interfaces that DFAS needs to consider in addressing the Year 2000 problem. The systems that interface with DFAS systems are just as vulnerable to the Year 2000 problem as its own systems. Accordingly, DFAS’ ability to sustain operations in the Year 2000 time frame is dependent not only on its own systems, but also on a host of Defense component systems upon which it is largely reliant for accounting transaction data.
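The ambiguity is easy to reproduce. The following Python fragment illustrates the logic only; it is not the COBOL that DFAS systems actually run:

```python
# Legacy systems store only the last two digits of the year.
records = ["97", "98", "99", "00"]   # intended order: 1997, 1998, 1999, 2000

# Sorting by the stored value puts the year 2000 first, as if it were 1900.
print(sorted(records))               # ['00', '97', '98', '99']

# A date comparison that worked for decades silently inverts at the rollover:
print(int("00") > int("99"))         # False, even though 2000 > 1999
```

Calculations built on such comparisons, such as computing an age or the interval between two dates, can likewise return negative or otherwise incorrect results.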
In February 1997, we published the Year 2000 Computing Crisis: An Assessment Guide that addresses common issues affecting most federal agencies and presents a structured approach and a checklist to aid them in planning, managing, and evaluating their year 2000 programs. The guide describes five phases—supported by program and project management activities—with each phase representing a major year 2000 program activity or segment. The guidance draws heavily on the work of the Best Practices Subcommittee of the Interagency Year 2000 Committee, and incorporates guidance and practices identified by leading organizations in the information technology industry. The five phases are consistent with those prescribed by DOD in its Year 2000 Management Plan. The phases, and a description of each, follow:

Awareness—Define the Year 2000 problem and gain executive-level support and sponsorship. Establish a Year 2000 program team and develop an overall strategy. Ensure that everyone in the organization is fully aware of the issue.

Assessment—Assess the Year 2000 impact on the enterprise. Identify core business areas and processes, inventory and analyze systems supporting the core business areas, and prioritize their conversion or replacement. Develop contingency plans to handle data exchange issues, lack of data, and bad data. Identify and secure the necessary resources.

Renovation—Convert, replace, or eliminate selected platforms, applications, databases, and utilities. Modify interfaces.

Validation—Test, verify, and validate converted or replaced platforms, applications, databases, and utilities. Test the performance, functionality, and integration of converted or replaced platforms, applications, databases, utilities, and interfaces in an operational environment.

Implementation—Implement converted or replaced platforms, applications, databases, utilities, and interfaces. Implement data exchange contingency plans, if necessary.
In addition to following the five phases described, the Year 2000 program should also be planned and managed as a single large information system development effort. Agencies should promulgate and enforce good management practices on the program and project levels. According to DFAS officials, they have been working on the Year 2000 issue since 1991, although the Year 2000 program did not officially begin until March 1996. As of April 1997, DFAS was tracking 216 systems for Year 2000 purposes: 71 of the 216 were to be renovated to become Year 2000 compliant, 79 were expected to be replaced by migration or interim migration systems, and 66 were designated by DFAS as already compliant. Of the 71 systems DFAS expects to make compliant through renovation, 32 were reported as being in the assessment phase, 36 were in the renovation phase, one was in the implementation phase, and two systems did not show a phase. DFAS has estimated that it will cost $33.7 million to renovate its systems, which contain about 63 million lines of code, to meet Year 2000 requirements. DFAS has taken a number of positive steps to ensure that its personnel are fully aware of the impact should DFAS finance and accounting systems not be compliant at the turn of the century. During the awareness phase, DFAS developed a Year 2000 strategy that adopts the five-phased approach of awareness, assessment, renovation, validation, and implementation. The strategy, which is embedded in DFAS’ written executive plan, establishes accountability for Year 2000 systems compliance from DFAS headquarters management to individual system/program managers at the center level. The Deputy Director for Information Management, who serves as the DFAS Chief Information Officer (CIO), is responsible for managing the Year 2000 program, and overseeing efforts to ensure that all DFAS systems are Year 2000 compliant by December 31, 1998, and within existing funding.
As of April 1997, DFAS had taken the following actions as part of its efforts to address the Year 2000 problem: established a Year 2000 systems inventory; prepared cost estimates for systems to be renovated; instituted a quarterly Year 2000 status reporting process; appointed a project manager to provide Year 2000 guidance and track Year 2000 progress; and established a Year 2000 certification program that defines the conditions that must be met for automated systems to be considered Year 2000 compliant. DFAS has also performed and documented an analysis of personal computers and workstations, which covered the Year 2000 hardware problems, test procedures and results, and corrective actions. The results of the analysis were provided to DFAS centers for use in testing their hardware. As of April 1997, DFAS reported that over one-third of its nearly 20,000 personal computers had been tested, and that only about 1 percent were found to be noncompliant. The DFAS Deputy Director for Information Management expects to replace those personal computers that failed the Year 2000 test with compliant computers during the normal equipment upgrade cycles prior to the year 2000. In addition, DFAS has reported that it has entered its major accounting and financial information systems into DOD’s Defense Integration Support Tools (DIST) database. DIST is the database that DOD uses to track its information systems and it is intended to facilitate the Year 2000 effort through its identification of functional systems interfaces and data exchange requirements. DFAS has taken numerous positive actions during the Year 2000 awareness and assessment phases. However, DFAS is moving forward into renovation, testing, and validation—the more difficult and complex phases of Year 2000 correction—without fully addressing some critical steps associated with the assessment phase.
Specifically, DFAS has not (1) identified, in its Year 2000 plan, all critical tasks for achieving its objectives or established milestones for completing all tasks; (2) performed formal risk assessments of all systems to be renovated and ensured that contingency plans are in place in the event that renovations are not completed in time or if systems fail to operate properly; (3) identified all system interfaces, and only a fraction of the written interface agreements have been finalized with interface partners; and (4) adequately planned to ensure that testing resources are available when needed. DFAS’ risk in these areas is increased due to its dependence on the military services and other Defense agencies, such as DISA, to ensure that their systems and related operating environments are Year 2000 compliant. DFAS will need to work closely with these organizations to ensure that system renovations are performed, interface agreements are completed, and proper and timely test environments are provided. If its systems are not operational at the year 2000, DFAS’ ability to pay military and civilian personnel, retirees and annuitants, and Defense contractors and vendors could be severely impacted. In turn, DFAS’ ability to interact with other DOD components that both provide and use financial data could be jeopardized. The Year 2000 program is expected to be the largest and most complex system conversion effort undertaken by federal agencies. Due to the complexities and scope of the Year 2000 problem, it is critical that agencies develop comprehensive plans that establish schedules for all tasks and phases of the Year 2000 program, set reporting requirements, assign conversion or replacement projects to Year 2000 project teams, provide measures for assessing performance, and anticipate the need for risk assessments and contingency plans. DFAS has issued a high-level Year 2000 executive plan that sets forth its strategy and approach.
The plan covers four general areas that DFAS believes will ensure its ability to meet the Year 2000 challenge, as follows.

Ensure that DFAS personnel are aware of the Year 2000 problem by establishing Year 2000 points of contact at multiple organization levels, participating in the DOD Year 2000 Working Group, and distributing Year 2000 information.

Assess the impact of Year 2000 on DFAS by establishing a systems inventory, replacing systems rather than renovating systems unless impacted prior to replacement, and developing systems as Year 2000 compliant.

Ensure that DFAS systems are Year 2000 compliant and handle renovations through standard configuration management procedures.

Establish responsibility at various levels of the organization—program/system manager, center director and headquarters deputy director, and Deputy Director for Information Management—for achieving Year 2000 compliance.

Track Year 2000 progress by creating a quarterly consolidated progress report by system that contains information on each system’s Year 2000 efforts, such as target implementation date, interface information, and percentage of completion.

Our review of DFAS’ Year 2000 plan disclosed that the plan includes a number of positive actions that are consistent with our assessment guide and DOD’s management plan. For example, the plan assigns responsibility for achieving Year 2000 compliance, sets forth reporting requirements, and establishes an overall Year 2000 compliance date. However, the plan does not address all phases of the Year 2000 problem, specifically the actions that will be needed during the validation (testing) and implementation phases.
The plan also does not establish schedules for completing each phase of the Year 2000 program or milestones for meeting critical tasks under each phase, such as identifying system interfaces and securing interface agreements, preparing contingency plans, defining requirements for and establishing operational Year 2000 compliant test facilities, completing tests of personal computers and servers, or identifying performance measures for evaluating DFAS and center-level progress. DFAS officials have informed us that, while this step is not included in its Year 2000 plan, its system/program managers are now required to have all systems interfaces identified and written interface agreements completed by September 30, 1997, and all personal computers tested by December 31, 1997. Without comprehensive planning of the Year 2000 project, DFAS runs the risk that it will not have the information to make proper decisions or that necessary tasks will not be addressed in a timely manner. For example, it is important that DFAS establish time frames for completing specific tasks under the Year 2000 program that can be used by DFAS Year 2000 managers as indicators to gauge the progress of individual systems. Equally important is the need to formalize what DFAS managers expect to accomplish during each phase of the Year 2000 effort and within what time frames. For instance, if DFAS system managers have performed system renovations to become Year 2000 compliant, but planning was not conducted early in the process to ensure that adequate test resources or facilities would be available, DFAS runs the risk of systems failure if systems are left untested, or the loss of flexibility to pursue other alternatives before the year 2000. DFAS has initiated actions to require contingency planning for noncompliant systems that are at risk of not being replaced prior to impact of the year 2000. 
However, it has not extended its contingency planning to cover systems being renovated as Year 2000 compliant that may not operate at the turn of the century. Contingency plans are important because they identify the alternative activities, which may include manual and contract procedures, to be employed should critical systems fail to meet their Year 2000 deadlines. DOD’s Year 2000 Management Plan and our Year 2000 Assessment Guide call on agencies to initiate realistic contingency plans during the assessment phase for critical systems and activities to ensure the continuity of their core business processes. From an overall agency perspective, DFAS has a Corporate Contingency Plan, which was recently updated in May 1997, that establishes policies, programs, and procedures and assigns responsibilities for the contingency planning process. The plan discusses various possible threats to DFAS activities, but does not specifically address potential year 2000 system failures, nor does it require DFAS managers to include year 2000 failures as part of the updated plan. DFAS has adopted a strategy for making all systems impacted by the Year 2000 compliant that includes (1) replacing legacy systems with compliant migration or interim migration systems and (2) renovating systems expected to be operational on and after the year 2000. However, DFAS’ strategy has two inherent risks. First, because of delays in implementing some migration or interim migration systems, all legacy systems that are expected to be replaced may not be replaced prior to the year 2000. Second, systems being renovated to be compliant may not be completed as scheduled, and renovated systems and those systems DFAS believes to already be compliant, may not correctly operate at the turn of the century. 
Although DFAS has begun taking steps to address alternative actions if migration systems are delayed, DFAS’ overall Year 2000 strategy has not required managers to address alternative actions should systems not operate correctly at the turn of the century. If this latter risk is not addressed and various critical applications fail to operate properly near to or at the turn of the century because of Year 2000 problems, DFAS will encounter interruptions to its accounting and financial activities with no clear alternative actions to help ensure continuity of operations. DFAS is relying on the success of the DOD migration program to solve a significant portion of its Year 2000 problem. DFAS’ April 1997 quarterly status report indicates that about 80 existing legacy systems are to be replaced with 27 interim migration and migration systems, and several COTS systems. The DOD migration program, however, has a long history of problems, including missed milestones. Our current work has shown that many of DFAS’ legacy systems had not been replaced according to projected plans. Also, in some instances, replacement decisions had not been finalized because of concerns over incorporating the legacy system requirements into the migration or interim migration systems. To assess the likelihood that systems already scheduled to be replaced would be replaced as planned, we identified 18 systems from DFAS’ October 1996 Year 2000 quarterly report that were scheduled for replacement or termination by January 1997 (see appendix II for the number of systems not replaced or terminated by location). Of those 18 systems, we found that 11 had not been replaced or terminated as planned. While these systems could incur additional slippage and still be replaced before being impacted by the Year 2000 problem, DFAS’ ability to meet tight deadlines for replacing systems may well become more difficult as the need for technical staff and resources increase for Year 2000 activities. 
One example of this problem is the deployment of the Standard Accounting and Reporting System (STARS). STARS is a DFAS interim migration system intended to replace eight noncompliant legacy accounting systems before the year 2000. In September 1996, we reported that the STARS migration project had experienced a number of problems over the years, including incomplete planning, missed milestones, and budget overruns. One specific system that was to have been replaced by STARS in January 1997 is the Naval Civilian Engineering Laboratory Financial Management Data System (NCEL-FMDS). However, as of April 1997, DFAS reported that NCEL-FMDS would be replaced by a COTS system by August 1997. Another system—the U.S. Naval Academy Trust Fund Accounting System (NTFAS)—was to have been replaced by STARS as of December 1996. While the April 1997 DFAS Year 2000 quarterly status report shows that a date for replacing NTFAS with STARS had yet to be determined, recent discussions with DFAS personnel indicate that NTFAS had been terminated. At the time of our report, DFAS had not provided documentation to support this assertion. DFAS recognizes the concern that some migration and interim migration systems may be delayed, and has begun actions to renovate some legacy systems that were originally designated for replacement. DFAS has also required its systems managers, for all systems to be replaced, to assess and report the risk—using a high-, medium-, or low-risk designation—of their systems not being replaced prior to being impacted by Year 2000 problems. Systems managers for all systems to be replaced that were designated as having a high or medium risk are to have prepared plans identifying how they intend to fix the Year 2000 problem. Beginning in July 1997, DFAS reports that it intends to start monitoring the existence of these systems plans through its quarterly reporting process.
Further, DFAS also intends to begin tracking other information that should help it make decisions as to whether to renovate, or take other alternative actions, for additional legacy systems that are now expected to be replaced. For example, systems managers will be required to report the latest possible start date to initiate a system renovation that would still allow them to meet the compliance deadline. This requirement calls for having contingency plans in place for over 80 percent of the systems to be replaced. However, because of the uncertainty associated with the implementation of DFAS’ migration systems, potentially there are still gaps that may necessitate DFAS extending this requirement to all systems scheduled to be replaced that support critical operations or provide data to those systems. The year 2000 represents a great potential for operational failure to DFAS that could adversely impact its core business processes as well as those of entities that depend on DFAS for accounting and financial reporting. To mitigate this risk of failure, our assessment guide and DOD’s management plan suggest that agencies perform risk assessments and prepare realistic contingency plans that identify alternatives to ensure the continuity of core business processes in the event of a failure. These alternatives could include performing automated functions manually or using the processing services of contractors. While DFAS managers have begun preparing contingency plans for legacy systems that may not be replaced by compliant systems prior to the year 2000, the DFAS Year 2000 strategy does not require managers to assess risk, or plan for contingencies, if systems being renovated fail to operate at the year 2000. Also, the recently updated DFAS Corporate Contingency Plan does not require managers to address contingencies for a potential Year 2000 failure. 
DFAS needs this protection to ensure that, in the event of an operational failure, major functional activities are not disrupted at the year 2000. DFAS currently has identified 71 systems that it plans to make compliant through renovation, and an additional 66 systems that are reported as being compliant, but have not yet been fully tested in a Year 2000 operating environment. Although the DFAS Year 2000 program calls for these systems to eventually be validated prior to implementation, even with a structured process for assessing systems’ compliance, DFAS systems are still at risk that unanticipated operational failure could occur. In addition, DFAS systems interact with many DOD component and military service systems, and as a result, an operational failure in one system or process would not only impact functions these systems currently perform but could also impact other related activities. Because many of the continuity of operation alternatives that traditionally apply to threats, such as back-up processing sites, cannot be relied upon to address Year 2000 issues, it is important that DFAS’ functional and technical managers have policies and procedures in place to ensure that critical activities can be performed in the event of system failure. The absence of good contingency planning increases DFAS’ risk that its operations could be disrupted. The success of DFAS finance and accounting operations hinges on the proper and timely exchange of data with others. DFAS systems interface internally with hundreds of other DFAS systems and externally with military services, Defense components, and various federal government systems. DFAS receives an estimated 80 percent of the data it uses in its finance and accounting processes from non-DFAS systems. 
It is critically important during the Year 2000 effort that agencies protect against the potential for introducing and propagating errors from one organization to another and ensure that interfacing systems have the ability to exchange data through the transition period. This potential problem may be mitigated through formal agreements between interface partners that describe the method of interface and assign responsibility for accommodating the exchange of data. DOD's Year 2000 Management Plan places responsibility on component heads or their designated Year 2000 points of contact to document and obtain system interface agreements in the form of memorandums of agreement (MOA) or the equivalent. DFAS' Year 2000 strategy calls for its system managers to identify interfaces for all systems that are to be renovated for Year 2000 compliance and to obtain written MOAs between interface partners. System managers also are required to identify the number of internal and external systems interfaces and the number of interfaces that are covered by MOAs and include this as part of the DFAS Year 2000 quarterly reporting process. The number of interfaces not impacted by the Year 2000 problem is also reported separately for each system. As of April 1997, DFAS reported that system/program managers had identified 904 internal and external interfaces that are affected by the year 2000 problem, although managers still had not identified all interfaces. Of the 904 system interfaces that had been identified, 451 were reported as internal interfaces and 453 were identified as external interfaces. According to DFAS, written MOAs, covering how and when the interfaces are to be accomplished, had been completed for only 230 of the system interfaces. DFAS' quarterly report shows that significantly less progress has been made in securing written MOAs for external system interfaces than for those internal to DFAS. Of the 230 completed MOAs, only 82 were with external interface partners. 
DFAS officials have set September 30, 1997, as the deadline for securing all MOAs, both internal and external, with interface partners. While the number of interfaces is a major Year 2000 concern to DFAS, the importance and complexity of the interface issue is compounded by DFAS' use of different strategies for making systems Year 2000 compliant. DFAS reports that about one-third of the systems it plans to renovate are using procedural code or sliding windows as the predominant strategies for becoming Year 2000 compliant. As such, the use of different strategies in systems that exchange data through interfaces may require bridging. With these strategies, each interface partner will have to clearly understand the logical date interpretations that each is using to ensure that the appropriate century is applied when exchanging two-digit year data. Additional monitoring and oversight may be necessary to ensure that compliant date strategies that depend on date logic are implemented correctly. Timely and complete information on all system interfaces that may be affected by Year 2000 changes is critical to the success of DFAS' Year 2000 compliance program. The amount of work required to coordinate the data being exchanged between systems must be known as early as possible, and documented in written MOAs, so that DFAS may complete maintenance schedules, allocate resources, plan testing, and schedule implementation. We expect that agencies may need over a year to adequately validate and test converted or replaced systems for Year 2000 compliance, and that the testing and validation process may consume over half of the Year 2000 program resources and budget. While DFAS technical managers have performed certain Year 2000 tests of individual systems as part of their normal software maintenance processes, DFAS has not yet performed sufficient planning to ensure that all necessary testing will be conducted prior to Year 2000 impact. 
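For illustration only (a generic sketch, not code from any DFAS system), a sliding-window strategy infers the century of a two-digit year from a pivot value. Two interface partners using different pivots can assign different centuries to the same exchanged date, which is why each partner's date logic must be understood and documented.

```python
def expand_year(yy, pivot=50):
    """Sliding-window century inference: two-digit years below the pivot
    are placed in the 2000s, years at or above it in the 1900s.
    The pivot value of 50 is an illustrative assumption, not a DFAS standard."""
    return (2000 if yy < pivot else 1900) + yy

# Two partners with different pivots disagree on the same two-digit year:
print(expand_year(40, pivot=50))  # one partner reads "40" as 2040
print(expand_year(40, pivot=30))  # the other reads the same "40" as 1940
```

The disagreement above is exactly the kind of mismatch that a bridging routine, documented in an MOA, would have to reconcile before data are exchanged.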
Also, some systems that DFAS has designated as already compliant had not been fully tested to support that assertion. Our assessment guide calls on agencies to develop validation strategies and test plans, and to ensure that resources, such as facilities and tools, are available to perform adequate testing. During the assessment phase, DFAS had not yet developed test plans for all systems it plans to have operational at the year 2000, including those systems to be renovated and those already classified as compliant. DFAS also had not yet defined what Year 2000 test facilities it expected to use or ensured their availability. Much of DFAS' testing is dependent upon others to provide the needed assurances that systems are Year 2000 compliant. For example, about 40 percent of DFAS' systems are technically maintained by central design activities (CDAs) that are managed by another Defense component or a military service. These activities are likely to have differing processes for conducting system testing. Also, before DFAS managers can be assured that systems under their responsibility are compliant, the systems will need to be tested in an operating environment using a Year 2000 compliant operating system. DISA is responsible for providing a Year 2000 compliant operating environment and resources for testing systems, including many DFAS systems, at DISA megacenters. DFAS will also need assurances from vendors that its COTS applications are Year 2000 compliant. While DFAS' recent establishment of a certification program should provide additional assurance that systems have been tested, the program will need to be properly implemented at all locations. Without planning for the proper and timely testing of all systems, DFAS runs the risk of potential contamination of systems data or interference with the operation of production systems. 
On the basis of our analysis, we found that DFAS had not performed adequate testing to assert that its compliant systems are capable of transitioning into the year 2000. According to the DFAS April 1997 quarterly status report, 66 of its 216 systems tracked for Year 2000 purposes are classified as compliant. DFAS' compliant systems can be grouped into four categories: already converted, not date sensitive, under development, and compliant COTS products. To determine if DFAS had a sufficient basis for asserting Year 2000 compliance, we selected six systems that DFAS had designated as compliant and reviewed supporting documentation provided by technical managers for three of the six that were identified as already converted. The remaining three systems were either in the process of being developed Year 2000 compliant or deemed not to be date sensitive. Managers of the already converted, compliant systems we spoke with indicated that they had performed some tests on the transfer and storage of dates, but had not completed all Year 2000 compliance tests. For example: A technical manager for the Defense Transportation Pay System (DTRS) stated that system integration tests to input four-digit year data from a keyboard entry and from an electronic entry had been performed. However, system acceptance tests to determine Year 2000 compliance had not been performed at the time of our review. A technical manager for another compliant system—the Uniform Microcomputer Disbursing System (UMIDS)—indicated that the system had already been converted to accommodate the year 2000, and that some testing had been performed. All system tests, however, including those to determine if UMIDS could operate in a Year 2000 environment with its interfaces, were not scheduled to be completed until fall 1997. 
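The kinds of date boundaries such acceptance tests must exercise can be sketched generically (these are standard Year 2000 test cases, not DFAS's actual test plan, which this report does not reproduce):

```python
import datetime

# Standard Year 2000 boundary cases: the century rollover, the year-2000
# leap day (2000 is divisible by 400, so February 29, 2000 exists), and
# the following year-end transition.
boundaries = [
    (datetime.date(1999, 12, 31), datetime.date(2000, 1, 1)),
    (datetime.date(2000, 2, 28), datetime.date(2000, 2, 29)),
    (datetime.date(2000, 12, 31), datetime.date(2001, 1, 1)),
]

# A system that stores only two-digit years can compute negative or
# nonsensical intervals across the century boundary; a compliant one
# yields exactly one day for each consecutive pair above.
for earlier, later in boundaries:
    assert (later - earlier).days == 1
```

A test suite that exercises only ordinary in-century dates, as some of the partial tests described above did, would not surface failures at these boundaries.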
A technical manager of another system reported as compliant—the Standard Army Financial Inventory Accounting and Reporting System-MOD (STARFIARS-MOD)—stated that this system could not be completely tested until a Year 2000 compliant compiler for Ada was available. On April 11, 1997, the DFAS Deputy Director for Information Management issued guidance for establishing a Year 2000 automated information system certification program. The intent of the certification program is to define conditions, through completion of a certification checklist, that must be met for systems to be considered Year 2000 compliant. Under the certification program, systems identified in DFAS' Year 2000 quarterly status report as to be renovated, being developed compliant, and compliant, are to be certified. System/program managers are to complete the certification no later than 1 month after a system acceptance test is performed and the system is deemed compliant. The certification checklist requires signatures of the technical manager responsible for performing system changes, the system/program manager responsible for ensuring that the system is compliant, and the center director or headquarters deputy director responsible for Year 2000 compliance of all systems at their respective locations. Because the certification program had only recently been established, we were unable to assess to what extent it had been implemented and, therefore, how well the process was working. If implemented effectively and consistently, the process should provide DFAS a more reliable basis for asserting compliance of its systems. DFAS systems have not been tested for their ability to operate in a compliant Year 2000 environment because DISA has not installed Year 2000 compliant operating systems. DISA plans to upgrade its large-scale IBM and Unisys operating systems to be Year 2000 compliant over the next 2 years. 
For example, DISA and DFAS plan to incrementally implement the new IBM OS/390 operating system and make necessary conversions to existing applications from April 1997 to October 1998. Once implementation is completed for a particular domain, system testing can begin. About 45 DFAS systems, many of which are processed on mainframes, are classified as already converted, compliant, or not date sensitive, but still need to be tested to ensure that they do not encounter problems with the new operating system. DFAS has not defined a validation process for ensuring that its COTS applications are Year 2000 compliant. Since most suppliers of COTS software do not disclose their source code or the internal logic of their products, testing should be complemented by a careful review of warranties and/or guarantees. At the time of our review, DFAS had not required the testing of COTS applications that are being reported as compliant. Although systems managers had been given responsibility for obtaining written assurances from their vendors that COTS products are compliant, no documentation had been obtained to provide these assurances. Without an effective validation process for assuring COTS Year 2000 compliance, DFAS runs the risk that these applications will not operate correctly in the future. While initial progress has been made, there are several critical issues facing DFAS that, if left unaddressed, may well result in the failure of its systems to operate at the year 2000. As the accounting arm of DOD, DFAS has a responsibility to its customers to ensure that its systems support their needs and produce accurate, reliable, and timely financial information on the results of DOD's operations. At the same time, its operations hinge on the ability of systems belonging to the military services and other components to be Year 2000 compliant. 
Additionally, DFAS is dependent on numerous central design activities that are not under its control to perform Year 2000 renovations to many of its systems. DFAS managers have recognized the importance of solving Year 2000 problems in their systems; nevertheless, to reduce the risk of failure in its own Year 2000 effort, it is critically important that DFAS take every measure possible to ensure that it is well-positioned to deal with unexpected problems and delays. This includes promptly implementing Year 2000 project and contingency planning as well as addressing critical systems interfacing and testing issues. We recommend that you direct the DFAS Deputy Director for Information Management to: Build upon the existing DFAS project plan to ensure that it identifies the actions and establishes the schedules for completing each phase of the Year 2000 program, including the validation (testing) and implementation phases. The plan should also identify the milestones for meeting critical tasks under each phase, such as identifying system interfaces and securing interface agreements, preparing contingency plans, defining requirements for and establishing operational Year 2000 compliant test facilities, completing tests of personal computers and servers, and identifying performance measures for evaluating DFAS and center-level progress. Ensure that DFAS' Corporate Contingency Plan addresses the Year 2000 crisis and provides guidance for ensuring continuity of operations. The guidance should require DFAS managers to perform risk assessments and prepare contingency plans for all critical systems impacted by the year 2000 and for all noncritical systems impacted by the year 2000 that provide data to critical systems. 
Specifically, risk assessments and contingency plans should be required for all critical systems, including the identification of alternatives in the event that (1) replacement systems are not available, (2) systems to be renovated are not completed, and (3) systems fail to operate as intended prior to Year 2000 impact. Require the timely identification of all internal and external systems interfaces and the completion of signed, written interface agreements that describe the method of data exchange between interfacing systems, the entity responsible for performing the system interface modification, and milestones identifying when the modification is to be completed. Require the full implementation of the recently established Year 2000 certification process and ensure that Year 2000 compliance is predicated on testing all systems, including COTS applications and personal computers and servers. Devise a testing schedule that identifies the test facilities and resources needed for performing proper testing of DFAS systems to ensure that all systems can operate in a Year 2000 environment. In written comments on a draft of this report, the Office of the Under Secretary of Defense (Comptroller) concurred with all of our recommendations to improve the DFAS Year 2000 program. In response to our recommendations, DFAS agreed to update its existing Year 2000 Executive Plan to ensure that it identifies the actions and establishes the schedules for completing each phase of the Year 2000 program, including the validation (testing) and implementation phases, and the milestones for meeting critical tasks under each phase. 
DFAS also agreed to update its Corporate Contingency Plan to require a risk assessment and business impact analysis of all mission critical systems and critical direct support systems for the Year 2000 crisis, including the addition of requirements to test critical systems for Year 2000 compliance and to identify contingency strategies for dealing with noncompliant situations. In addition, DFAS agreed to have all written interface agreements with interface partners in place by September 30, 1997, and to implement its Year 2000 certification process for ensuring all systems are compliant. Further, DFAS agreed to develop a testing schedule that identifies the test facilities and resources needed for performing proper testing of DFAS systems to ensure those systems can operate in a Year 2000 environment. DFAS pointed out that it is working closely with DISA to coordinate the implementation of the Year 2000 environment, since DFAS is dependent on DISA to actually install and operate that environment. The full text of DOD's comments is provided in appendix III. This report contains recommendations to you. Within 60 days of the date of this report, we would appreciate receiving a written statement on actions taken to address these recommendations. We appreciate the courtesy and cooperation extended to our audit team by DFAS officials and staff. We are providing copies of this letter to the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs; the Chairmen and Ranking Minority Members of the Subcommittee on Oversight of Government Management, Restructuring and the District of Columbia, Senate Committee on Governmental Affairs, and the Subcommittee on Government Management, Information and Technology, House Committee on Government Reform and Oversight; the Honorable Thomas M. 
Davis, III, House of Representatives; the Deputy Secretary of Defense; the Acting Under Secretary of Defense (Comptroller); the Assistant Secretary of Defense (Command, Control, Communications and Intelligence); the Director of the Office of Management and Budget; and other interested parties. Copies will be made available to others upon request. If you have any questions on matters discussed in this letter, please call me at (202) 512-6240 or Ronald B. Bageant, Assistant Director, at (202) 512-9498. Major contributors to this report are listed in appendix IV.

To be renovated (reengineering)
PBAS-FD - Program and Budget Accounting System - Fund Distribution
STARFIARS-MOD - Standard Army Financial Inventory Accounting and Reporting System - MOD
SRD-1 - Standard Finance System - Redesign (Subsystem 1)
COA Host - Controller of the Army Host
EDMS - Electronic Document Management System - Loss and Damage
ADARS - Automated Drill Attendance Reporting System
JUSTIS - Jumps Terminal Input System
TAXMRI - Tax Machine Readable Input
UCS - Unemployment Compensation System
DTRS - Defense Transportation Pay System
Compliant
TD&RS - Transportation Disbursing and Reporting System
To be renovated (reengineering)
STARCIPS - Standard Army Civilian Payroll System
SNIPS - Standard Negotiable Instrument Processing System
CRISPS - Consolidated Return Items Stop Payment System
STANFINS - Standard Financial System
DIFS - Defense Integrated Financial System
To be renovated
CMCS - Case Management Control System - Accounting Segment
GAFS - General Accounting and Finance System - Base Level
SMAS - Standard Material Accounting System
DCMS - Departmental Cash Management System
MAFR - Merged Accountability and Fund Reporting System
NIFMS - NAVAIR Industrial Financial Management System
STARS - Standard Accounting and Reporting System
DFRRS - Departmental Financial Reporting and Reconciliation System
SYMIS - Shipyard Management Information System
RIMS - Real Time Integrated Management System
NRDPS - Naval Reserve Drill Pay System

Cleveland (CL) Indianapolis (IN) Kansas City (KC)
John A. Spence, Information Systems Analyst
GAO reviewed the Defense Finance and Accounting Service (DFAS) program for solving the year 2000 computer systems problem, focusing on the: (1) status of DFAS' efforts to identify and correct its year 2000 systems problems; and (2) appropriateness of DFAS' strategy and actions for ensuring that problems will be successfully addressed. GAO noted that: (1) DFAS managers have recognized the importance of solving the year 2000 problem; (2) if not successfully addressed it could potentially impact DFAS' mission; (3) to help ensure that services are not disrupted, DFAS has developed a year 2000 strategy which is based on the generally accepted five-phased government methodology for addressing the year 2000 problem; (4) this approach is also consistent with GAO's guidelines for planning, managing, and evaluating year 2000 programs; (5) in carrying out its year 2000 strategy, DFAS has assigned accountability for ensuring that year 2000 efforts are completed, established a year 2000 systems inventory, implemented a quarterly tracking process to report the status of individual systems, estimated the cost of renovating systems, begun assessing its systems to determine the extent of the problems, and started to renovate and test some applications; (6) DFAS also established a year 2000 certification program that defines the conditions that must be met for automated systems to be considered year 2000 compliant; (7) while initial progress has been made, there are several critical issues facing DFAS that, if left unaddressed, may well result in the failure of its systems to successfully operate in 2000: (a) DFAS has not identified in its year 2000 plan all critical tasks for achieving its objectives or established milestones for completing all tasks; (b) DFAS has not performed formal risk assessments of all systems to be renovated or ensured that contingency plans are in place; (c) DFAS has not identified all system interfaces and has completed written interface agreements with 
only 230 of 904 interface partners; and (d) DFAS has not adequately ensured that testing resources will be available when needed to determine if all operational systems are compliant before the year 2000; (8) DFAS' risk of failure in these areas is increased due to its reliance on other DOD components; (9) DFAS is also dependent on military services and DOD components to ensure that their systems are Year 2000 compliant; and (10) it is essential that DFAS take every possible measure to ensure that it is well-positioned as it approaches 2000 to mitigate these risks and ensure that defense finance and accounting operations are not disrupted.
IoT has no generally accepted, all-inclusive definition. Instead, IoT is generally described as a concept referring to how connected devices interact and process information. The devices themselves are generally not computers, but have embedded components that connect to a network. We define IoT for the purposes of our report as the concept of connecting and interacting with a wide array of objects through a network. An IoT-enabled object is often referred to as a “smart” or a “connected” device, which allows that object to communicate and potentially process information and thereby provide capabilities and functionality beyond what that object would normally provide. For example, connected vehicles—vehicles that “talk” to infrastructure and other vehicles—provide the capability to identify threats and hazards on the roadway and allow drivers to receive notifications and alerts of dangerous situations, potentially reducing the number of accidents. IoT devices are used in a variety of settings, such as the home (e.g., smart appliances), manufacturing (e.g., predictive maintenance), or a health care setting (e.g., remote patient monitoring). In this report, we focus on the application of IoT within a community setting—often referred to as a “smart city” or “smart community”—with the aim to generally improve the livability, management, or service delivery of that community. For example, a community may deploy streetlights with embedded sensors that detect sound or motion and that are programmed to switch on and off or raise dimmed lighting levels when vehicles or pedestrians pass. By managing the level of streetlight use, communities seek to improve energy efficiency and costs, as well as reduce maintenance costs by reducing service trips to replace burned-out lights. Figure 1 illustrates other examples of how IoT technologies may be used in a community. 
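The streetlight example above can be sketched as a simple control rule (hypothetical logic; the brightness levels and sensor inputs are illustrative assumptions, not drawn from any deployed system):

```python
DIM, FULL = 20, 100  # percent brightness levels (illustrative values)

def streetlight_level(is_dark: bool, motion_detected: bool) -> int:
    """Return the brightness a motion-aware streetlight would set:
    off in daylight, dimmed at night, and full when motion is sensed."""
    if not is_dark:
        return 0
    return FULL if motion_detected else DIM

# Dim at night with no traffic; brighten when a vehicle or pedestrian passes.
print(streetlight_level(is_dark=True, motion_detected=False))  # → 20
print(streetlight_level(is_dark=True, motion_detected=True))   # → 100
```

The energy savings a community seeks come from the time spent at the dimmed level rather than full brightness; the actual trigger logic and levels would be set by the vendor and the deploying community.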
More indirectly, these technologies can drive economic growth by generating demand for new products, new companies, and new skilled jobs, according to literature and industry experts. While the idea of connecting objects is not new, recent advancements in technologies that support IoT—such as the decreasing cost and size of electronics and the expansion of connectivity (e.g., broadband networks and Wi-Fi)—are driving a proliferation in the number and types of uses of connected devices. One projection puts the future number of IoT devices (excluding computers, mobile phones, and tablets) at over 10 billion in 2020. This would represent an increase of more than 200 percent between 2015 and 2020, at which point the number of connected things would outpace the number of computers, tablets, and smartphones currently connected. There are also, however, hurdles that may impede widespread use of IoT devices. Security and privacy risks can originate from unintentional threats, such as equipment failures, or intentional threats, such as from hackers. As noted in a Commerce green paper on IoT, published in January 2017, while these risks are generally not unique to IoT, ubiquitous connectivity and growth in IoT devices raise new challenges. As new and more “things” become connected, they increase not only the opportunities for security and privacy breaches, but also the scale and scope of any resulting consequences. For example, in October 2016, a cyberattack that involved the hacking of thousands of unsecured IoT devices interrupted Internet access to a number of major websites across the United States for hours. These hurdles demonstrate the need for strategies to respond to concerns about safety and increased risks to privacy and security. We have reviewed and continue to review some of these issues; for example, we recently issued a report that discusses the specific implications of IoT technologies, including safety, security, and privacy issues. 
IoT projects are also complex and crosscutting, and they require expertise to design and deploy. IoT projects involve the deployment of rapidly evolving technologies and also often collect and store vast amounts of data that must be analyzed before they can be used to develop community solutions. For example, as we have previously reported, when local transportation departments deploy IoT-enabled sensors on traffic lights to monitor and collect data on traffic flow to help manage congestion, they may need assistance in developing the system requirements; collecting, analyzing, and protecting the vast amounts of data collected; and identifying interdependent goals for the community, such as improving air quality. As such, the local transportation department may hire consultants to help with procurement and deployment of a system; purchase the system from a vendor; partner with academia for support in data analysis and innovative solutions; collaborate with other local-government departments to identify interdependent goals; and collaborate with other entities, such as transit operators or regional planning departments, to leverage outcomes, among other activities. As noted earlier, researchers and industry stakeholders have noted that successful integration of technologies and projects—including across sectors (e.g., transportation, energy, or public safety)—is key to realizing the full potential of IoT. That is, when systems are interoperable—or work in concert with one another—they may support interdependent goals. For example, sensors deployed in community infrastructure to report on traffic conditions or environmental conditions such as air quality can also, if equipped with audio or visual capability, provide real-time traffic information to public-safety or emergency-response persons, to help determine the fastest route to an emergency. 
Government departments that work with each other to build integrated systems can help optimize resource expenditures and maximize services to community residents. One research institute estimated in 2015 that IoT applications in cities could have a global economic impact of $930 billion to $1.6 trillion per year in 2025, with systems that are interoperable enabling more than 40 percent of that value. No single federal agency addresses all aspects of IoT. We identified at least 11 federal agencies that have a key role in supporting IoT in communities, either because they support research or communities, oversee privacy or security protections and threats, or have direct authority over IoT issues. The 11 key federal agencies that we identified include the Departments of Commerce (Commerce), Energy (DOE), Health and Human Services (HHS), Homeland Security (DHS), Justice (DOJ), and Transportation (DOT), as well as the Environmental Protection Agency (EPA), Federal Communications Commission (FCC), Federal Trade Commission (FTC), National Science Foundation (NSF), and Office of Science and Technology Policy (OSTP). See appendix I for more information on selected agencies’ missions and examples of relevant support for communities’ IoT applications. The EU also has made investments in supporting IoT projects in communities. For example, in 2013 the EU adopted a research and innovation framework program for 2014 to 2020 (Horizon 2020) that includes cross-cutting focus areas in both: IoT, which aims to enable the emergence of an IoT environment that is supported by open (i.e., publicly available) technologies and platforms, and “Smart cities and communities,” which aims to bring together cities, industry, and citizens to demonstrate solutions and business models that can be scaled up and replicated. These focus areas are supported by bi-annual funding strategies. The European Commission, similarly to the U.S. 
government, is divided into departments and executive agencies that have varied responsibilities. For example, the European Commission has departments for Energy; Mobility and Transport; and Communications Networks, Content, and Technology that each have responsibility for carrying out the European Commission’s policies related to the respective industry sector that the departmental entity oversees. With support from the EU, as well as local initiatives, European communities are increasing investments in IoT projects, and according to literature we reviewed and some U.S. and European industry experts we interviewed, some of those communities are generally recognized as having more advanced and mature IoT projects than communities in the United States. Many of the federal agencies we reviewed are conducting or funding broad research in IoT-related technologies. As communities increasingly deploy IoT devices, they are more dependent on the underlying communications systems—both wired and wireless network systems—that enable those devices to communicate with each other and with other systems. And with wireless systems likely playing an increased role in supporting IoT, demand for access to spectrum—a limited resource already in high demand—will also rapidly increase. Recognizing the increasing demand for connectivity and spectrum access, 8 of the 11 federal agencies are conducting or funding research on communication systems and the related impacts of those systems—such as privacy, security, and demand for spectrum access—that could subsequently support communities’ IoT projects. For example: NSF awarded 19 universities over $8 million over 3 years beginning in 2016, through its US Ignite program, for fundamental research in networking technologies to further both the capabilities and understanding of high-speed networking infrastructure to meet the demands of future applications, including community applications. 
NSF also awarded in 2016 more than $6 million over 2 years for exploratory research in connecting networked computing systems with physical devices through NSF's EArly-concept Grants for Exploratory Research (EAGER) funding mechanism. Commerce's National Telecommunications & Information Administration (NTIA) has a research lab that is developing an IoT testbed for testing the potential for interference posed by new IoT-related spectrum use to existing spectrum users in a dense environment, such as a city. The growth in wireless communications has increased the potential for harmful interference—an action that interrupts or obstructs communication service—when two systems use the same or adjacent spectrum frequencies in the same geographic area. NTIA officials told us that this lab is also supporting DOT's efforts to investigate the potential interference of unlicensed wireless devices operating in the licensed spectrum for dedicated short-range communications (DSRC)—the wireless technology that, according to DOT, is expected to be used in a connected vehicle environment. FTC's Office of Technology Research and Investigation researches and evaluates the impact of IoT technologies on consumers, including issues related to privacy and security. In addition, in January 2017, the FTC announced a prize competition that challenges the public to develop a tool that consumers can deploy to guard against security vulnerabilities in IoT devices. And while the challenge is not directed to communities, FTC staff noted that some of the proposed submissions could possibly help address security issues related to IoT devices in communities. In late 2016, DOJ formed a threat analysis team to study the potential national-security threats posed by IoT devices as part of a broader effort to assess the next generation of cyber threats.
According to DOJ officials, the team has focused on how IoT devices may be exploited by terrorists or others to cause loss of life or disrupt the nation's increasing reliance on IoT technologies; this work has included surveying other federal agencies' efforts and nongovernment experts on the issue. DOJ hopes in the future to support an interagency approach to this issue. In addition to networked communications systems, federal agencies are conducting or funding broad technical research on IoT devices that support communities, such as sensors and intelligent transportation systems technologies, as described in the examples below. DHS awarded three $100,000 small business innovation research contracts in January 2016 for research and development of modular (i.e., composed of standardized units), low-cost, integrated, IoT-enabled flood inundation sensors. These sensors would (1) monitor flood-prone areas in real time across large geographic areas and (2) allow emergency responders to predict, detect, and react to flood conditions, among other things. All three awardees received follow-on awards to test and evaluate the sensors in the field, beginning in April 2017. The final phase of the awards involves commercialization of the sensors; while DHS is not funding commercialization directly, it plans, among other things, to help build relationships between the awardees (sensor developers) and potential buyers (such as first responders). EPA's Office of Research and Development formally coordinates with two universities on research related to management and analysis of data collected through sensor networks. This research has included collaborating to develop a freely available data-hosting and visualization tool, as well as analyzing high-resolution air pollution emissions data. DOE established the Grid Modernization Initiative to support modernization of the nation's electricity grid—commonly referred to as a smart grid—including ensuring its resiliency and security.
Under this initiative, DOE not only makes funding available but also supports research projects related to IoT technologies, such as sensors, which, according to the initiative's multi-year program plan, are necessary to assess the health of the grid in real time, predict its behavior, and respond to events effectively. According to DOE officials, the initiative's projects are in their first year, so no reported results are available, but a peer review panel was held in April 2017 to provide lessons learned and share best practices. DOE officials also told us that they anticipate that the initiative will continue through at least 2018. NSF anticipates awarding about $18.5 million in grants under its 2016 Smart and Connected Communities program. This program solicits projects that support interdisciplinary research activities to improve understanding of smart and connected communities and enable sustainable change to enhance community functioning. Also, in 2015, a $3 million NSF grant awarded through its Major Research Instrumentation program supported the development of a new tool for a project known as the Array of Things. The project's goal is to install a sensor platform in the City of Chicago that collects data on a variety of community factors, including air quality and traffic, and makes these data publicly available to encourage innovative community solutions from third parties. Finally, DOT's Intelligent Transportation Systems Joint Program Office conducts a variety of research and demonstration projects that, according to its current strategic plan, include the testing of ideas that might be developed into intelligent transportation systems technologies and subsequently deployed to advance transportation. The federal government is also engaged in overseeing IoT-related issues.
In doing so, all but two of the federal agencies we reviewed are developing and distributing IoT-related guidelines, seeking input on and making policy recommendations, and convening or participating on working groups that support the development of voluntary consensus standards. As communities continue to deploy IoT devices and analyze the increasing amount of resulting data, federal policies and guidance can help them better understand the benefits of using IoT-related technologies, and help them address the challenges. Notably, Commerce issued a paper in January 2017 that, among other things, sought input on the role of the federal government in fostering IoT and related policy recommendations. The paper also discusses both benefits, such as improvements in safety and efficiency for consumers and governments, and challenges of IoT, including risks to security and privacy. Other examples include: Safety: DOT issued an Automated Vehicles Policy in September 2016 in order to speed the delivery of an initial regulatory framework and best practices to guide the safe design and deployment of automated vehicles. Security: In October 2016, through public meetings, NTIA convened a multi-stakeholder process on IoT security upgradability and patching with a goal of fostering a marketplace that offers devices and systems that support security upgrades through increased consumer awareness and understanding, among other things. And, in November 2016, DHS issued a set of industry-neutral, non-binding principles to provide stakeholders with suggested practices that help to account for security and other challenges as stakeholders develop, implement, or use IoT devices. 
Privacy: FTC staff issued a report in January 2015 that both summarized a 2013 staff-hosted workshop discussion on benefits and risks of IoT and provided an update on post-workshop developments, including a report on data privacy issued by the President's Council of Advisors on Science and Technology. The report also included the FTC staff's recommendations on privacy and security, which continued to recommend that Congress enact broad-based privacy legislation. Also, HHS published a report in July 2016 that highlighted what it referred to as gaps in regulation, as well as confusion among consumers, regarding the privacy of health data collected by entities not regulated by the Health Insurance Portability and Accountability Act (HIPAA). Interoperability: Commerce's National Institute of Standards and Technology (NIST) participates in international standards-setting organizations. NIST has convened an international public working group to help develop a consensus framework to enable interoperable community IoT solutions and plans to publish a draft consensus framework in late 2017. This effort (and others at NIST) provides the technical basis for NIST contributions to work in international standards-setting organizations. DHS contracted with an international consortium of more than 500 companies, government agencies, and universities to demonstrate how proprietary systems, specifically sensors used by first responders, could be made interoperable using open standards—that is, standards that are publicly available and maintained by a collaborative and consensus-driven process. In addition to broad research and oversight of IoT issues, we identified three federal agencies (Commerce, DOT, and EPA) that are more directly supporting communities through expanded funding for community IoT projects.
In December 2015, DOT launched a two-phase prize competition, the Smart City Challenge, which, to date, has included the largest single award amount ($50 million) made available by the federal government to support IoT in communities. According to DOT, the challenge attracted unprecedented community interest, with 78 mid-sized cities applying for the first phase. According to DOT, it was one of the first times, if not the first, that federal funds were made available to explicitly encourage communities to integrate systems across sectors to achieve interdependent goals. As of April 2017, the challenge winner—Columbus, Ohio—is still finalizing its project schedule and details. DOT officials noted that it is challenging to identify measurements that define success and to ensure that the projects provide adequate data to inform any evaluation, but that they are working with Columbus representatives and an independent evaluator to develop a strategy to do so. DOT officials also noted that they provided seven finalist communities $100,000 each and that some communities have used that money to revise their original proposals and bid on other available federal funds. Two other federal agencies also recently announced funding for deployment of community IoT projects. In August 2016, EPA launched a prize competition—called the Smart City Air Challenge—for the purpose of learning how communities would deploy hundreds of air quality sensors, manage high volumes of data, and make the data public. EPA awarded $40,000 each to Baltimore, Maryland, and Lafayette, Louisiana, to develop innovative strategies for deploying sensor platforms and managing the data collected from 300 sensors, as well as for sharing lessons learned with other communities.
According to EPA officials, as of April 2017, the communities are testing and deploying the sensors; they will meet quarterly throughout 2017 with EPA officials to share knowledge and will participate in webinars with other communities to share best practices. EPA will evaluate the projects at the end of 2017 to determine whether it will award a second round of funding of $10,000 each. Commerce's NIST awarded, in September 2016, a total of $350,000 to four communities to collaborate and deploy replicable smart solutions to address community issues. These grants include such projects as using Wi-Fi-enabled sensors to alert first responders to emergencies in a senior community and developing computer models to predict urban flood events. In September 2017, the awardees are expected to submit final reports that evaluate the projects against specific criteria, including evidence of effective use of existing standards for interoperability across systems and clear and quantifiable performance goals with measurement capabilities incorporated into the system design. NIST officials noted that communities may find it challenging to meet these criteria during a 1-year, small-scale project but said that regular biweekly meetings and continuing interaction between NIST and the communities are intended to help. DOT also provides funds through other federal grant programs that do not specifically target IoT but can still be used to support IoT projects. For example, DOT published a guide for communities that identifies existing funding programs or initiatives that could be used to fund IoT projects, such as its Advanced Transportation and Congestion Management Technologies Deployment (ATCMTD) initiative and its Transportation Investments Generating Economic Recovery (TIGER) grant program.
The guide provides examples of IoT technologies that meet eligibility criteria under these programs, such as the eligibility of sensor-based infrastructure for maintenance and monitoring under the ATCMTD initiative. DOT officials reported that, for fiscal year 2016, DOT announced nine ATCMTD and TIGER grants supporting IoT projects, such as a project in the city and county of Denver, Colorado, that includes deploying technologies to support its connected freight program with a goal of reducing freight congestion. Some federal agencies are also working to maximize those federal funds by encouraging or requiring grant recipients to leverage private funds. For example, both DOT's Smart City Challenge and EPA's Smart City Air Challenge strongly encouraged communities to leverage funding from the private sector and others. DOT officials told us that Columbus, Ohio, the winner of the DOT Smart City Challenge, was able to leverage an additional $350 million or more from community partners beyond DOT's $40 million contribution. The EPA challenge solicitation specifically noted that the two community awards of $40,000 each were intended to be seed money for communities to leverage other resources. Furthermore, federal agencies are promoting project replicability, so that solutions in one community can be more easily deployed in other communities. Because of the complexity of IoT projects and communities' limited resources, two representatives from industry and academia highlighted that communities are hesitant to be the first adopters of these projects without models or leading practices to follow. Recognizing this hesitance, some federal agencies are promoting the design of projects that are replicable in other communities. Most notably, NIST launched its Global City Teams Challenge (GCTC) program—a collaborative platform for the development of "smart cities"—to encourage collaboration and the development of technology standards.
In doing so, NIST recognized that IoT projects tend to be isolated and customized—that is, not interoperable with other projects or replicable in other communities. According to NIST, many custom-designed systems are not cost effective, and the growth of the smart cities market is also hindered by customized deployments. With standards-based solutions—or replicable projects—communities can build on each other's work and make their solutions available to other communities that may lack resources. For example, the multi-stakeholder team that is deploying the sensor and computer research platform called the Array of Things in the City of Chicago first began as a partnership between the City of Chicago, the University of Chicago, and Argonne National Laboratory. This partnership participated in the GCTC program and also received an NSF research instrumentation grant, as discussed above. The team has had inquiries from nearly 90 cities around the world and is preparing to deploy the IoT technology in an initial set of pilot cities, including Seattle, Washington, and Amsterdam, the Netherlands.

Funding: $3.1 million award through NSF's Major Research Instrumentation program and more than $1 million from DOE's Argonne National Laboratory.

Project description: Chicago's Array of Things project plans to install hundreds of interactive, modular sensor boxes across the city to collect real-time data on the city's environment, infrastructure, and activity for research and public use, essentially measuring factors that affect livability, such as climate, air quality, and noise. The project also reserves space for additional sensors in support of future data collection in other areas and industry sectors.
In partnership with the Electric Power Board (EPB) of Chattanooga, the owner and operator of the region's smart grid, staff from DOE's Oak Ridge National Lab use their expertise to test new technologies and develop new analyses, among other things, to help EPB use its electricity data to improve its operations. DHS officials also told us that while in the past they have not provided in-kind support, they are currently drafting a contract under which they will provide in-kind support, including drones and some open-source software, to a quasi-governmental organization that works to advance community IoT projects. Issues related to IoT in communities cut across multiple sectors and government agencies—that is, no single government agency addresses all aspects of IoT or communities' IoT efforts. And as with other cross-cutting federal efforts, achieving meaningful results requires the collaborative efforts of multiple programs and agencies spread across the federal government and often more than one sector or level of government. Both Congress and the executive branch have recognized the need for improved collaboration across the federal government. We also have previously reported that agencies face challenges when attempting to work collaboratively and that agencies can enhance and sustain their collaborative efforts by engaging in such practices as establishing mutually reinforcing or joint strategies designed to help achieve a common outcome and identifying and addressing needs by leveraging resources to support common outcomes. To promote government-wide collaboration in supporting deployment of IoT in communities, the White House created an interagency Smart Cities and Communities task force in July 2016—co-chaired by representatives from DOT, NIST, and NSF—that is coordinated through the Networking and Information Technology Research and Development (NITRD) program.
Twenty-two federal departments and agencies had participated on the task force as of January 2017, with initial efforts focused on developing (1) a federal strategic plan and (2) a resource guide for communities. On January 12, 2017, the task force released for public comment a draft federal strategic plan that offers a high-level framework to guide and coordinate smart community-related federal initiatives, with an emphasis on local government and stakeholder engagement. The draft plan highlighted five goals motivating the strategy, including accelerating innovation and infrastructure improvement and facilitating cross-sector collaboration to bridge existing silos. It also identified four strategic priorities and next steps that include promoting interagency collaboration and developing a road map for specific federal actions to execute the strategic priorities. According to the federal officials who are chairing the efforts, the public comments will help inform a revised federal strategic plan, which, as of April 2017, was anticipated for publication in the summer of 2017. According to these officials, following the completion of the strategic plan, the task force will be dissolved, and the NITRD program's standing Cyber-Physical Systems interagency working group will identify and coordinate any additional action needed to support these efforts, such as activities related to the execution of the federal strategic plan. The task force's second effort included the launch, in March 2017, of an interactive website resource guide that describes federally funded research and development programs in smart cities and communities. According to the task force, the guide aims to facilitate collaboration and coordination among task force member agencies, academia, industry, local cities and communities, and other government entities.
These officials noted that the guide will be reviewed and updated annually and that they are using aggregate data on use and search patterns to evaluate the effectiveness and usability of the guide. At the same time, individual federal agencies are formally and informally collaborating at a program level on specific agency projects or efforts related to community IoT projects. Federal agency officials told us that this collaboration helps bridge issues that cut across agencies, as well as leverage expertise. For example, in 2016, DOT and DOE signed a memorandum of understanding (MOU) that recognizes their departments' mutual interest in realizing the economic, environmental, and national security benefits achieved by the growing use of smart transportation technologies. The MOU formally states their intention to coordinate actions to leverage DOE's traditional focus and expertise in transportation energy technology systems and DOT's traditional focus and expertise in transportation safety technology systems to accelerate the analysis and application of "smart" transportation systems. Under this MOU, and as mentioned above, DOE supports a national lab expert in Columbus, Ohio, as part of a technologist-in-cities pilot program; the expert complements DOT's Smart City effort and focuses on energy-related components of the planned projects. In addition, according to NTIA officials, its research lab also coordinates with DOT's Intelligent Transportation Systems Joint Program Office, including by investigating the potential interference of unlicensed wireless devices operating in the licensed spectrum for DSRC—the wireless technology that, according to DOT, is expected to be used in a connected vehicle environment. Some federal agencies have undertaken efforts to support collaboration at the community level, across local governments, academia, and the private sector.
NIST’s GCTC program, as discussed earlier, enables local governments, nonprofit organizations, academic institutions, technologists, and private corporations from all over the world to form project teams, or “action clusters,” to work on community IoT projects and facilitate interoperability, according to NIST officials. And since the GCTC program launched in September 2014, GCTC has recruited and supported over 160 project teams, with participation from over 150 cities and 400 companies or organizations from urban and rural communities across the United States and their counterparts in other countries. The White House also has supported collaboration at the community level through promotion of the MetroLab Network—a networking consortium of city-university partnerships that seeks to bring interested cities and universities together to share expertise and lessons learned across municipalities. According to a 2016 MetroLab Network report, the membership consists of more than 35 partnerships and has developed a library of more than 120 research, development, and deployment projects that are currently under way across its membership. The library resource, as well as knowledge-sharing and networking events convened as part of the consortium, enables collaboration with, and the sharing of lessons from communities that have deployed innovative community solutions. For example, according to an official from the Portland Bureau of Planning and Sustainability, a meeting at a MetroLabs event resulted in a partnership between Portland and a stakeholder leading Chicago’s Array of Things project to share information and leverage resources in testing sensors and a sensor platform for an air quality project. In the European Union (EU), the European Commission directs research on IoT-related technologies and oversees IoT policy-related issues, similar to the U.S. federal government. 
Within the EU’s Horizon 2020’s 7-year research program, the European Commission developed an initiative to support innovation, which includes a specific focus area for IoT that is cross-cutting. The focus area aims to enable an IoT environment that is supported by technologies and technology platforms that are open (i.e., publicly available). Funds for the 2016-2017 programs are to be used to demonstrate scientific progress that enables advanced IoT applications. The EU also has taken legislative steps related to IoT oversight. For example, the EU adopted the General Data Protection Regulation in spring 2016, which, according to European Commission officials, seeks to simplify protection for individuals’ data, including data from IoT devices, by providing a single set of rules that apply to all EU member states. It is scheduled to be implemented over the next 2 years. At the country level, according to community representatives in Sweden with whom we spoke, policy makers are also investigating potential regulatory changes. For example, in one instance, the Swedish government created “policy labs” (also sometimes called a “regulatory holiday,”) specifying some geographic locations or a specific time frame that is free of regulation so that project partners can test what policies are needed in that environment—such as a connected vehicle environment—to make the solution successful. The EU also directly supports IoT applications in communities through direct funding. 
Sometimes it does so through formalized programs, such as joint-departmental funding focused on "smart cities and communities." Specifically, the Horizon 2020 research program includes a cross-cutting focus area on "smart cities and communities," which aims, in part, to bring together cities, industry, and citizens to demonstrate community solutions and business models that can be scaled up and replicated, and that lead to measurable benefits in energy and resource efficiency, new markets, and new jobs. The supporting funding program combines funds from multiple departments—the European Commission's departments on energy, transportation, and communications technology—to support IoT projects in communities that span these sectors. Under this program, the EU has funded three different large-scale pilot projects since 2015 that focus on replicability and support IoT projects. For each pilot project, three European cities were selected as "lighthouse cities," with up to five follow-on cities in which successful projects are to be replicated. The lighthouse cities design and implement their smart projects, and when these are successful, the follow-on cities begin deployment. We also found examples in which individual countries supported community deployment of IoT projects and encouraged those communities to leverage resources from private industry and others. For example, according to community representatives in Sweden, the Drive Sweden project—focused in part on local and national traffic management and deployment of autonomous vehicles and fleets—is jointly funded by three Swedish government agencies: the Swedish Energy Agency, the Swedish Research Council, and Sweden's Innovation Agency. The project must leverage private funds because it is part of a program supporting nationally funded projects that require private-industry cost sharing.
European Commission departments and countries also collaborate in administering programs for community IoT projects and support collaboration among other stakeholders. For example, in 2012, recognizing that IoT technologies span all sectors of the economy and society, the European Commission launched the European Innovation Partnership on Smart Cities and Communities (EIP-SCC). The partnership is a stakeholder group that aims to significantly accelerate the deployment of smart city solutions that integrate technologies from the energy, transportation, and communications technology sectors. The EIP-SCC is jointly administered by three European Commission departments with jurisdiction over the energy, transportation, and communications technology sectors. It has a mechanism called a stakeholder platform that serves as a collaborative, networking, and knowledge-sharing tool for communities, collecting and analyzing input from all stakeholders. According to its agenda, it seeks to provide bottom-up contributions, such as those from communities, to ensure that EU policy on smart communities reflects the needs and engagement of communities. Also, at the country level, Sweden announced in June 2016 a formal program that supports collaboration among government, private, and academic stakeholders to create innovative solutions for specific societal challenges, one of which is "smart cities." In planning and deploying IoT projects, all of the communities we reviewed are combining federal funds with other direct funding and in-kind support. As described previously, grant recipients are sometimes required by federal agencies to leverage private funds. For example, the Chattanooga Electric Power Board leveraged more than $115 million in nonfederal investment as part of its federal award of $111.6 million through DOE's smart grid investment grant program in 2010.
Chattanooga used these federal and nonfederal funds to expand its fiber optic network to support communication among its smart grid equipment, which includes smart meters for more than 170,000 energy utility customers. According to community representatives we interviewed, without a federal cost-sharing requirement, federal funds alone may not be sufficient to cover all project costs. For example, according to City of Columbus representatives, the city used its $50 million award from DOT's Smart City Challenge to leverage about an additional $90 million in support from community partners. Some of the support has come in direct funding, while other support has been in-kind, such as research, programmatic support, or equipment contributions.

Project description: Chattanooga installed a fiber-based gigabit Internet infrastructure to improve efficiency and resiliency in its utility energy distribution. The project included installation of more than 170,000 smart meters for utility customers. Since this project was completed, Chattanooga has been making this fiber infrastructure available to other community stakeholders for other IoT projects, including deploying a network of air quality sensors to detect asthma-aggravating particulate matter and pollen in metropolitan Chattanooga.

Funding: … million in direct funds and in-kind contributions, including a $10 million grant from Vulcan, Inc.

Projects: The Integrated Data Exchange is an open data environment that will (1) contain data from many different sources; (2) generate performance metrics for program monitoring and evaluation; (3) transparently serve the needs of public agencies, researchers, and entrepreneurs; and (4) provide practical guidance and lessons learned to other potential deployment sites.

Local government officials from two of the communities that we reviewed discussed beginning to leverage value from public assets, making public infrastructure available in exchange for financial support, such as in-kind donations of technology equipment.
For example, representatives from two communities highlighted their use of community-based "technology incubators." In these cases, a technology incubator is generally an entity that supports the collaboration of public, private, and oftentimes academic partners and provides public assets—such as data, resources, and infrastructure—to test technologies and develop innovative solutions to community needs. For example, Chattanooga provides access to its fiber network for use as a test bed by a variety of partners on a variety of projects. One project involves deploying and connecting a network of air quality sensors to detect asthma-aggravating particulate matter and pollen in the community, with a goal of providing real-time alerts to end users, such as asthma patients, health institutions, and others affected by elevated pollen levels. Representatives from three of the communities we reviewed, as well as four other industry and academic stakeholders we spoke with, said that developing such business models, in which local governments leverage value from public assets, can help finance a project and, in some cases, help sustain it after initial grant funding runs out.

Highlights of Sweden's technology incubators

Sweden has 33 science parks, which are described as stimulating meeting places for academia, research, the public sector, and industry. Stockholm Science Park: Kista Science City (KSC) is operated by a not-for-profit foundation that includes public, private, and academic organizations, for the purpose of facilitating collaboration among these stakeholders. An open testbed—the Urban ICT Arena—will serve as a co-creation arena for developing, testing, and showcasing community IoT solutions. For example, the City of Stockholm will provide access to a fiber optic network for industry and academics to test community technology solutions, including solutions for clean water and efficient transportation.
Representatives from all three of the European communities that we visited also reported having technology incubators where public, private, and academic entities partner to test innovative community solutions. While incubator-like entities can be found all over the world, Sweden has 33 formal technology incubators across the country, called Science Parks, which are jointly funded by industry, universities, and local governments. These science parks facilitate collaboration among these stakeholders to develop community solutions. Representatives from three science parks that we met with in Gothenburg and Stockholm highlighted the importance of collaboration among public, private, and academic entities—typically referred to as the “triple-helix” model—to the success of the projects. For example, at a science park open testbed in Stockholm, the local and regional governments and an industry partner, among others, have provided access to wired and wireless communications networks for other entities to use in developing innovative community solutions, such as improving water quality and transportation efficiency. Representatives from communities we reviewed and industries we spoke with discussed the importance of collaboration among public, private, and academic entities to make the best use of the unique expertise that each member group brings to the table. Representatives from two communities discussed that local governments can provide a policy framework and help ensure that solutions are based in the community’s needs, as opposed to being driven by the latest technology invention. A Columbus representative discussed the city’s “culture of collaboration” and how private partners have brought invaluable resources to its DOT Smart City Challenge projects, particularly expertise on technologies and connections to other industry players for innovative ideas and solutions.
Representatives from two of the communities we spoke with described active “convener” organizations that bring public, private, and academic stakeholders together on an ongoing basis to discuss collaborative opportunities to address community problems. While representatives from all of the communities discussed the benefits of collaboration, an industry representative also highlighted that maintaining well-functioning partnerships takes time and resources. Representatives from all three European communities also discussed the value of collaboration for providing a variety of expertise and, in some cases, independence. For example, in The Netherlands, a non-profit organization oversees part of Eindhoven’s “smart city” projects. Representatives from this organization highlighted this arrangement as advantageous for providing independence from both government and commercial interests—that is, for best balancing the government’s oversight needs with private industry interests. In Stockholm, a local government representative highlighted that the role of some academics is to independently evaluate the IoT projects’ success in meeting both environmental sustainability goals and economic goals. Although integrated projects can help maximize the potential of IoT applications, communities can face challenges in integrating projects. All of the domestic communities we reviewed are planning or have deployed discrete IoT projects—projects that, at least initially, generally focus on addressing a single issue, such as traffic congestion on a particular corridor. Domestic and foreign community representatives that we spoke with pointed to four main factors that can hinder the deployment of integrated projects and, in some cases, offered perspectives on solutions to these challenges. Siloed community sectors can make it challenging to integrate IoT projects.
A variety of representatives that we spoke with—representatives from three of the domestic communities and two of the foreign governments, as well as seven academic and industry representatives—highlighted that local government departments and federal grants tend to be focused on one sector (e.g., transportation, energy, public safety) of a community, inhibiting IoT project integration. An industry representative who works with communities on IoT projects said that community projects tend to be planned and deployed in isolation, in part because it is difficult to leverage resources and benefits across the silos created by government departments. A local government representative from a domestic community noted that there is no single organizing entity at the federal level to work with when pursuing grants and that while federal grants are helpful, tracking opportunities and developing proposals are a lot of work and consume both time and resources. Representatives from one domestic community noted that collaboration among the transportation, energy, and environmental monitoring sectors appears to be the most advanced, in part because these industries have recognizable synergies. Domestic community and industry representatives that we spoke with offered perspectives on how internal or external leadership and a federal strategy could help overcome silos and promote integration. Leadership: Representatives from a domestic community, as well as four industry representatives, noted that an individual or department within the local government could serve as a mechanism to bridge silos. For example, an industry representative who works with communities on IoT projects noted that some domestic communities are using a Chief Information Officer or Chief Data Officer as a leader who can help integrate projects across departments.
And according to local representatives from three domestic communities and two industry representatives, entities that are external to the local government, such as the community’s technology incubator or other consortium, could help convene various stakeholders. For example, in one domestic community, a consortium of public, private, and academic entities helped identify how to leverage a past investment in traffic sensors to expand the capacity to evaluate air quality. Federal Strategy: While collaboration is helpful for integrating projects, representatives from three communities, three industry and academic representatives, and officials from two federal agencies noted that some kind of federal strategy or guidelines could be helpful. For example, a representative from one community noted the desire for a federal vision or framework that organizes multiple industries and agencies and supports strategies and resources for working together to achieve common goals. As discussed above, through the White House’s NITRD program, a task force that consists of 22 federal agencies recently issued a draft federal strategic plan that includes a high-level framework to guide and coordinate smart community-related federal initiatives, with an emphasis on local government and stakeholder engagement. Federal officials working on the task force anticipate that the final plan, which will be informed by public comments, will be published in summer 2017. Representatives from three of the domestic communities and one of the foreign communities, as well as four industry stakeholders we interviewed, identified challenges related to proprietary vendor systems in deploying integrated projects. For example, while private industry can provide communities with needed financial support and expertise, representatives from two communities noted that private interests also encourage the development of proprietary systems that are solely owned by a vendor.
Representatives from two communities also noted that proprietary systems risk making the community dependent on one vendor that could go out of business or raise maintenance costs. The use of proprietary systems can raise confidence that the components within a system will work together, but challenges arise when communities seek to integrate systems from different vendors, perhaps across sectors. We also recently reported on similar challenges experienced by transit providers, including difficulties changing vendors after an intelligent transportation system has been deployed and getting vendors to work with one another to integrate systems amid concerns about making changes to those systems. Representatives from all of the domestic and foreign communities we reviewed, as well as four academic and industry stakeholders, said that standards-based and open data platforms could help support integrated projects and innovative solutions. Some federal agencies—such as NIST, EPA, and DHS—are taking steps to address interoperability issues, including promoting consensus-based standards that would encourage proprietary systems to at least be designed to be interoperable. For example, NIST recognized that IoT projects are generally based on custom systems that are not interoperable, portable across cities, or cost-effective. In response, NIST is helping convene an international public working group to develop a consensus framework to enable smart city solutions. Representatives from one foreign community told us that they are working to create an “umbrella” platform to unify all of the diverse systems developed by different projects with different business plans and timelines. Representatives from all three of the foreign communities noted that publicly funded projects often require that the data collected be open—that is, available for others to use, including the public or other vendors.
Representatives from two of these communities, as well as representatives from three of the domestic communities, noted that open data allow third-party entities access to information they can use to develop innovative solutions. Communities with limited resources—both financial and staff expertise—can face challenges integrating IoT projects due, in part, to the complexity and cross-cutting nature of these projects. As two academic representatives who work to support community projects noted, it is less resource-intensive for communities to deploy singular, discrete IoT projects than to deploy integrated projects that require time and resources to develop a holistic vision and business plan. Representatives from two communities and three other industry representatives, however, highlighted that leveraging value from public assets could help finance communities’ IoT projects, as well as help ensure their financial sustainability. For example, as discussed above, some communities are making public infrastructure available in return for payment or in-kind donations of technology equipment. Representatives from one of the foreign communities discussed a government push to design projects that are financially sustainable, that is, financially viable after grant funding runs out. U.S. federal efforts to support community collaboration have helped communities share their lessons learned on developing new business models, a process that also helps communities invest their funds more efficiently and effectively by reducing the unknowns and, in turn, the risk of investment. These rapidly developing technologies often also require new and unique expertise, particularly across sectors or disciplines, to deploy and maintain.
A local transportation department representative from one community noted that his department’s responsibilities are no longer confined to technical engineering skills but also involve expertise in information technology and data management, which requires additional training. Representatives from two domestic and three foreign communities discussed efforts to create new staff positions or retrain staff in existing positions. Representatives from a foreign community discussed creating a chief digital officer position whose responsibilities include using data to improve efficiency for citizens and combining knowledge in data analytics with public policy. Representatives from two domestic communities we reviewed highlighted the fact that IoT-related technologies are constantly evolving, an evolution that ultimately makes integrating projects even more challenging. And, as we have reported in the past, integrating technologies often requires multiple phases of testing, which takes time and resources and may ultimately require changes to the system or technology and retesting. Communities with limited resources may prefer to focus on discrete projects rather than risk investment in integrated projects with uncertain results. For example, representatives from two of the domestic communities spoke about project integration as the “next step” after they deploy their discrete projects. And representatives from two domestic communities, as well as three other industry and academic representatives, said that it is risky to develop a holistic, integrated project when the specific technology is not proven to be effective or could be completely different in 5 or 10 years. Selected federal efforts seek to help communities design replicable IoT projects and reduce the risk for subsequent communities. These and other efforts, while under way, are likely to take several years or longer to fully implement and to show measurable success.
We requested comments on a draft of this product from Commerce, DOE, HHS, DHS, DOJ, DOT, EPA, FCC, FTC, NSF, and OSTP. Commerce, DOE, FTC, NSF, and OSTP provided technical comments, which we incorporated as appropriate. HHS, DHS, DOJ, DOT, EPA, and FCC did not provide comments. We are sending copies of this report to the appropriate congressional committees, relevant federal agencies, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or members of your staff have questions about this report, please contact Mark Goldstein at (202) 512-2834 or goldsteinm@gao.gov or Nabajyoti Barkakati at (202) 512-4499 or barkakatin@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. In addition to the contacts named above, Susan Zimmerman (Assistant Director), Gretchen Snoey (Analyst in Charge), Eli Albagli, Edward Alexander, Jr., Ana Ivelisse Avilés, Brett Caloia, Joseph Cook, John de Ferrari, Camilo Flores, Sara Ann Moessbauer, Christopher Murray, Amy Rosewarne, and Andrew Stavisky made key contributions to this report. Tommy Baril, Jennifer Beddor, Leia Dickerson, Karen Doran, Lawrance Evans, Jr., Philip Farah, John Neumann, Malika Rice, Stephen Sanford, and Sarah Veale also made contributions to this report. Internet of Things: Enhanced Assessments and Guidance Are Needed to Address Security Risks in DOD. GAO-17-514SU. Washington, D.C.: June 7, 2017. Technology Assessment: Internet of Things: Status and implications of an increasingly connected world. GAO-17-75. Washington, D.C.: May 15, 2017. Health Care: Telehealth and Remote Patient Monitoring Use in Medicare and Selected Federal Programs. GAO-17-365. Washington, D.C.: April 14, 2017. Cybersecurity: Actions Needed to Strengthen U.S. Capabilities. GAO-17-440T. Washington, D.C.: February 14, 2017. 
Data Analytics and Innovation: Emerging Opportunities and Challenges. GAO-16-659SP. Washington, D.C.: September 20, 2016. Intelligent Transportation Systems: Urban and Rural Transit Providers Reported Benefits but Face Deployment Challenges. GAO-16-638. Washington, D.C.: June 21, 2016. Critical Infrastructure Protection: Measures Needed to Assess Agencies’ Promotion of the Cybersecurity Framework. GAO-16-152. Washington, D.C.: December 17, 2015. Critical Infrastructure Protection: Cybersecurity of the Nation’s Electricity Grid Requires Continued Attention. GAO-16-174T. Washington, D.C.: October 21, 2015. Intelligent Transportation Systems: Vehicle-to-Infrastructure Technologies Expected to Offer Benefits, but Deployment Challenges Exist. GAO-15-775. Washington, D.C.: September 15, 2015. Spectrum Management: FCC’s Use and Enforcement of Buildout Requirements. GAO-14-236. Washington, D.C.: February 26, 2014. Intelligent Transportation Systems: Vehicle-to-Vehicle Technologies Expected to Offer Safety Benefits, but Deployment Challenges Exist. GAO-14-13. Washington, D.C.: November 1, 2013.
Communities are increasingly deploying IoT devices generally with a goal of improving livability, management, service delivery, or competitiveness. GAO was asked to examine federal support for IoT and the use of IoT in communities. This report describes: (1) the kinds of efforts that selected federal agencies have undertaken to support IoT in communities and (2) how selected communities are using federal funds to deploy IoT projects. GAO reviewed documents and interviewed officials from 11 federal agencies identified as having a key role in supporting IoT in communities, including agencies that support research or community IoT efforts or that have direct authority over IoT issues. GAO interviewed a nongeneralizable sample of representatives from multiple stakeholder groups in four communities, selected to include a range of community sizes and locations and communities with projects that used federal support. GAO also reviewed relevant literature since 2013 and discussed federal efforts and community challenges with 11 stakeholders from academia and the private sector, selected to reflect a range of perspectives on IoT issues. GAO requested comments on a draft of this product from 11 federal agencies. Five agencies provided technical comments, which GAO incorporated as appropriate. Six agencies did not provide comments. The internet of things (IoT) generally refers to the technologies and devices that allow for the network connection and interaction of a wide array of devices, or “things.” Federal agencies that GAO reviewed are undertaking two kinds of efforts that support IoT in communities: Broad federal research and oversight of IoT-related technologies and issues: For example, 8 of the 11 agencies GAO reviewed are involved in broad research efforts, often on communication systems—both wired and wireless network systems. In addition, nine agencies have oversight efforts that include providing IoT-related guidance, often on data security and privacy.
More direct efforts to support communities, including funding community IoT projects (see figure) and fostering collaboration among the agencies and communities: For example, DOT recently awarded $40 million in federal funds to a community for a suite of “smart” projects related to improving surface transportation performance, and EPA awarded $40,000 each to two communities to develop strategies for deploying air quality sensors and managing the data collected from them. To foster such collaboration, in July 2016, the White House formed an interagency task force that has developed a draft Smart Cities and Communities Federal Strategic Plan. A final plan will be released in summer of 2017, according to federal officials. All four of the communities that GAO reviewed are using federal funds in combination with other resources, both financial and non-financial, to plan and deploy IoT projects. For example, one community used the $40 million DOT award to leverage, from community partners, more than $100 million in additional direct and in-kind contributions, such as research or equipment contributions. Communities discussed four main challenges to deploying IoT, including community sectors (e.g., transportation, energy, and public safety) that are siloed and proprietary systems that are not interoperable with one another.